This is frapuse (short for frappante muse), an Android app project that I created as my final assignment for the Module 3 - Android App Development course at Syntax Institut.
- The app is a mobile user interface for the text-generation-webui, where you can chat with a locally hosted LLM.
- The app is a minimal user interface for the stable-diffusion-webui. You can either use it through the keyword `generate` followed by the desired prompt for image generation, or use it directly from a dedicated fragment.
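The keyword routing described above can be sketched as follows. This is a simplified, framework-free illustration, not the app's actual code; the class and function names are hypothetical, only the `generate` keyword comes from the description above.

```kotlin
// Sketch: route a chat message either to the LLM or to the
// image-generation backend, based on the "generate" keyword.
sealed class ChatAction {
    data class TextPrompt(val prompt: String) : ChatAction()
    data class ImagePrompt(val prompt: String) : ChatAction()
}

fun routeMessage(message: String): ChatAction {
    val trimmed = message.trim()
    // If the message starts with "generate", everything after the
    // keyword is treated as the image-generation prompt.
    return if (trimmed.startsWith("generate ", ignoreCase = true)) {
        ChatAction.ImagePrompt(trimmed.substring("generate ".length).trim())
    } else {
        ChatAction.TextPrompt(trimmed)
    }
}
```
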
- The app allows uploading PDFs, which are stored locally with Elasticsearch and accessed through a Haystack pipeline. To activate the extension, you need to enable its checkbox.
- The app uses Kotlin as the programming language and follows the MVVM architecture pattern.

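The MVVM split can be illustrated with a minimal, dependency-free sketch. All class and property names here are illustrative, not the app's actual classes; in the real app, Android's ViewModel and LiveData/Flow would play the observer role.

```kotlin
// Minimal MVVM sketch without Android dependencies: the ViewModel
// exposes observable state that the View (a Fragment in the app)
// subscribes to, while the Repository hides the data source.

class ChatRepository {
    // In the real app this would call the text-generation-webui API.
    fun fetchReply(prompt: String): String = "echo: $prompt"
}

class ChatViewModel(private val repo: ChatRepository = ChatRepository()) {
    private val observers = mutableListOf<(List<String>) -> Unit>()
    private val messages = mutableListOf<String>()

    // The View registers a callback; LiveData/StateFlow fills this role on Android.
    fun observeMessages(observer: (List<String>) -> Unit) {
        observers += observer
        observer(messages.toList())
    }

    fun sendPrompt(prompt: String) {
        messages += prompt
        messages += repo.fetchReply(prompt)
        observers.forEach { it(messages.toList()) }
    }
}
```
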
Prerequisites
- In order to use the text generation capabilities, you need to install oobabooga/text-generation-webui.
  - Installation
  - Follow this issue to set up the old API extension (required for the app to work!)
- For image generation, you also need to install AUTOMATIC1111/stable-diffusion-webui.
- If you want to extract information from PDFs and chat about their content with your LLM, you also need to install deepset-ai/haystack. (Note: to use this feature, you will probably need to adjust the pipeline; without previous experience this can be quite tedious.)
- To run the app, you need Android Studio installed on your computer.
- Clone this repository and open it in Android Studio.
- Place the links for each API inside the corresponding file.
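The base URLs could live in a simple constants object like the sketch below. The object name, file placement, and placeholder addresses are all assumptions; check the repository for the actual location. The default ports (5000 for the old text-generation-webui API, 7860 for stable-diffusion-webui, 8000 for the Haystack REST API) are the upstream projects' usual defaults, but verify them against your own setup.

```kotlin
// Hypothetical sketch of where the API links could be placed.
// Replace the placeholder hosts with the addresses of your locally
// running text-generation-webui, stable-diffusion-webui, and Haystack.
object ApiConfig {
    // text-generation-webui (old API extension), typically port 5000
    const val TEXT_GEN_BASE_URL = "http://192.168.0.10:5000/"
    // AUTOMATIC1111/stable-diffusion-webui, typically port 7860
    const val STABLE_DIFFUSION_BASE_URL = "http://192.168.0.10:7860/"
    // Haystack REST API, typically port 8000
    const val HAYSTACK_BASE_URL = "http://192.168.0.10:8000/"
}
```
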
- Run the app on your Android phone.
- IMPORTANT: For this app to work, you need to follow this issue from oobabooga/text-generation-webui and install/enable the old API extension.
- IMPORTANT: To clear the chat history, long-press the send button next to the prompt text field in the chat.
- Sometimes, when the phone is under heavy load, some of the first tokens are omitted.
- The prompt template is adjusted to Vicuna (most of the testing was performed with wizard-vicuna and its variants). Changing the template is a tedious task and has to be done directly inside the code. Other models can still be used but won't perform at their best. (Fix planned for the next update.)
- Currently it is only possible to stream the response. (Note: you still need to adjust the blocking API address to make the app work!)
- The prompt examples for image generation within the chat must be adjusted inside the code.
- It is not possible to save generated images larger than 768x768. If the width or height is adjusted beyond this value, the other must be adjusted accordingly.
- The code is not optimized and can be a bit messy in places. Please excuse this; I am still learning and improving my code style and habits.
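For reference, the Vicuna-style prompt format mentioned in the notes above can be sketched like this. This is a hedged illustration of the commonly used Vicuna "USER:/ASSISTANT:" template; the function name and system message are assumptions, and the app hard-codes its own variant in the source.

```kotlin
// Sketch of a Vicuna-style prompt template (the "USER:/ASSISTANT:"
// format used by vicuna and wizard-vicuna models). Illustrative only;
// the app's actual template lives inside its code.
fun buildVicunaPrompt(history: List<Pair<String, String>>, userMessage: String): String {
    val system = "A chat between a curious user and an artificial intelligence assistant. " +
        "The assistant gives helpful, detailed, and polite answers to the user's questions."
    val sb = StringBuilder(system).append("\n\n")
    for ((user, assistant) in history) {
        sb.append("USER: ").append(user).append("\n")
        sb.append("ASSISTANT: ").append(assistant).append("\n")
    }
    // Leave the final ASSISTANT: open so the model completes it.
    sb.append("USER: ").append(userMessage).append("\n")
    sb.append("ASSISTANT:")
    return sb.toString()
}
```
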