Trying to use this with local LLMs but there is no easy way to figure it out. #1295
Comments
Hey @dreemur99, you can select the existing models in the dropdown; additionally, you can also add models hosted on Hugging Face & Replicate. We're working to add custom local LLM support in the upcoming v0.0.14 release.
It's out, guys, you can go and try it out!

docker-compose.yaml:
version: '3.8'
super__postgres:
proxy:
networks:

Dockerfile:
FROM nvidia/cuda:12.1.0-devel-ubuntu22.04 AS compile-image
RUN apt-get update && apt-get install --no-install-recommends -y
RUN apt-get update &&
RUN python3 -m venv /opt/venv
COPY requirements.txt .
RUN pip3 install -r requirements.txt
COPY . .
RUN chmod +x ./entrypoint.sh ./wait-for-it.sh ./install_tool_dependencies.sh ./entrypoint_celery.sh
FROM nvidia/cuda:12.1.0-devel-ubuntu22.04 AS build-image
RUN apt-get update && apt-get install --no-install-recommends -y
ENV LLAMA_CUBLAS=1
RUN apt-get update &&
COPY --from=compile-image /opt/venv /opt/venv
ENV PATH="/opt/venv/bin:$PATH"
EXPOSE 8001
I'm trying to figure out how to run it on Windows 11. ChatGPT told me to try this:
celery: ... etc., but I get a 404 error:
backend-1 | INFO: 172.27.0.7:45762 - "GET /models_controller/test_local_llm HTTP/1.0" 404 Not Found
What am I missing here?
I have the same issue on Linux with the exact same model!
I think the LLM isn't compatible!
On my end, in the backend, I just saw this error, but I don't know how to solve it:
superagi-backend-1 | gguf_init_from_file: invalid magic number 00000000
I do not see a dropdown to select a local LLM. I guess it's because no model is selected? Any idea how to get it working? I tried adding multiple GGUF files to the folder and adding it to the volumes.
OK, I see what I did wrong. You have to mount the GGUF file to local_model_path, not mount the directory to local_model_path.
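For anyone hitting the same wall, here is a minimal sketch of the relevant volume lines, assuming the model file needs to be visible to both the backend and celery containers (the service names and the example host path are taken from this thread; adjust them to your setup):

# docker-compose-gpu.yml (excerpt): bind-mount the model FILE itself onto /app/local_model_path.
# Note: if the host path does not exist, Docker creates an empty directory there, and mounting
# a directory onto a file path inside the container then fails.
services:
  backend:
    volumes:
      - /home/kali/llms/dolphin-2-5-mixtral-8x7b-Q2_K.gguf:/app/local_model_path
  celery:
    volumes:
      - /home/kali/llms/dolphin-2-5-mixtral-8x7b-Q2_K.gguf:/app/local_model_path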
Dude, I am a bit confused, can you help me out with this? That would be great, since I haven't gotten an answer or any help in any forum. The thing is, my LLM is in the path /home/cronos/llms. This is the docker-compose.yaml:
version: '3.8'
I don't get what I am doing wrong. I first did exactly what was shown in the video, but that wasn't working for me, so I researched but couldn't find any answer anywhere, and I commented on some issues because some people had almost the same error as me, thinking they could help, but got no answer. I just can't anymore, I have spent about 30 hours trying to get it to work!
Look at the comments on the YouTube video and it turns out that no one there was able to get it to work.
I think that if they manage to make it work locally, even at the level of selecting a file from the computer, this is what will make the big leap in the field!
I hope someone is working on it these days.
I mean that a simple user without technological knowledge should be able to choose a file that contains a model, select it from the computer, and have it work on all types of platforms: Windows/Linux/Mac, etc.
I don't get it. You can pick "local llm" when adding a new model, but you still can't point it to the path of the local model you have? What's changed? Am I missing something, because I don't see any info on the main page.
@yf007
lol, on my end this file with "gpu" at the end doesn't exist. Let me check if there is an update available; I could swear I am up to date!
Yeah, I have already seen that and was wondering, because so many people were excited about it! But there's also no video, not even from SuperAGI, on how to get it to work locally!
Hehe, lol, I was trying to get SuperAGI up to date and saw there isn't any button or command to update, so I just replaced all the files with the current ones from the repository! Huh.
I now hope that it works!
Lol, now I got this error:
Error response from daemon: could not select device driver "nvidia" with capabilities: [[gpu]]
What is that? I don't get why this happened!?
version: '3.8'
celery:
  volumes:
    - ./gui:/app
    - /app/node_modules/
    - /app/.next/
super__redis:
  # uncomment to expose redis port to host
  # ports:
  #   - "6379:6379"
super__postgres:
  # uncomment to expose postgres port to host
  # ports:
  #   - "5432:5432"
proxy:
  networks:
That's the docker-compose-gpu.yml.
The LLM is stored in /home/kali/llms/
So now, finally, an error. The model is in the directory /home/kali/llms:
Error response from daemon: failed to create task for container: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: error mounting "/home/kali/llms/dolphin-2-5-mixtral-8x7b-Q2_K.gguf" to rootfs at "/app/local_model_path": mount /home/kali/llms/dolphin-2-5-mixtral-8x7b-Q2_K.gguf:/app/local_model_path (via /proc/self/fd/6), flags: 0x5000: not a directory: unknown: Are you trying to mount a directory onto a file (or vice-versa)? Check if the specified host path exists and is the expected type
Someone from support might be able to help; please feel free. I am using Ubuntu Server 22.04.3!
@alfi4000 hey, are you sure that this path exists: "/home/kali/llms/dolphin-2-5-mixtral-8x7b-Q2_K.gguf"?
Do you have the model inside the SuperAGI folder? I have it outside of the SuperAGI folder! That might be the thing I am doing wrong!
I hope I am not being too confusing!
I just saw that in the SuperAGI folder it has created that path, just without the model: exactly the path I wrote in the docker-compose file. I said to myself, what the f*** is going on here!
I am about done with it. I have now wasted an hour trying a few things, but no solution, so maybe someone here will answer, or I will check the repo in a few months to see if there is an update that makes this easier. I will check for answers over the next few days; then, if no one says anything, I am done with it!
No, your model doesn't need to be in the SuperAGI folder, it can be anywhere.
I think the Vicuna model might be the only one supported yet, or I did something wrong, I don't know!
@alfi4000
Look at the second screenshot: I had changed the path to the correct one!
I will try 2 different versions of that model and get back to you, just to make sure that the model isn't the problem!
Here is a screen video recording that might help you: https://youtu.be/_u-8bwoKHQc
In the video I saw that you were getting the error: str can't be interpreted as integer
I have done it: I replaced the whole SuperAGI folder, with every file in it, with the latest repo files, but look at the log yourself:
superagi-backend-1 | llama_model_loader: loaded meta data with 25 key-value pairs and 995 tensors from /app/local_model_path (version unknown)
Might the llama-cpp version be the problem? https://github.com/abetlen/llama-cpp-python/releases shows the latest is 0.2.31, not like the 2.7 in SuperAGI.
Lol, with this Vicuna model it worked: https://huggingface.co/TheBloke/vicuna-13B-v1.5-GGUF
This is what I learned so far... and nothing works. Good luck to anyone willing to carry the torch forward until someone gets it running on Windows. See my dev comment below.
Windows setup:
docker compose -f docker-compose-gpu.yml up --build
---- EXAMPLE of docker-compose.yaml ---- (example from 1/28/24 for Windows 11):
version: '3.8'
I seriously recommend an easier way to VIEW local models within the Add Models tab (populate it from the local_model_path dir!). Even if unsupported, it should list them in the dropdown. The TEST button errors out for me, so I spent 2 hours attempting to debug, but this is ridiculous. One last attempt will be to try a third LLM...
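If it helps anyone else on Windows, here is a hypothetical sketch of the same volume mapping with a Windows host path (the C:/llms/... path and the model filename are assumptions, not taken from this thread; forward slashes are used to avoid YAML escaping issues, and Docker Desktop accepts drive-letter paths in this form):

# hypothetical docker-compose-gpu.yml excerpt for Windows 11 + Docker Desktop:
# bind-mount the .gguf file itself, not its folder, onto /app/local_model_path
services:
  backend:
    volumes:
      - C:/llms/vicuna-13b-v1.5.Q4_K_M.gguf:/app/local_model_path
  celery:
    volumes:
      - C:/llms/vicuna-13b-v1.5.Q4_K_M.gguf:/app/local_model_path

Then bring it up with the command from the comment above: docker compose -f docker-compose-gpu.yml up --build.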
models_controller.py:203 Error:
The option for setting up your own model is there, but you can't point it to an actual directory or LLM file directly? Why not?