Failed with deepseek-r1 #18
Comments
Have the same issue ~
For me, I was using the 8b model, and I did try to add …
Strange. I assume this is an issue with the local model failing to produce a JSON object with the correct key. This can indeed happen. I just tried with 1.5b and it was OK. But I could absolutely see cases where it fails to produce a JSON object with the correct key. I will try to improve the prompt now.
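For context, here is a minimal sketch of the failure mode being described, assuming the graph uses langchain_ollama's ChatOllama in JSON mode; the model name, prompt, and the expected "query" key are illustrative assumptions, not the repository's exact code:

```python
# Hypothetical reproduction: ask a local model for JSON and guard against a
# reply that is empty or missing the expected key.
import json

from langchain_ollama import ChatOllama  # assumed client, per the project's stack

llm = ChatOllama(model="deepseek-r1:7b", format="json", temperature=0)
result = llm.invoke("Return a JSON object with a single key 'query' for a web search.")

try:
    parsed = json.loads(result.content)
    query = parsed["query"]          # KeyError if the model used a different key
except (json.JSONDecodeError, KeyError):
    query = "fallback search query"  # degrade gracefully instead of crashing the graph
print(query)
```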
@meefen if easy, can you add a print statement in … I want to see what you are getting.
FWIW, I also just updated the prompt. Pls pull and see if you still see this problem:
Same issue for me. It's good when using llama3.1:8b, but failed when using deepseek-r1:7b or 1.5b. Same error:
2025-01-31T06:17:13.432797Z [error ] Background run failed [langgraph_api.queue] api_variant=local_dev run_attempt=1 run_created_at=2025-01-31T06:16:56.643864+00:00 run_ended_at=2025-01-31T06:17:13.431770+00:00 run_exec_ms=16337
Did you pull the latest? If so, can you log … and send it here?
Please check the following:
2025-01-31T06:35:06.092941Z [error ] Traceback (most recent call last):
This means that there is no output from your local LLM call.
1/ Are you sure that the Ollama app is running?
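As a quick sanity check for that question, one way to confirm the local Ollama server is reachable is to hit its /api/tags endpoint; the default port 11434 and the use of httpx here are assumptions for illustration, not part of the thread:

```python
# Hypothetical check that the Ollama server is up and the model is pulled.
import httpx

resp = httpx.get("http://127.0.0.1:11434/api/tags", timeout=5)
resp.raise_for_status()
models = [m["name"] for m in resp.json().get("models", [])]
print("Ollama is running; local models:", models)  # expect e.g. "deepseek-r1:7b"
```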
Yes, Ollama is running.
I have the same issue. All other Ollama models work except for deepseek-r1. I am running deepseek-r1:latest, the 7-billion-parameter model. As for the log output, it is the following:
}
As you can see, it is all empty, exactly as pasted here.
Very strange. Small point: you mean DeepSeek 8b, right? What's the exact model name you are specifying?
It is the 7b model (4.7 GB); there is an 8b model, but it is not the one I am using.
Just as a side note: I don't see anywhere where you provide the API endpoint for Ollama. I had to include it manually to get the code working: base_url="http://127.0.0.1:11434"
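For anyone hitting the same thing, a minimal sketch of passing the endpoint explicitly, assuming the model is constructed with langchain_ollama's ChatOllama (the model name here is just an example):

```python
# Hypothetical explicit endpoint configuration; ChatOllama otherwise defaults
# to http://localhost:11434, which may not resolve the same way on every OS.
from langchain_ollama import ChatOllama

llm = ChatOllama(
    model="deepseek-r1:7b",
    base_url="http://127.0.0.1:11434",  # point at the locally running Ollama server
    format="json",
)
print(llm.invoke("Reply with a JSON object containing a 'greeting' key.").content)
```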
I'm running the app on Mac and have not needed to supply it. I will check with the Ollama folks.
I am running on a Windows system; maybe that is the difference. I kept getting a ConnectError: WinError 10049. Here is the output of ChatGPT regarding the issue:

OK, let me see: "format_exc_info" should be removed for pretty exceptions. The crucial issue is the httpx ConnectError: WinError 10049, indicating an incorrect or missing IP address configuration.

Pinpointing the issue: I'm digging into the httpx ConnectError: WinError 10049 error. It's likely due to an incorrect IP address in the httpx request or a misconfigured host.

Fixing issues: To address the structlog warning, remove format_exc_info from the processors. For the httpx connection error, validate the host and port parameters.

Fixing the address: To resolve the httpx connection error, ensure the host address is correctly specified in the langchain_ollama config, possibly in the ollama client instantiation or relevant methods in langchain_ollama/chat_models.py.
Thank you for this repository. It is great!
Ollama was written for Mac, so everything will work correctly for it out of the box. |
How do I run ChatOllama outside of the app? I got the same issue on Mac, and Ollama was running.
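To answer that question, a small self-contained script like the sketch below can call ChatOllama directly from a Python shell, outside the LangGraph app, and isolate whether the model itself returns empty content (the model name and prompt are illustrative assumptions):

```python
# Hypothetical standalone check, run outside the app, to see exactly what
# deepseek-r1 returns before any graph-side parsing happens.
from langchain_ollama import ChatOllama

llm = ChatOllama(model="deepseek-r1:7b", format="json")
result = llm.invoke("Return a JSON object with a single key 'query'.")
print(repr(result.content))  # an empty string means the model, not the graph, is the problem
```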
Thanks for the nice repo!
I was able to run it with llama3.1 but encountered an error with generate_query when running on deepseek-r1. Any reason why?