[please test] BYOK with ollama #342

With the ollama project it's easy to host our own AI models.
You can set up bring-your-own-key (BYOK) to connect to an ollama server and see if you can use StarCoder2 for code completion and llama models for chat.
Does it work at all? What do we need to fix to make it better?
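For reference, the BYOK settings go into the bring-your-own-key.yaml file from the docs linked in the comments below. The sketch that follows is only an assumption-based illustration: chat_endpoint is the one field that actually comes up in this thread, the other field names (chat_apikey, chat_model, completion_*) are taken from the BYOK documentation, and the model tags are just examples of models you might have pulled into ollama.

# bring-your-own-key.yaml -- illustrative sketch, not a verified config
cloud_name: ollama                                            # free-form label for this provider
chat_endpoint: "http://localhost:11434/v1/chat/completions"   # ollama's OpenAI-compatible chat API
chat_apikey: "ollama"                                         # ollama ignores the key, so any placeholder works
chat_model: "llama3.2:1b-instruct-q8_0"                       # any chat model already pulled into ollama
completion_endpoint: "http://localhost:11434/v1/completions"  # assumed: ollama's OpenAI-style completions route
completion_apikey: "ollama"
completion_model: "starcoder2:3b"                             # StarCoder2 for code completion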
Comments
@olegklimov I would like to take this up. Can you please share some docs/examples of how this can be done? Do we need to test the integration here, or make changes as well to make it work?
Oh, here https://docs.refact.ai/byok/ you can test if we have documentation that is any good :D
Hi @pardeep-singh

chat_endpoint: "http://localhost:11434/v1/chat/completions"

Error: Bad Request

Why is it sending to a different port?
The VSCode extension talks to … I think the fastest way we can fix this is to reproduce your setup. So you have Windows and ollama with llama3.2:1b-instruct-q8_0, right? We'll try it.