Is there any way to route the plugin to a privateGPT instance hosted locally on the same computer or on a local network? #130
Comments
Seconded, but for something like Oobabooga (or Ollama once they get on Windows)... or stronger integration with GPT4All if its limits can be tolerated. Not sure about LocalChat and Local AI.
llama.cpp's server provides a more or less OpenAI-compatible API, so making the API URL configurable might be all that's needed.
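For anyone who wants to try that route, here is a minimal sketch of standing up such an endpoint. It assumes a recent llama.cpp build where the server binary is named llama-server and a GGUF model is already on disk; the model path and port below are placeholders.

```sh
# Serve the model via llama.cpp's built-in HTTP server,
# which exposes an OpenAI-style /v1/chat/completions route.
./llama-server -m ./models/some-model.gguf --port 8080
```

The extension's endpoint setting would then point at http://localhost:8080/v1, similar to the Ollama example below.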
This should already be compatible with an OpenAI-compatible endpoint, even one running locally. I do have Ollama but haven't gotten around to testing it yet; it should just be a matter of updating the settings to point at a local API endpoint. If someone wanted to test and let me know whether there are any limitations or anything else needed, that would be great!
Can confirm this works with:

```json
"openAIKey": "",
"openAICompletionEngine": "llama3.2",
"chatCompletionEndpoint": "http://localhost:11434/v1",
```

Don't forget to pull the model you want to use separately. I haven't done any extensive testing because the only thing I'd want a plugin like this for is generating summaries, and #75 is needed for that.
I love the idea of using this plugin with an offline LLM instead of giving my data to the cloud. Are there any suggestions on where to look in this code, and other resources, to kludge something together to use privateGPT instead of OpenAI?
I didn't see anyone else asking this and hope this is the right spot to ask it.
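If the local privateGPT instance exposes an OpenAI-style API (recent versions document one, served on port 8001 by default, though that is worth verifying against your install), the same settings approach as in the Ollama comment above should apply. A rough connectivity check could look like the following; the port and request body are placeholders to adjust for the actual install:

```sh
# Placeholder check against a locally hosted privateGPT server;
# adjust the port and payload to match your version's API.
curl http://localhost:8001/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{"messages": [{"role": "user", "content": "hello"}]}'
```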