GPT4ALL support or open source models #49
Comments
Since gpt4all now has a local server mode that emulates OpenAI's API, shouldn't we be able to just overwrite ai.py's OpenAI calls with Python calls to the gpt4all local server instead? Or is it more complicated than that?
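A minimal sketch of that idea, assuming the openai 0.x Python SDK and that the GPT4All server is listening on localhost:4891 (the port and model name below are assumptions, not settings from this repo):

```python
# Point the openai client at the GPT4All local server instead of api.openai.com.
import openai

openai.api_base = "http://localhost:4891/v1"  # assumed local server address
openai.api_key = "not-needed-for-local"       # a local server typically ignores the key

response = openai.ChatCompletion.create(
    model="gpt4all-j-v1.3-groovy",  # whatever model name the local server exposes
    messages=[{"role": "user", "content": "Hello from a local model"}],
)
print(response["choices"][0]["message"]["content"])
```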
I think you are on the right track @teddybear082
See #63 for Abstraction of the
gpt-llama claims to be a drop-in replacement for ChatGPT applications. So is there a way to change the API URL to localhost? A sketch of one approach follows below.
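One hedged way to make the URL switchable without editing every call site: read the base URL from an environment variable. The variable name and default below are assumptions for illustration, not part of this repo's configuration.

```python
# Make the API endpoint configurable: fall back to OpenAI unless an
# override (e.g. a gpt-llama or gpt4all local server) is provided.
import os
import openai

openai.api_base = os.environ.get("LOCAL_LLM_API_BASE", "https://api.openai.com/v1")
```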
Maybe this should be a discussion rather than an issue; feel free to start one if you are still interested.
OpenAI's GPT-3.5 breaks frequently and is low quality in general.
Falcon, Vicuna, Hermes, and others should be supported: they are open source and free, and moving away from paid closed-source models is good practice and opens the application to a huge user base that wants free access to these tools.