Add new model type to use custom OpenAI api compatible servers #1692
Conversation
These other servers would all expose an OpenAI-compatible API then? Do you have a specific example you're testing with?
I'm not sure I understand the question. There is a whole range of software that uses the OpenAI API as a reference implementation. The subject of this pull request is to let GPT4All act as a client of those servers, just as it already does with OpenAI. (I'm sorry if my English is poor; I'm not strong in foreign languages.)
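The point is that these servers share OpenAI's request shape, so a client only needs a configurable base URL and model name. A minimal sketch of what such a request looks like; the local URL, port, and function name are illustrative, not taken from the PR:

```python
import json
import urllib.request

def build_chat_request(base_url, model, prompt):
    """Build an OpenAI-style chat completion request.

    The same payload works against OpenAI, vLLM, or llama.cpp's
    server -- only base_url and model differ per deployment.
    """
    payload = {
        "model": model,  # some single-model servers ignore this field
        "messages": [{"role": "user", "content": prompt}],
    }
    return urllib.request.Request(
        url=base_url.rstrip("/") + "/chat/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

# Example: point at a hypothetical local llama.cpp server instead of api.openai.com
req = build_chat_request("http://localhost:8080/v1", "", "Hello")
```

Nothing here is sent over the network; the sketch only shows that swapping providers reduces to swapping the base URL.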
// else if (openaiModel.text === "")
//     openaiBase.showError();
We should do something with this commented-out code - either remove it if it isn't needed, or uncomment it if it is.
You're right.
Most implementations force the model name to match between the client and the server.
But in my own implementation I prefer to leave this blank, because I plan to serve only one model per instance, so the model name is not important.
So it is probably better to let the user choose.
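Under that reasoning, the validation the commented-out code hints at would require the base URL but deliberately accept a blank model name. A sketch of that policy; the function and message strings are hypothetical, not from the PR:

```python
def validate_custom_model(base_url: str, model: str) -> list[str]:
    """Validate settings for a custom OpenAI-compatible endpoint.

    The model name may be blank: a single-model server (e.g. one
    llama.cpp instance) typically ignores it, so only the base URL
    is required.
    """
    errors = []
    if not base_url.strip():
        errors.append("API base URL is required")
    elif not base_url.startswith(("http://", "https://")):
        errors.append("API base URL must start with http:// or https://")
    # intentionally no check on `model` -- blank is allowed
    return errors
```

This keeps the error path for the base URL (the case the commented-out QML guarded) while letting the user decide whether a model name matters for their server.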
I think I'm asking for something that is a subset of this: #1955 🙏
Describe your changes
I added a new model type similar to OpenAI, but with the possibility of defining the API entry point and the model name, in order to allow the use of custom OpenAI-compatible servers such as those of vLLM or llama.cpp.
Issue ticket number and link
Checklist before requesting a review
Notes
Model settings are stored in chatgpt-custom.txt, which contains 3 lines.
A future release may use QSettings, a JSON library, or similar to store these values in a dict, for safety.
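To illustrate that follow-up idea, here is a sketch of a keyed JSON store; the key names are an assumption for illustration (the PR does not name its three lines), and the file name is changed to match the format:

```python
import json
import tempfile
from pathlib import Path

# Hypothetical keys -- a keyed format removes the dependence on
# the line order of the current three-line text file.
DEFAULTS = {"api_key": "", "base_url": "", "model": ""}

def save_settings(path: Path, settings: dict) -> None:
    # Merge over defaults so every key is always present on disk.
    path.write_text(json.dumps({**DEFAULTS, **settings}, indent=2))

def load_settings(path: Path) -> dict:
    if not path.exists():
        return dict(DEFAULTS)
    return {**DEFAULTS, **json.loads(path.read_text())}

# Round-trip example in a temporary directory
cfg = Path(tempfile.mkdtemp()) / "chatgpt-custom.json"
save_settings(cfg, {"base_url": "http://localhost:8080/v1"})
```

Missing keys fall back to defaults rather than crashing the parser, which is the "safety" the note is after; QSettings would give the same property with less code in a Qt app.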