[BUG] Looks like openai is using gpt-4 even though the config shows 3.5-turbo? #54
Comments
PS: I forgot to add - this LSP server is amazing. Thank you for making it!
The files https://github.com/leona/helix-gpt/blob/master/src/providers/openai.ts and github.ts still reference gpt-4 in this block:

```
const body = {
  max_tokens: 7909,
  model: "gpt-4",
  n: 1,
  stream: false,
  temperature: 0.1,
  top_p: 1,
  messages
}
```

I modified them in my local copy to use the models passed via --openaiModel and --copilotModel, and the issue with gpt-4 went away. I was also experiencing the timeout issues mentioned in #18, and they went away as well with the modification to github.ts. Those lines also need to be modified to account for the lower max_tokens allowed in gpt-3.5.
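The fix described above can be sketched roughly as follows: read the model and token limit from configuration instead of hardcoding them in the request body. This is a hypothetical illustration, not the actual helix-gpt code; the `CompletionConfig` interface and `buildRequestBody` name are assumptions.

```typescript
// Hypothetical sketch of parameterizing the OpenAI request body.
// The field names here (openaiModel, openaiMaxTokens) are illustrative
// stand-ins for whatever helix-gpt derives from --openaiModel.
interface CompletionConfig {
  openaiModel: string;
  openaiMaxTokens: number;
}

function buildRequestBody(config: CompletionConfig, messages: object[]) {
  return {
    max_tokens: config.openaiMaxTokens, // gpt-3.5-turbo allows fewer tokens than gpt-4
    model: config.openaiModel,          // previously hardcoded to "gpt-4"
    n: 1,
    stream: false,
    temperature: 0.1,
    top_p: 1,
    messages,
  };
}

// Usage: a gpt-3.5-turbo configuration with a conservative token limit.
const body = buildRequestBody(
  { openaiModel: "gpt-3.5-turbo", openaiMaxTokens: 4096 },
  [{ role: "user", content: "hello" }]
);
```

The point of the change is simply that both the model name and the token ceiling vary together, so both should come from the same configuration source.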
@salva-ferrer Thanks a lot. Does dpc@e5356d6 look OK?
@dpc yes! That should fix the issues we experienced.
It mostly did, yes. Thanks a lot!
helix-editor version
helix 24.3 (b974716b)
helix-gpt version
helix-gpt-0.31-x86_64-linux
Describe the bug
I have the following config:
On the OpenAI usage dashboard, I'm noticing my gpt-4 usage increasing as I use helix.
helix-gpt logs
The only possibly relevant logs are as follows:
helix logs
No relevant helix logs
Does helix-gpt default to gpt-4 for certain actions? I'm really testing documentation generation.