Calculated tokens much higher than actual #6
As a workaround, I've noticed that when you request too many tokens, you get a 400 error very quickly, for example: "This model's maximum context length is 8192 tokens. However, you requested 13674 tokens (7469 in the messages, 6205 in the completion). Please reduce the length of the messages or completion." So I parse out the messages token count and resubmit with a max_tokens calculated as 8192 - 7469 - 1.
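A minimal sketch of that retry workaround, assuming the error body contains the message quoted above (the regex, the CONTEXT_LIMIT constant, and the function name are illustrative, not part of the library):

```ts
const CONTEXT_LIMIT = 8192;

// Pull the "(NNNN in the messages" figure out of the 400 error text and
// compute a completion budget that fits in the remaining context window.
function maxTokensFromError(errorMessage: string): number | undefined {
  const match = errorMessage.match(/\((\d+) in the messages/);
  if (!match) return undefined;
  const promptTokens = Number(match[1]);
  // Leave the rest of the window for the completion, minus one token of slack.
  return CONTEXT_LIMIT - promptTokens - 1;
}

// On a 400 "maximum context length" error, resubmit with the adjusted budget.
const retryMaxTokens = maxTokensFromError(
  "This model's maximum context length is 8192 tokens. However, you requested 13674 tokens (7469 in the messages, 6205 in the completion)."
);
console.log(retryMaxTokens); // 722
```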
Hi @Qarj! Thanks for flagging this problem.
Thanks very much for addressing this! I will definitely use this feature in v2 when it is out.
🎉 This issue has been resolved in version 2.0.0-beta.1 🎉 The release is available on: Your semantic-release bot 📦🚀
Thanks very much for this! Am using it already :)
So it seems it is much closer now to the actual tokens: in a test I did, the prompt was calculated as 998 tokens by the library but 1003 tokens according to OpenAI. I suspect that if we allow a 50-token margin, then our completion token requests should always be within the limit.
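A rough sketch of that margin idea, assuming the library's encode export for counting prompt tokens (the constant names are illustrative):

```ts
import { encode } from 'gpt-tokenizer';

const CONTEXT_LIMIT = 8192;
const SAFETY_MARGIN = 50; // headroom for tokens the API adds on top of the raw prompt

// Budget for max_tokens: whatever is left of the context window after the
// prompt, minus the safety margin.
function completionBudget(prompt: string): number {
  const promptTokens = encode(prompt).length;
  return CONTEXT_LIMIT - promptTokens - SAFETY_MARGIN;
}
```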
Interesting. I wonder if there are 5 additional tokens that are set by OpenAI for each request? The algorithm should be exactly the same as OpenAI's. Thanks for investigating.
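For reference, OpenAI's cookbook describes a fixed per-message overhead for chat models (roughly 3 tokens per message plus 3 tokens to prime the assistant's reply on gpt-4-style models), which could account for a small constant difference like this. A hedged sketch of that counting scheme, using the library's encode export and the cookbook's gpt-4 constants:

```ts
import { encode } from 'gpt-tokenizer';

interface ChatMessage {
  role: string;
  content: string;
  name?: string;
}

// Per-message overhead per the OpenAI cookbook for gpt-4-style chat models.
const TOKENS_PER_MESSAGE = 3;
const TOKENS_PER_NAME = 1;
const REPLY_PRIMING_TOKENS = 3; // every reply is primed with <|im_start|>assistant

function countChatTokens(messages: ChatMessage[]): number {
  let total = REPLY_PRIMING_TOKENS;
  for (const message of messages) {
    total += TOKENS_PER_MESSAGE;
    total += encode(message.role).length;
    total += encode(message.content).length;
    if (message.name) total += encode(message.name).length + TOKENS_PER_NAME;
  }
  return total;
}
```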
🎉 This issue has been resolved in version 2.0.0 🎉 The release is available on: Your semantic-release bot 📦🚀
@Qarj: added the new encodeChat function, which should return correct values for chats!
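A minimal usage sketch of encodeChat (the message contents and model name here are illustrative; the length of the returned token array is what should now line up with the prompt_tokens the API reports):

```ts
import { encodeChat } from 'gpt-tokenizer';

const chat = [
  { role: 'system', content: 'You are a helpful assistant.' },
  { role: 'user', content: 'Count my tokens, please.' },
] as const;

// encodeChat accounts for the chat-format overhead, unlike encoding the raw
// concatenated text.
const tokens = encodeChat(chat, 'gpt-4');
console.log(tokens.length);
```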
Thanks very much for this! :) |
Thanks for this. I've noticed a weird issue, though, both with this library and with the official code from OpenAI that I found a while back, before gpt-4 came out.
What is happening is that the tokens calculated by this tool are much higher than what the OpenAI API reports in the completion. For example, a prompt I just submitted to gpt-4 was calculated as 7810 tokens by this library, but when I got the completion from OpenAI it told me my prompt had 5423 tokens. I'm not sure if you have also noticed something similar? The prompt I'm submitting is primarily Node.js code.