This repository has been archived by the owner on Mar 6, 2024. It is now read-only.
Support gpt 3.5 turbo 16k model (#424)
`TokenLimits` is the only place that needed to be modified; the token limits have been set accordingly. Closes #406

### Summary by CodeRabbit

**New Feature:**
- Added support for the "gpt-3.5-turbo-16k" model in the `TokenLimits` class.
- Set the `maxTokens` limit to 16300 and the `responseTokens` limit to 3000 for the new model.

> 🎉 With tokens aplenty, we set the stage,
> For the "gpt-3.5-turbo-16k" to engage.
> More power, more wisdom, in every page,
> A new chapter begins, let's turn the page! 🚀
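The change can be sketched roughly as follows. The class shape and the limits for the other models are illustrative assumptions; only the `gpt-3.5-turbo-16k` values (`maxTokens` 16300, `responseTokens` 3000) come from this commit.

```typescript
// Sketch of the TokenLimits change (class shape and non-16k values are
// assumptions for illustration, not the repository's exact code).
class TokenLimits {
  maxTokens: number
  responseTokens: number
  requestTokens: number

  constructor(model = 'gpt-3.5-turbo') {
    if (model === 'gpt-3.5-turbo-16k') {
      // Branch added by this commit: limits for the 16k-context model.
      this.maxTokens = 16300
      this.responseTokens = 3000
    } else {
      // Placeholder defaults for other models (illustrative only).
      this.maxTokens = 4000
      this.responseTokens = 1000
    }
    // Budget left for the prompt after reserving space for the response.
    this.requestTokens = this.maxTokens - this.responseTokens
  }
}
```

With this, constructing `new TokenLimits('gpt-3.5-turbo-16k')` yields a request budget of 16300 − 3000 = 13300 tokens.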