limit? #198
Comments
Which model do you use?
I tried with Wizard Vicuna 30B and now with LLaMA2-13B-Psyfighter2 and I got the same problem. Which uncensored model would be ideal?
Hard to say... As far as I know, the default LLaMA2 supports a 4096-token context, but some Llama2 forks support up to 16k tokens, so I don't know about LLaMA2-13B-Psyfighter2. I'll try to test it later; perhaps I'll find something.
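A quick way to sanity-check whether a prompt is approaching a model's context limit is a rough token estimate. This is only a sketch: the ~4-characters-per-token ratio is a common rule of thumb for English text (a real tokenizer will differ), and the 4096 default and 512-token reply reserve are assumptions matching the base LLaMA-2 limit mentioned above.

```python
# Rough check of whether a prompt still fits a model's context window.
# Assumptions: ~4 chars per token (heuristic, not a real tokenizer),
# a 4096-token window (base LLaMA-2), and 512 tokens reserved for the reply.

def approx_token_count(text: str) -> int:
    """Very rough token estimate (~4 characters per token for English)."""
    return max(1, len(text) // 4)

def fits_context(prompt: str, context_limit: int = 4096,
                 reserve_for_reply: int = 512) -> bool:
    """True if the prompt leaves enough room for the model's reply."""
    return approx_token_count(prompt) + reserve_for_reply <= context_limit

print(fits_context("hello " * 100))   # ~150 tokens -> True
print(fits_context("hello " * 5000))  # ~7500 tokens -> False
```

Once `fits_context` starts returning False, the front end has to truncate somewhere, and how it truncates is exactly where the instability described in this issue can creep in.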
Thanks for your response. What model do you recommend?
The bot works, but once the conversation passes 4000 tokens its responses become unstable. I already changed these parameters: truncation_length and chat_prompt_size, but the problem remains.
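What `truncation_length` / `chat_prompt_size` are meant to do is keep the prompt inside the context window by dropping the oldest turns first. A minimal sketch of that sliding-window idea is below; the `len(msg) // 4` token estimate and the budget numbers are assumptions, not the actual implementation in text-generation-webui.

```python
# Sketch of keeping a chat history inside a fixed token budget by
# dropping the oldest turns first (the idea behind truncation_length /
# chat_prompt_size). Token cost is estimated as len // 4 (assumption).

def truncate_history(messages: list[str], budget_tokens: int = 4096) -> list[str]:
    """Keep the newest messages whose combined estimate fits the budget."""
    kept: list[str] = []
    used = 0
    for msg in reversed(messages):       # walk newest-first
        cost = max(1, len(msg) // 4)
        if used + cost > budget_tokens:
            break                        # oldest turns beyond here are dropped
        kept.append(msg)
        used += cost
    return list(reversed(kept))          # restore chronological order

history = [f"turn {i}: " + "x" * 400 for i in range(100)]  # ~102 tokens each
trimmed = truncate_history(history, budget_tokens=1000)
print(len(trimmed))                      # only the newest ~9 turns survive
```

If the front end truncates mid-message instead of at turn boundaries, or if the configured budget is larger than what the model actually supports, the model sees a malformed or overlong prompt, which matches the "unstable responses past 4000 tokens" symptom reported here.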