The architecture is almost identical to Llama, except for logits_scale and a different DecoderLayer;
see Lightning-AI/litgpt#1089 and ggerganov/llama.cpp#6033.
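To illustrate the logits_scale difference: Command-R multiplies the language-model head's output by a constant scale read from the model config, which plain Llama does not do. Below is a pure-Python sketch of that computation (the function name and list-based tensors are illustrative, not xtuner or transformers API):

```python
def command_r_logits(hidden_state, lm_head_weight, logit_scale):
    """Compute logits the Command-R way: matmul with the lm_head, then scale.

    hidden_state: list[float] of length H (one token's final hidden state)
    lm_head_weight: V rows of list[float], each length H
    logit_scale: constant taken from the model config (illustrative here);
                 a vanilla Llama head would effectively use 1.0
    """
    raw = [sum(h * w for h, w in zip(hidden_state, row)) for row in lm_head_weight]
    return [logit_scale * x for x in raw]
```

In the released transformers implementation this scaling happens inside the causal-LM forward pass, so a Llama-style training loop only needs to be aware of it if it reimplements the head.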
@choyakawa Hi
In my opinion, you can use CohereForAI/c4ai-command-r-v01 directly in xtuner, just like the modification in #429:

1. Set pretrained_model_name_or_path = 'CohereForAI/c4ai-command-r-v01' in your config.
2. Extend PROMPT_TEMPLATE according to the model and use it in your config, i.e. prompt_template = PROMPT_TEMPLATE.xxx.

You can try it, and if you encounter any problems, feel free to post them on this issue.
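A sketch of what step 2 could look like. The field names (SYSTEM, INSTRUCTION, SUFFIX) mirror the shape of xtuner's existing template entries, and the special tokens follow the Command-R chat format; treat both as assumptions to verify against xtuner's PROMPT_TEMPLATE definitions and the model's tokenizer config before use:

```python
# Hypothetical template entry for Command-R, written as a plain dict here
# so the sketch is self-contained; in xtuner it would be added alongside
# the other entries in PROMPT_TEMPLATE.
command_r_template = dict(
    SYSTEM=('<|START_OF_TURN_TOKEN|><|SYSTEM_TOKEN|>{system}'
            '<|END_OF_TURN_TOKEN|>'),
    INSTRUCTION=('<|START_OF_TURN_TOKEN|><|USER_TOKEN|>{input}'
                 '<|END_OF_TURN_TOKEN|><|START_OF_TURN_TOKEN|>'
                 '<|CHATBOT_TOKEN|>'),
    SUFFIX='<|END_OF_TURN_TOKEN|>',
)

# The config would then reference the model and the new template, e.g.:
pretrained_model_name_or_path = 'CohereForAI/c4ai-command-r-v01'
# prompt_template = PROMPT_TEMPLATE.command_r  # once the entry above is added
```

The {system} and {input} placeholders are filled per conversation turn by the training pipeline; the surrounding special tokens must match what the model's tokenizer was trained with.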