Update Training Guidelines #150
Conversation
…aca-2
* 'main' of https://github.com/iMountTai/Chinese-LLaMA-Alpaca-2:
  Update README_EN.md
  Update README.md
  Update README.md
--lora_dropout ${lora_dropout} \
--torch_dtype float16 \
--validation_file ${validation_file} \
--peft_path ${peft_model} \
Could you explain these modifications in more detail?
peft_path is mutually exclusive with the LoRA-related training parameters, so the example here sets the LoRA trainable parameters instead.
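Roughly, in peft terms, the two mutually exclusive paths look like this (a sketch, not the repo's exact training code; the model id and adapter path are placeholders):

```python
import torch
from transformers import AutoModelForCausalLM
from peft import LoraConfig, PeftModel, TaskType, get_peft_model

base_model = AutoModelForCausalLM.from_pretrained(
    "hfl/chinese-llama-2-7b",  # placeholder base model
    torch_dtype=torch.float16,
)

peft_path = None  # e.g. "output/pt_lora_model" to resume an existing adapter (placeholder path)

if peft_path:
    # Resume from an existing adapter: no new LoRA hyperparameters are passed.
    model = PeftModel.from_pretrained(base_model, peft_path, is_trainable=True)
else:
    # No peft_path, so the LoRA trainable parameters are defined instead.
    lora_config = LoraConfig(
        task_type=TaskType.CAUSAL_LM,
        r=64,
        lora_alpha=128,
        lora_dropout=0.05,
        target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    )
    model = get_peft_model(base_model, lora_config)
```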
scripts/training/run_pt.sh
Some users may use the script to pre-train the model from scratch. But with these modifications, the script's default behavior becomes continual training of chinese-llama-2, which may cause confusion. I would advise keeping modules_to_save.
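For the from-scratch path, keeping modules_to_save would look roughly like this (a sketch; the module names are the usual LLaMA ones and the hyperparameters are illustrative, not necessarily the script's defaults):

```python
from peft import LoraConfig, TaskType

lora_config = LoraConfig(
    task_type=TaskType.CAUSAL_LM,
    r=64,
    lora_alpha=128,
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj", "gate_proj", "down_proj", "up_proj"],
    # Keep the embedding and output head fully trainable: with the expanded
    # Chinese vocabulary these layers must be updated, not just adapted.
    modules_to_save=["embed_tokens", "lm_head"],
)
```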
I agree with your suggestion.
Description
This PR adds the following features:
- device_map parameter to address the issue of insufficient CPU memory during training.
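A minimal sketch of what the device_map change presumably amounts to when loading the base model (the model id is a placeholder and the exact placement strategy is an assumption, not the PR's literal code):

```python
import torch
from transformers import AutoModelForCausalLM

# device_map (with accelerate installed) places the weights shard by shard on
# the target devices instead of materializing the whole model in CPU RAM first.
model = AutoModelForCausalLM.from_pretrained(
    "hfl/chinese-llama-2-7b",  # placeholder model path
    torch_dtype=torch.float16,
    device_map="auto",
    low_cpu_mem_usage=True,
)
```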
Related Issue
#110 #27