Currently finetune.py does not support multi-GPU training. Strictly speaking, the multi version is not true multi-GPU either: although it can drive several GPUs at once, it does so by manually mapping layers to devices. For real multi-GPU training you would need to rewrite the code as a torchrun-based distributed job.
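As a rough sketch of what that would involve, a torchrun launch on a single node might look like the following. The flag values and the assumption that finetune.py initializes a process group are placeholders, not the repo's actual setup:

```shell
# Hypothetical launch: one worker process per GPU on a single node.
# Assumes finetune.py has been modified to call
# torch.distributed.init_process_group() and wrap the model in
# DistributedDataParallel; adjust --nproc_per_node to your GPU count.
torchrun --standalone --nproc_per_node=4 finetune.py
```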
Got it, thanks. I have the multi version running now, but the server lost its network connection partway through training. Is there a way to resume from output/checkpoint?
Load the latest lora.pt from the checkpoint with PEFT and continue training from there. DeepSpeed is now supported as well.
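A minimal sketch of that resume step, assuming the project saves the LoRA weights as a plain state dict in `lora.pt` (the path and the already-wrapped PEFT model are placeholders for whatever your training script uses):

```python
# Hedged sketch: resume LoRA fine-tuning from a saved lora.pt.
# Assumes `model` is already wrapped with peft (get_peft_model) and
# that lora.pt holds the PEFT adapter state dict; the checkpoint
# path below is illustrative, not the repo's guaranteed layout.
import torch
from peft import set_peft_model_state_dict


def resume_from_checkpoint(model, checkpoint_path="output/checkpoint/lora.pt"):
    """Load the latest LoRA weights into a PEFT-wrapped model, then keep training."""
    lora_state = torch.load(checkpoint_path, map_location="cpu")
    set_peft_model_state_dict(model, lora_state)
    return model
```

After loading, pass the model back into the training loop as usual; only the adapter weights are restored, so optimizer state starts fresh unless you saved and reload it separately.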