vllm can't run peft model? #1129

Hi, does vLLM run PEFT models? I'm trying to run one, but I'm getting an error that config.json is not available in the model's main folder.

Comments
vLLM does not support PEFT/LoRA yet, but it is on the Development Roadmap.

ok thanks!
You have to merge the adapter from PEFT with its base model first.
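For reference, a minimal sketch of that merge step using peft's `merge_and_unload`; the base model name and adapter path below are placeholders, substitute your own:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

# Placeholder paths; replace with your actual base model and adapter.
base_model_name = "meta-llama/Llama-2-7b-hf"
adapter_path = "./my-lora-adapter"

# Load the base model, then attach the PEFT/LoRA adapter on top of it.
base_model = AutoModelForCausalLM.from_pretrained(base_model_name)
model = PeftModel.from_pretrained(base_model, adapter_path)

# Fold the adapter weights into the base weights and drop the PEFT wrappers.
merged_model = model.merge_and_unload()

# Save the merged model and tokenizer together.
merged_model.save_pretrained("./merged-model")
tokenizer = AutoTokenizer.from_pretrained(base_model_name)
tokenizer.save_pretrained("./merged-model")
```

Once saved, the `./merged-model` directory should have the usual Hugging Face layout (config.json, tokenizer files, weights), so vLLM can load it like any other model, e.g. `LLM(model="./merged-model")`.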
@hllj Hi there, does this mean the merged model, whether saved locally or pushed to the Hugging Face Hub, will have the same structure as the base model, i.e. the same files and format as the base model on Hugging Face?
This issue has been automatically marked as stale because it has not had any activity within 90 days. It will be automatically closed if no further activity occurs within 30 days. Leave a comment if you feel this issue should remain open. Thank you!

This issue has been automatically closed due to inactivity. Please feel free to reopen if you feel it is still relevant. Thank you!