

How to merge LoRA weights when there is more than one model.bin #90

Open
hxx-who opened this issue Dec 11, 2023 · 1 comment

Comments

@hxx-who

hxx-who commented Dec 11, 2023

Hi, great work!
I want to merge the LoRA weights of LISA-7B using the following command:

python merge_lora_weights_and_save_hf_model.py \
  --version="./LLaVA/LLaVA-Lightning-7B-v1-1" \
  --weight="PATH_TO_pytorch_model.bin" \
  --save_path="./LISA7B"

However, `--weight` only accepts the path to a single model.bin, while the LISA-7B checkpoint is sharded into two files, pytorch_model-00001-of-00002.bin and pytorch_model-00002-of-00002.bin. How can I merge them?
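One possible workaround (a sketch, not part of the LISA repo): Hugging Face shards hold disjoint subsets of the parameter names, so the shards can be loaded one by one and combined into a single state dict with a plain dict update, then saved as one .bin file for the script's `--weight` argument. The helper name and the merged-file path below are hypothetical.

```python
# Hypothetical helper: combine a sharded HF checkpoint
# (pytorch_model-00001-of-00002.bin, ...) into one state dict.
import glob

import torch


def load_sharded_state_dict(pattern):
    """Load every shard matching `pattern` and merge into one dict.

    Each shard holds a disjoint subset of parameter names, so a
    plain dict update reconstructs the full state dict.
    """
    state_dict = {}
    for shard_path in sorted(glob.glob(pattern)):
        shard = torch.load(shard_path, map_location="cpu")
        state_dict.update(shard)
    return state_dict


# Example (paths are placeholders): merge both shards and save a
# single file that could be passed via --weight.
# merged = load_sharded_state_dict("LISA-7B/pytorch_model-*.bin")
# torch.save(merged, "LISA-7B/pytorch_model_merged.bin")
```

Whether the merge script accepts such a combined file depends on what keys it expects in the checkpoint; this only addresses the "two files vs. one path" mismatch.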

@hxx-who
Author

hxx-who commented Dec 15, 2023

I aim to continue training from LISA-7B, and it turns out there is no need to merge.
I set `--version` to the model path of LISA-7B and followed the debug instructions in #85, and it finally works, though #85 also doesn't explain why changing that line helps.
