Is llama2 model finetuned on all three stages? #14

Open
Yang-bug-star opened this issue Mar 3, 2024 · 1 comment

Comments

@Yang-bug-star

The original paper states that in the first stage, all parameters are frozen except those belonging to the Multi-modal Understanding Adapters. In my understanding, LLaMA 2 should therefore only be fine-tuned in the third stage, but in the code it looks like LLaMA 2 is fine-tuned with LoRA in all three stages, because 'llama' and 'lora' appear in the trainable parameter names for every stage in the 'get_trainable_params' function.
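
For reference, my reading of the paper would correspond to a stage-conditioned parameter filter roughly like the sketch below. This is only an illustration with hypothetical parameter-name patterns, not the repository's actual 'get_trainable_params' implementation:

```python
import torch.nn as nn


def select_trainable_params(model: nn.Module, stage: int) -> list[str]:
    """Toggle requires_grad by training stage (name patterns are hypothetical)."""
    if stage == 1:
        # Per the paper, only the Multi-modal Understanding Adapter is trained here;
        # everything else (including LLaMA 2 / LoRA weights) stays frozen.
        patterns = ("mm_adapter",)
    elif stage == 3:
        # Per the paper, stage 3 trains LoRA on LLaMA 2 together with the
        # adapter and the Output Projection layer.
        patterns = ("lora_", "mm_adapter", "output_projection")
    else:
        # Stage 2 is not discussed in this issue; keep the adapter as a placeholder.
        patterns = ("mm_adapter",)

    trainable = []
    for name, param in model.named_parameters():
        if any(pat in name for pat in patterns):
            param.requires_grad = True
            trainable.append(name)
        else:
            param.requires_grad = False
    return trainable
```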

@Yang-bug-star
Author

Also, the paper says that in the final training stage the LoRA strategy is used to train the LLaMA 2 model while concurrently fine-tuning the Multi-modal Understanding Adapter and the Output Projection layer. However, in the given code the parameters of the Output Projection layer and the Adapter are not trained in the third stage.
