[Bug]: loading qwen2-vl-7b fails with error: assert "factor" in rope_scaling
#8388
Comments
The specific issue is: the rope_scaling["type"] key is being overridden to "default" even if it is initially set to "mrope". Try: if self.rope_scaling["type"] != "mrope": This way, the original value of "mrope" will be preserved, allowing the model to load correctly.
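A minimal sketch of the guard this comment suggests. The function and argument names are illustrative, not the actual transformers or vLLM code; the point is only that the normalization step should leave "mrope" untouched while still resetting other types:

```python
def normalize_rope_scaling(rope_scaling):
    """Reset the rope_scaling type to "default", but preserve "mrope".

    Illustrative sketch only -- in the real configs this logic lives inside
    the model configuration's rope_scaling validation.
    """
    rope_scaling = dict(rope_scaling)
    # Without the "mrope" exception, the type would be forced to "default",
    # and a downstream check expecting the mrope branch would never fire.
    if rope_scaling.get("type") != "mrope":
        rope_scaling["type"] = "default"
    return rope_scaling
```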
Uh, is this an AI reply? Because the solution doesn't make sense...
Which version of …
Got it, yeah, now I see it was a recent change to transformers (using the main branch), thanks!
Your current environment
The output of `python collect_env.py`
🐛 Describe the bug
The recent qwen2-vl merge added a check for the rope type (`if rope_type == "mrope"`): 3b7fea7#diff-7eaad0b7dee0626bf29d10081b0f0c5e3ea15a4af97e7b182a4e0d35f8346953R1736

But huggingface is overriding this key to "default" for some reason:
https://github.com/huggingface/transformers/blob/main/src/transformers/models/qwen2_vl/configuration_qwen2_vl.py#L240
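To make the failure mode concrete, here is an illustrative sketch (not the actual vLLM code) of why the `assert "factor" in rope_scaling` fires: vLLM branches on the rope type, but by the time the config reaches it, transformers has already rewritten "mrope" to "default", so execution falls through to the scaling path, which expects a "factor" key that an mrope config does not carry:

```python
def build_rope(rope_scaling):
    """Sketch of the dispatch that fails; names are hypothetical."""
    rope_type = rope_scaling.get("type")
    if rope_type == "mrope":
        # The intended branch for qwen2-vl -- never reached once the
        # config loader has rewritten the type to "default".
        return "multimodal rotary embedding"
    # The default/scaled path assumes a numeric scaling factor is present.
    assert "factor" in rope_scaling
    return f"scaled rotary embedding x{rope_scaling['factor']}"
```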
Do you know what the correct way to load the model is?