Is there an existing issue / discussion for this?

Is there an existing answer for this in FAQ?

Current Behavior
This is the merging code:

```python
from peft import AutoPeftModelForCausalLM

path_to_adapter = "/opt/Qwen-VL/Qwen-VL-master/output_qwen"

model = AutoPeftModelForCausalLM.from_pretrained(
    path_to_adapter,  # path to the output directory
    device_map="auto",
    trust_remote_code=True
).eval()

merged_model = model.merge_and_unload()

# max_shard_size and safe_serialization are not required; they control
# checkpoint sharding and saving in the safetensors format, respectively.
# new_model_directory is the directory the merged model should be saved to.
merged_model.save_pretrained(new_model_directory, max_shard_size="2048MB", safe_serialization=True)
```
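For context, a minimal sketch of how the merged checkpoint would then be loaded for inference. The path is a hypothetical placeholder, and it assumes the tokenizer files from the base Qwen-VL checkpoint have also been saved or copied into the merged directory:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Hypothetical path: the directory passed to save_pretrained() above.
new_model_directory = "/opt/Qwen-VL/Qwen-VL-master/output_qwen_merged"

# The merged model loads like a regular Qwen-VL checkpoint; PEFT is no longer needed.
tokenizer = AutoTokenizer.from_pretrained(new_model_directory, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    new_model_directory,
    device_map="auto",
    trust_remote_code=True,
).eval()
```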
Expected Behavior
How can this be resolved? Or can anyone tell me how to fine-tune Qwen-VL-Chat-Int4 and then use the fine-tuned model afterwards?
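One possible direction for the Int4 case, sketched under the assumption that the adapter was trained with Q-LoRA on top of the GPTQ-quantized checkpoint: merging such an adapter into quantized weights with `merge_and_unload()` is generally not supported, so the adapter can instead be loaded directly at inference time. `path_to_adapter` is the Q-LoRA output directory from above, and `Qwen/Qwen-VL-Chat-Int4` is assumed to be the base checkpoint:

```python
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer

path_to_adapter = "/opt/Qwen-VL/Qwen-VL-master/output_qwen"  # Q-LoRA output directory

# Load the quantized base model together with the adapter; no merge_and_unload().
model = AutoPeftModelForCausalLM.from_pretrained(
    path_to_adapter,
    device_map="auto",
    trust_remote_code=True,
).eval()

# Tokenizer from the base Int4 checkpoint (assumed Hugging Face repo id).
tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen-VL-Chat-Int4", trust_remote_code=True)
```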
Steps To Reproduce
No response
Environment

- OS:
- Python:
- Transformers:
- PyTorch:
- CUDA (`python -c 'import torch; print(torch.version.cuda)'`):
Anything else?
No response