Hello, I'm using qwen2p5_instruct as the base model with finetuneTaskNeg_qwen.sh, and enabling LoRA fine-tuning raises the following error:
Adding LoRA adapters...
[rank0]: Traceback (most recent call last):
[rank0]: File "/root/autodl-tmp/jsh/projects/VITA-main/vita/train/train.py", line 467, in
[rank0]: train()
[rank0]: File "/root/autodl-tmp/jsh/projects/VITA-main/vita/train/train.py", line 345, in train
[rank0]: model = get_peft_model(model, lora_config)
[rank0]: File "/root/autodl-tmp/jsh/environment/env/vita/lib/python3.10/site-packages/peft/mapping.py", line 222, in get_peft_model
[rank0]: return MODEL_TYPE_TO_PEFT_MODEL_MAPPING[peft_config.task_type](
[rank0]: File "/root/autodl-tmp/jsh/environment/env/vita/lib/python3.10/site-packages/peft/peft_model.py", line 1684, in init
[rank0]: super().init(model, peft_config, adapter_name, **kwargs)
[rank0]: File "/root/autodl-tmp/jsh/environment/env/vita/lib/python3.10/site-packages/peft/peft_model.py", line 176, in init
[rank0]: self.base_model = cls(model, {adapter_name: peft_config}, adapter_name)
[rank0]: File "/root/autodl-tmp/jsh/environment/env/vita/lib/python3.10/site-packages/peft/tuners/lora/model.py", line 141, in init
[rank0]: super().init(model, config, adapter_name, low_cpu_mem_usage=low_cpu_mem_usage)
[rank0]: File "/root/autodl-tmp/jsh/environment/env/vita/lib/python3.10/site-packages/peft/tuners/tuners_utils.py", line 184, in init
[rank0]: self.inject_adapter(self.model, adapter_name, low_cpu_mem_usage=low_cpu_mem_usage)
[rank0]: File "/root/autodl-tmp/jsh/environment/env/vita/lib/python3.10/site-packages/peft/tuners/tuners_utils.py", line 501, in inject_adapter
[rank0]: self._create_and_replace(peft_config, adapter_name, target, target_name, parent, current_key=key)
[rank0]: File "/root/autodl-tmp/jsh/environment/env/vita/lib/python3.10/site-packages/peft/tuners/lora/model.py", line 235, in _create_and_replace
[rank0]: new_module = self._create_new_module(lora_config, adapter_name, target, **kwargs)
[rank0]: File "/root/autodl-tmp/jsh/environment/env/vita/lib/python3.10/site-packages/peft/tuners/lora/model.py", line 360, in _create_new_module
[rank0]: raise ValueError(
[rank0]: ValueError: Target module Qwen2DecoderLayer(
[rank0]: (self_attn): Qwen2FlashAttention2(
[rank0]: (q_proj): Linear(in_features=3584, out_features=3584, bias=True)
[rank0]: (k_proj): Linear(in_features=3584, out_features=512, bias=True)
[rank0]: (v_proj): Linear(in_features=3584, out_features=512, bias=True)
[rank0]: (o_proj): Linear(in_features=3584, out_features=3584, bias=False)
[rank0]: (rotary_emb): Qwen2RotaryEmbedding()
[rank0]: )
[rank0]: (mlp): Qwen2MLP(
[rank0]: (gate_proj): Linear(in_features=3584, out_features=18944, bias=False)
[rank0]: (up_proj): Linear(in_features=3584, out_features=18944, bias=False)
[rank0]: (down_proj): Linear(in_features=18944, out_features=3584, bias=False)
[rank0]: (act_fn): SiLU()
[rank0]: )
[rank0]: (input_layernorm): Qwen2RMSNorm((0,), eps=1e-06)
[rank0]: (post_attention_layernorm): Qwen2RMSNorm((0,), eps=1e-06)
[rank0]: ) is not supported. Currently, only the following modules are supported: torch.nn.Linear, torch.nn.Embedding, torch.nn.Conv2d, torch.nn.Conv3d, transformers.pytorch_utils.Conv1D.
[2025-01-22 11:54:01,033] [INFO] [launch.py:319:sigkill_handler] Killing subprocess 789757
[2025-01-22 11:54:01,033] [ERROR] [launch.py:325:sigkill_handler] ['/root/autodl-tmp/jsh/environment/env/vita/bin/python', '-u', 'vita/train/train.py', '--local_rank=0', '--deepspeed', './script/deepspeed/zero3.json', '--model_name_or_path', '/root/autodl-tmp/jsh/models/VITA-1.5', '--model_type', 'qwen2p5_instruct', '--version', 'qwen2p5_instruct', '--dataset_use', 'Pretrain_video', '--vision_tower', '/root/autodl-tmp/jsh/models/InternViT-300M-448px', '--mm_projector_type', 'mlp2x_gelu', '--audio_encoder', '/root/autodl-tmp/jsh/models/VITA-1.5/audio-encoder-Qwen2-7B-1107-weight-base-11wh-tunning', '--freeze_audio_encoder', 'True', '--freeze_audio_encoder_adapter', 'False', '--image_aspect_ratio', 'square', '--group_by_modality_length', 'False', '--bf16', 'True', '--output_dir', '/root/autodl-tmp/jsh/projects/VITA-main/save/llava-s3-finetune_task_neg', '--num_train_epochs', '1', '--per_device_train_batch_size', '1', '--per_device_eval_batch_size', '1', '--gradient_accumulation_steps', '8', '--evaluation_strategy', 'no', '--save_strategy', 'steps', '--save_steps', '500', '--save_total_limit', '1', '--learning_rate', '2e-5', '--weight_decay', '0.', '--warmup_ratio', '0.03', '--lr_scheduler_type', 'cosine', '--logging_steps', '1', '--tf32', 'True', '--model_max_length', '2048', '--gradient_checkpointing', 'True', '--dataloader_num_workers', '1', '--dataloader_pin_memory', 'False', '--lazy_preprocess', 'True', '--report_to', 'none'] exits with return code = 1
Done.
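For context: PEFT can only inject LoRA into leaf modules of the supported types listed in the error (torch.nn.Linear and the others), so a target_modules setting that ends up matching an entire Qwen2DecoderLayer container triggers exactly this ValueError. A quick way to enumerate the names that are valid LoRA targets — a sketch, assuming `model` is the already-loaded VITA/Qwen2 model inside train.py:

```python
import torch.nn as nn

# Print the module names PEFT can actually wrap: only leaf modules of a
# supported type (nn.Linear here) are valid LoRA targets. Containers such
# as Qwen2DecoderLayer raise the ValueError shown above.
for name, module in model.named_modules():
    if isinstance(module, nn.Linear):
        print(name)  # e.g. "model.layers.0.self_attn.q_proj"
```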
Solved. LoRA can't be applied to all layers wholesale; you need to specify particular layers as the LoRA target modules.
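For anyone hitting the same error, a minimal sketch of what the fix looks like (the module names come from the Qwen2DecoderLayer printed in the traceback above; the `r`, `lora_alpha`, and `lora_dropout` values are illustrative, not necessarily what finetuneTaskNeg_qwen.sh uses):

```python
from peft import LoraConfig, TaskType, get_peft_model

# Target only the nn.Linear projections inside each decoder layer. PEFT
# matches these names against the suffix of each module path, so the
# Qwen2DecoderLayer container itself is never selected.
lora_config = LoraConfig(
    task_type=TaskType.CAUSAL_LM,
    r=16,               # illustrative rank
    lora_alpha=32,      # illustrative scaling factor
    lora_dropout=0.05,  # illustrative dropout
    target_modules=[
        "q_proj", "k_proj", "v_proj", "o_proj",   # attention projections
        "gate_proj", "up_proj", "down_proj",      # MLP projections
    ],
)

model = get_peft_model(model, lora_config)  # succeeds: every target is nn.Linear
```

Including the MLP projections alongside the attention ones is optional; targeting just the attention projections is the more conservative choice if memory is tight.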