ValueError: Target module Dropout(p=0.05, inplace=False) is not supported. Currently, only the following modules are supported: torch.nn.Linear, torch.nn.Embedding, torch.nn.Conv2d, transformers.pytorch_utils.Conv1D. #2286
Comments
I thought it was a Python version issue, so I upgraded to 3.9, but the same error still occurs. My guess is that when a Dropout layer is not contained inside a Conv2d or Linear sequential block and instead exists independently, it does not get converted to lora_dropout. As a temporary workaround, I added the following code at line 225 in peft/tuners/lora/model.py:
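(The snippet itself was not captured in the thread; as a hypothetical sketch, not the actual patch, such a guard could skip parameter-free modules instead of raising:)

import torch.nn as nn

# Hypothetical guard: only modules with learnable weights are candidates for
# LoRA replacement; Dropout has no parameters, so it is skipped rather than
# handed to the replacement logic that raises the ValueError.
def is_lora_candidate(target: nn.Module) -> bool:
    supported = (nn.Linear, nn.Embedding, nn.Conv2d)
    return isinstance(target, supported)

print(is_lora_candidate(nn.Dropout(p=0.05)))  # False -> skip
print(is_lora_candidate(nn.Linear(4, 4)))     # True  -> replace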
Although this is not a fundamental solution, it resolves the issue for now.
Could you please show the code you use that results in the error? That way, we can better take a look at it. It appears like your target_modules is matching a Dropout layer. Regarding dropout: Yes, you cannot target dropout layers with LoRA. That wouldn't really make sense, since dropout layers don't have any learnable parameters. Note that LoRA applies its own dropout via the lora_dropout argument of LoraConfig.
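For illustration, dropout for LoRA is configured through the lora_dropout argument of LoraConfig rather than through target_modules; the target names below are examples and depend on the model:

from peft import LoraConfig

config = LoraConfig(
    r=8,
    lora_alpha=16,
    lora_dropout=0.05,                    # dropout applied inside the LoRA layers
    target_modules=["q_proj", "v_proj"],  # example names; only layers with weights
)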
This is generally a good idea, since Python 3.8 and below have reached their end of life and thus no longer receive security updates.
Is it necessary to use a different method instead of get_peft_model to load a model trained with get_peft_model for further training? I'm sorry to bother you.
Indeed, this is necessary. You should use PeftModel.from_pretrained:

from peft import PeftModel

base_model = ...  # load llava model here
peft_model = PeftModel.from_pretrained(base_model, <path-to-saved-peft-model>)

Then, if you want to further fine-tune this model, you have two options: you can pass is_trainable=True to from_pretrained so that the loaded adapter weights remain trainable, or you can merge the adapter into the base model and add a fresh adapter on top, as sketched below. Please try the suggestion and see if it solves your issue.
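As a sketch of both options, continuing from the snippet above (the exact code from the reply was not captured; is_trainable and merge_and_unload are existing PEFT APIs, while the path and target names are placeholders):

from peft import LoraConfig, PeftModel, get_peft_model

# Option 1: load the adapter with trainable weights so training can continue.
peft_model = PeftModel.from_pretrained(
    base_model,
    "path/to/saved/peft/model",  # placeholder checkpoint path
    is_trainable=True,           # without this, the adapter is loaded frozen
)

# Option 2: merge the trained adapter into the base model, then start a
# fresh adapter on top of the merged weights.
merged_model = peft_model.merge_and_unload()
new_config = LoraConfig(r=8, lora_alpha=16, target_modules=["q_proj", "v_proj"])  # example names
new_peft_model = get_peft_model(merged_model, new_config)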
What is the result of find_all_linear_names?
As you mentioned, loading the LoRA fine-tuned model with PeftModel.from_pretrained resolved the problem. The error occurred because find_all_linear_names was being run on a model that already contained LoRA layers, so it collected the names of PEFT's internal submodules instead of the base model's linear layers. Thank you !!! Here is the result of find_all_linear_names:

def find_all_linear_names(model):
    cls = torch.nn.Linear
    lora_module_names = set()
    # skip multimodal components that should not receive LoRA
    multimodal_keywords = ['mm_projector', 'vision_tower', 'vision_resampler']
    for name, module in model.named_modules():
        if any(mm_keyword in name for mm_keyword in multimodal_keywords):
            continue
        if isinstance(module, cls):
            # keep only the last component of the dotted module name
            names = name.split('.')
            lora_module_names.add(names[0] if len(names) == 1 else names[-1])
    if 'lm_head' in lora_module_names:  # needed for 16-bit
        lora_module_names.remove('lm_head')
    return list(lora_module_names)

## result: ['default', 'base_layer']
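This result is telling: a minimal sketch (assuming PEFT's current internals, i.e. the base_layer attribute and the 'default' adapter name) shows that running the helper on a model that is already LoRA-wrapped yields exactly these names:

import torch.nn as nn
from peft import LoraConfig, get_peft_model

# On a LoRA-wrapped model, the original Linear lives at '<name>.base_layer'
# and the adapter matrices sit in ModuleDicts keyed by the adapter name
# 'default', so taking the last component of each dotted name returns
# ['base_layer', 'default'] instead of the base model's layer names.
base = nn.Sequential(nn.Linear(8, 8))
wrapped = get_peft_model(base, LoraConfig(target_modules=["0"]))
print(sorted({name.split(".")[-1]
              for name, module in wrapped.named_modules()
              if isinstance(module, nn.Linear)}))
# -> ['base_layer', 'default']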
System Info
Library version: PEFT==0.13.2, PyTorch==2.4.0, Transformers==4.46.3
Python version: 3.8.19
CUDA version: 12.6
I am trying to implement Low-Rank Adaptation (LoRA) in my model, but I encountered the following error when running the training script:
ValueError: Target module Dropout(p=0.05, inplace=False) is not supported. Currently, only the following modules are supported: torch.nn.Linear, torch.nn.Embedding, torch.nn.Conv2d, transformers.pytorch_utils.Conv1D.

It seems that the LoRA implementation currently does not allow Dropout layers to be included as target modules. Could you provide guidance on how to properly handle dropout with LoRA, or whether it will be supported in future updates?
Thank you for your assistance!
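For reference, a minimal sketch (not the reporter's actual model) of how this class of error arises when target_modules matches a Dropout layer:

import torch.nn as nn
from peft import LoraConfig, get_peft_model

# A standalone Dropout matched by target_modules has no learnable weights,
# so PEFT cannot wrap it and raises the ValueError quoted above.
model = nn.Sequential(nn.Linear(8, 8), nn.Dropout(p=0.05))
config = LoraConfig(target_modules=["0", "1"])  # "1" names the Dropout
peft_model = get_peft_model(model, config)      # raises ValueError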