Support for Custom Adapters #2273
Comments
You are correct: when adding a new method, besides creating a new directory inside of `src/peft/tuners`, […]. Regarding the change in […]
No, but as mentioned, I'll look into this; I had planned to facilitate this for a long time. The only thing we have right now is a way to add new custom LoRA layers, but not yet a way to add completely new adapter types.
This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Not stale, see #2282.
@dgme-syz The PR is merged; adding a new PEFT method is now much simplified. To quote myself: ignoring tests, docs, and examples, we have the additions to […]
With the changes in this PR, all these steps can be omitted. On top of that, we also have the re-imports to […]. To register a new PEFT method, you should now add this to […]:

```python
from peft.utils import register_peft_method

register_peft_method(name="my_peft_method", config_cls=MyConfig, model_cls=MyModel)
```
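For illustration, here is a hedged sketch of how a method registered this way could then be applied. `MyConfig` is the placeholder class from the snippet above, and the base model and config details are assumptions for the example, not something specified in this thread:

```python
# Hypothetical follow-up usage; MyConfig/MyModel are the placeholder classes
# registered above, not real PEFT classes.
from transformers import AutoModelForCausalLM

from peft import get_peft_model

base_model = AutoModelForCausalLM.from_pretrained("gpt2")
peft_config = MyConfig()  # assumed to subclass peft.PeftConfig
# get_peft_model resolves the method through the registry that
# register_peft_method populated and wraps the base model accordingly.
peft_model = get_peft_model(base_model, peft_config)
peft_model.print_trainable_parameters()
```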
Feature request
In simple terms, I would like support that allows users to customize their own adapters. I noticed that users only need to add a folder under `src/peft/tuners` and place some adapter-related files there, usually `config.py`, `layer.py`, and `model.py`.

However, during implementation, I found that I also need to modify `get_peft_model_state_dict` in `src/peft/utils/save_and_load.py` to ensure that the custom adapter can be saved correctly. That function currently only handles the existing adapters, so I had to modify the source code to make the custom adapter work.

PEFT is the most convenient and efficient fine-tuning library, and it would be even better if this feature were supported. Perhaps you've already implemented this functionality and I simply haven't found it; if so, please point it out. Thank you very much.
Motivation
I hope to use custom adapters to fine-tune large language models.
Your contribution
Currently, I have no clear ideas.