TF: clearer model variable naming #16051
My favorite sport is deleting code 🦾 I would be interested in taking
Hi.. would love to work on
I'd like to work on gpt2!
I'll take longformer!
I can take CLIP and OpenAI.
Hi,
I'll take
I'll take
I'll work on
I'll work on
electra, tapas, deberta v1 and v2, pegasus, xlnet
@Abdelrhman-Hosny The user doing the
I have just made a PR for
I'll do Also, I found that the
I can also do
I would like to work on xlm_roberta
working on wav2vec2
I would like to work on
working on blenderbot
working on blenderbot_small
I'll do
I'll work on convbert.
Created a PR for
@robotjellyzone are you still intending to do DistilBERT? If not, I'd like to take that one, please.
Hi @jmwoloso, I am actually done with the modifications to DistilBERT but got busy with my job. I will push the changes tomorrow.
Hi @gante I don't seem to see any
Hi @bhavika, since you already worked on two models, if you don't mind I'd like to submit a PR for
@silvererudite it is possible that it doesn't have the function; I listed all TF models :)
Go ahead @silvererudite
@gante I checked and there does not seem to be any
Everyone, thank you so much for contributing to this issue 🧡 Together, we deleted >4000 lines of code, making the TF codebase more readable and easier to maintain 💪 Tomorrow I will be updating any remaining architectures and closing this issue -- I'm assuming that claimed architectures will not be worked on this week. If you've claimed an architecture and are still interested in working on it, please let me know :) Finally, if you've enjoyed this experience, there are other similar initiatives going on! Check issues with the
This issue is part of our Great Code Cleanup 2022. If you're interested in helping out, take a look at this thread, or come join us on Discord and talk with other contributors!
As introduced in #15908 and implemented in #15907, we now have a new `@unpack_inputs` decorator to unpack TensorFlow model `call()` arguments. In essence, if we apply the decorator, we can replace `inputs["foo"]` with `foo`, making the code for the layer/model much shorter and clearer.

This issue is a call for contributors to implement the new decorator in the architectures below. If you wish to contribute, reply in this thread with the architectures you'd like to take :)
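As a rough illustration of the readability gain, here is a self-contained toy version of the idea. Note that `input_processing` and `unpack_inputs` below are simplified stand-ins written for this sketch; the real transformers helpers do far more argument normalization.

```python
import functools

# Toy stand-in for the old input_processing helper: normalize the
# call() arguments (here, just fill in a default attention mask).
def input_processing(**kwargs):
    kwargs.setdefault("attention_mask", "default-mask")
    return kwargs

# Toy stand-in for @unpack_inputs: run the processing, then hand the
# results back to the function as plain keyword arguments.
def unpack_inputs(func):
    @functools.wraps(func)
    def wrapper(self, **kwargs):
        return func(self, **input_processing(**kwargs))
    return wrapper

class ToyLayer:
    @unpack_inputs
    def call(self, input_ids=None, attention_mask=None):
        # Without the decorator this body would read inputs["input_ids"]
        # and inputs["attention_mask"]; with it, plain names suffice.
        return input_ids, attention_mask

print(ToyLayer().call(input_ids=[101, 102]))
# -> ([101, 102], 'default-mask')
```

The body of `call()` never touches an `inputs` dict, which is exactly the clean-up this issue asks contributors to apply per architecture.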
Guide to contributing:
- In `src/transformers/models/[model_name]/modeling_tf_[model_name].py`, find the `input_processing` call inside each `call()`, remove it, and add the `@unpack_inputs` decorator
- Fix all references to the `inputs` variable in those functions (e.g. `inputs["foo"]` -> `foo`)
- Test your changes with `RUN_SLOW=1 py.test -vv tests/[model_name]/test_modeling_tf_[model_name].py` 💻
- Run `make fixup` before your final commit 🎊

If a function has a comment saying `# Copied from transformers.models.bert...`, this means that the code is copied from that source, and our scripts will automatically keep it in sync. If you see that, you should not edit the copied method! Instead, edit the original method it's copied from, and run `make fixup` to synchronize the change across all the copies. Be sure you installed the development dependencies with `pip install -e ".[dev]"`, as described in the contributor guidelines above, to ensure that the code quality tools in `make fixup` can run.

Models updated:
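The guide above can be sketched as a hypothetical before/after edit. The model class and layer names here are invented for illustration, and `input_processing`/`unpack_inputs` are minimal stand-ins rather than the real transformers helpers:

```python
import functools

# Simplified stand-ins so the example runs; the real transformers
# helpers perform far more argument normalization than this.
def input_processing(func=None, **kwargs):
    return dict(kwargs)

def unpack_inputs(func):
    @functools.wraps(func)
    def wrapper(self, **kwargs):
        return func(self, **input_processing(**kwargs))
    return wrapper

class TFToyModelBefore:
    # Old style: route every argument through input_processing and
    # index into the resulting dict throughout the body.
    def call(self, input_ids=None, attention_mask=None, training=False):
        inputs = input_processing(
            func=self.call,
            input_ids=input_ids,
            attention_mask=attention_mask,
            training=training,
        )
        return {"ids": inputs["input_ids"], "mask": inputs["attention_mask"]}

class TFToyModelAfter:
    # New style: delete the input_processing block, decorate call(),
    # and replace inputs["foo"] with foo everywhere in the body.
    @unpack_inputs
    def call(self, input_ids=None, attention_mask=None, training=False):
        return {"ids": input_ids, "mask": attention_mask}

# The refactor must not change behavior:
before = TFToyModelBefore().call(input_ids=[1, 2], attention_mask=[1, 1])
after = TFToyModelAfter().call(input_ids=[1, 2], attention_mask=[1, 1])
assert before == after
```

The slow-test run in the guide is what confirms this behavioral equivalence for the real models.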