About the results #2
I reviewed your previous email but was too busy to reply. "Just Count" and the other settings already exist on the baseline model, which you may not have noticed; its default settings are basically close to optimal. The key to achieving the best training result is to train for a long time. After one day of training, the loss may only drop to 0.5%, but for a good result you need to lower it to 0.05% or even 0.01%. Only then will the generated video stop looking strange, and that may take up to three days of training instead of one. You can see from their previous work that their best result was a ROUGE score of only 50, which indicates that there is a lot of room for improvement, so it is not surprising that they made some mistakes. If you want to modify this model to publish your own paper, I suggest using some profiling methods, such as Python's built-in profiler, to measure the time spent in each method, see where the model is spending too much time, and then improve its efficiency. That should make the results better.
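For reference, Python's built-in profiler mentioned above is `cProfile` (with `pstats` for reporting). A minimal sketch of using it to find slow methods; `train_step` here is a hypothetical placeholder for whatever function in the model you actually want to profile:

```python
import cProfile
import io
import pstats

def train_step():
    # Hypothetical stand-in for one training iteration;
    # replace this with the model's real training call.
    total = 0
    for i in range(1000):
        total += i * i
    return total

# Profile the call and collect timing statistics.
profiler = cProfile.Profile()
profiler.enable()
train_step()
profiler.disable()

# Print the functions that consumed the most cumulative time.
stream = io.StringIO()
stats = pstats.Stats(profiler, stream=stream).sort_stats("cumulative")
stats.print_stats(10)  # top 10 entries
print(stream.getvalue())
```

The report lists each function's call count and cumulative time, which should make it clear where the training loop is spending most of its time.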
@khannabeela
@SignDiff Thank you for all the information you gave me. It helped me a lot. Yes, I was actually concerned about the batch size, and also some other configurations that were not working for me. So I changed them, and the model seems to be working fine now.
Good, as long as it's working now. On some machines the model may end up training too quickly, in which case you can increase the validation interval, i.e. the number of steps between validation runs.
@SignDiff Thank you for your work.
In your paper, you compared your results with the model from "Progressive Transformers for End-to-End Sign Language Production". If possible, could you please share the configuration of that model for the How2Sign dataset?