fix tests float16 module losses #1809
Conversation
Progress

Fixed all errors when calling `pytest test/losses -v --dtype=float16 --device=cuda`; most were caused by using the Python `float` type in `assert_close` instead of `torch.float`. I am still thinking about using `BaseTester` and the mechanism that chooses the tolerance.
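A minimal sketch of the dtype pitfall described here (assuming a plain torch-based test; the actual call sites in the PR differ): dtype checks and casts in the tests should use torch dtypes such as `torch.float16` / `torch.float`, not Python's built-in `float`.

```python
import torch
from torch.testing import assert_close

x = torch.ones(3, dtype=torch.float16)
y = x.clone()

# Compare against torch dtypes, not Python's built-in `float`.
assert x.dtype == torch.float16
# assert_close picks dtype-appropriate default tolerances.
assert_close(x, y)
# Or upcast explicitly before comparing.
assert_close(x.float(), y.float())
```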
Hi @edgarriba, sorry for the late reply, I was sick. I researched the different error cases for float16 tensors and there are some variations:
I made sure all tests pass, but some tests required a lower tolerance value.
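The dtype-dependent tolerance mechanism mentioned here could look like the following pure-Python sketch (the table values and names are illustrative assumptions, not kornia's actual defaults):

```python
# Hypothetical per-dtype tolerance table; float16 needs looser bounds.
DTYPE_TOL = {
    "float16": {"rtol": 1e-3, "atol": 1e-3},
    "float32": {"rtol": 1.3e-6, "atol": 1e-5},
    "float64": {"rtol": 1e-7, "atol": 1e-8},
}

def tolerances_for(dtype_name: str) -> dict:
    """Pick rtol/atol for assert_close based on the dtype under test."""
    # Fall back to float32 tolerances for unknown dtypes.
    return DTYPE_TOL.get(dtype_name, DTYPE_TOL["float32"])
```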
This is an amazing improvement. I'm missing just one small thing: somehow mark the tests we want to include in CI, for example to already include the testing of the functionality that already supports half precision. Possibly improve some of the logic here: https://github.com/kornia/kornia/blob/master/conftest.py
We can merge this PR and improve it later.
Co-authored-by: Edgar Riba <[email protected]>
@edgarriba I am very pleased to hear your feedback! :)
@edgarriba I've done everything I wanted in this PR :) It can be merged from my side. I couldn't come up with a way to mark all the ready tests, but I added functionality to skip the tests that do not yet work for half precision. I believe we can run CI tests by module. I have verified that all the modules I changed work with half precision, and I think I can fix the rest of the modules in my following PRs :)

- Augmentation
- Color
- Enhance
- Losses
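A minimal sketch of the skip mechanism described above (the marker and option names are assumptions; in kornia this would be wired into `conftest.py` via `pytest_collection_modifyitems` and `pytest.mark.skip`):

```python
# Hypothetical skip decision: tests tagged "no_float16" are skipped when the
# suite runs with --dtype=float16; everything else runs normally.
def skip_reason(marker_names, dtype):
    """Return a skip reason string, or None if the test should run."""
    if dtype == "float16" and "no_float16" in marker_names:
        return "not yet supported for half precision"
    return None
```

A collection hook would then call `skip_reason` for each collected item and attach a skip marker when it returns a reason.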
I see failures in the checks :(
Head branch was pushed to by a user without write access
@edgarriba Could you approve running the workflows for the tests?
Changes
Fixes #1805