
pt: avoid set_default_dtype in tests #3303

Merged

Conversation

njzjz
Member

@njzjz njzjz commented Feb 19, 2024

Why it is bad:
The default dtype in the production code is still float32. When it is set to float64 during tests, the actual production behavior may not be properly tested (for example, if the production code misses a dtype argument for pt.zeros, these tests cannot catch it).

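To make the failure mode concrete, here is a minimal sketch (not part of the PR; `torch` is aliased as `pt` to match the description, and `GLOBAL_PT_FLOAT_PRECISION` is an illustrative stand-in for the project's precision constant):

```python
import torch as pt

# Illustrative stand-in for the project's precision constant (assumption,
# not the actual deepmd-kit code): high precision means float64.
GLOBAL_PT_FLOAT_PRECISION = pt.float64

def make_buffer(n):
    # Bug: dtype is missing, so the tensor silently uses torch's default
    # dtype (float32) instead of GLOBAL_PT_FLOAT_PRECISION.
    return pt.zeros(n)

pt.set_default_dtype(pt.float64)   # what the old tests did
print(make_buffer(3).dtype)        # torch.float64 -- the override hides the bug

pt.set_default_dtype(pt.float32)   # the real production default
print(make_buffer(3).dtype)        # torch.float32 -- the missing dtype is now visible
```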

Signed-off-by: Jinzhe Zeng <[email protected]>
@njzjz njzjz marked this pull request as ready for review February 19, 2024 21:57

codecov bot commented Feb 19, 2024

Codecov Report

All modified and coverable lines are covered by tests ✅

Comparison is base (ab35468) 75.05% compared to head (622ddff) 75.05%.

Additional details and impacted files
@@           Coverage Diff           @@
##            devel    #3303   +/-   ##
=======================================
  Coverage   75.05%   75.05%           
=======================================
  Files         396      396           
  Lines       33895    33895           
  Branches     1604     1604           
=======================================
  Hits        25441    25441           
  Misses       7593     7593           
  Partials      861      861           


@wanghan-iapcm
Collaborator

Why it is bad: The default dtype in the production code is still float32. When it is set to float64 during tests, the actual production behavior may not be properly tested (for example, if the production code misses a dtype argument for pt.zeros, these tests cannot catch it).

pt uses float32 only when DP_INTERFACE_PREC is set to low?

@njzjz
Member Author

njzjz commented Feb 20, 2024

pt uses float32 only when DP_INTERFACE_PREC is set to low?

When DP_INTERFACE_PREC is low, GLOBAL_PT_FLOAT_PRECISION is float32. Otherwise, GLOBAL_PT_FLOAT_PRECISION is float64. That's it.

If dtype is not set, a tensor created by torch.tensor, torch.ones, torch.zeros, etc. still uses float32, no matter what GLOBAL_PT_FLOAT_PRECISION is.
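A rough sketch of both points (an assumption-based reconstruction, not the actual deepmd-kit source; the "high"/"low" values of DP_INTERFACE_PREC follow the discussion above):

```python
import os
import torch

# Reconstruction of the mapping described above (illustrative only):
if os.environ.get("DP_INTERFACE_PREC", "high").lower() == "low":
    GLOBAL_PT_FLOAT_PRECISION = torch.float32
else:
    GLOBAL_PT_FLOAT_PRECISION = torch.float64

# Regardless of that constant, tensor factories without an explicit dtype
# fall back to torch's default dtype, which is float32 out of the box:
print(torch.zeros(2).dtype)                                   # torch.float32
print(torch.zeros(2, dtype=GLOBAL_PT_FLOAT_PRECISION).dtype)  # float64 unless DP_INTERFACE_PREC=low
```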

@wanghan-iapcm wanghan-iapcm added this pull request to the merge queue Feb 20, 2024
Merged via the queue into deepmodeling:devel with commit ab2ed0e Feb 20, 2024
48 checks passed
@njzjz njzjz mentioned this pull request Apr 2, 2024