
Promote small integer types to single rather than double precision #278

Merged: 4 commits into rapidsai:branch-22.06 on May 20, 2022

Conversation

@grlee77 (Contributor) commented on May 17, 2022:

This is a performance-related PR that casts small integer dtypes (8- and 16-bit) to 32-bit floats rather than the 64-bit floats used currently. The current behavior is consistent with scikit-image, although I have raised an issue there to potentially change to the behavior proposed here: scikit-image/scikit-image#6310.

The changes required are quite small, the key one being the change to the new_float_type dict that is used when promoting dtypes to a floating-point type. Most other changes are in the tests, where we sometimes have to bump up the tolerance because computations that previously ran in double precision now run in single precision.

I marked this as breaking because the output of various floating-point functions for integer inputs may now be cp.float32 in cases where it was previously cp.float64.
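
For illustration, here is a minimal sketch of the promotion rule this PR describes, centered on a `new_float_type`-style lookup table keyed by dtype character. The dict contents and the `promoted_float_dtype` helper name are illustrative, not the exact cuCIM internals.

```python
import cupy as cp

# Illustrative promotion table in the spirit of the new_float_type dict
# mentioned above (not the exact contents used by cuCIM).
new_float_type = {
    # floating/complex types are preserved
    cp.dtype(cp.float32).char: cp.float32,
    cp.dtype(cp.float64).char: cp.float64,
    cp.dtype(cp.complex64).char: cp.complex64,
    cp.dtype(cp.complex128).char: cp.complex128,
    # bool, float16, and 8/16-bit integers promote to float32
    # (previously these integer types promoted to float64)
    cp.dtype(cp.bool_).char: cp.float32,
    cp.dtype(cp.float16).char: cp.float32,
    cp.dtype(cp.int8).char: cp.float32,
    cp.dtype(cp.uint8).char: cp.float32,
    cp.dtype(cp.int16).char: cp.float32,
    cp.dtype(cp.uint16).char: cp.float32,
}


def promoted_float_dtype(dtype):
    """Hypothetical helper: the float dtype an array of ``dtype`` is promoted to.

    Anything not in the table (32/64-bit integers, etc.) falls back to float64.
    """
    return cp.dtype(new_float_type.get(cp.dtype(dtype).char, cp.float64))


print(promoted_float_dtype(cp.uint8))  # float32 under this PR (was float64)
print(promoted_float_dtype(cp.int64))  # float64, unchanged
```

In downstream terms, a floating-point function applied to a `uint8` or `int16` image would now return a `float32` result where it previously returned `float64`, which is why the PR carries the breaking label.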

grlee77 added 4 commits on May 17, 2022:

- float32 operations are much more efficient on the GPU
- still need to fix a handful of test failures (…ted_float_dtype)
- convert bool -> float32 as well
- force some internal computations in match_template to use float64 for accuracy; uint8 image properties are now computed in float32, so accuracy is reduced
@grlee77 grlee77 added breaking Introduces a breaking change performance Performance improvement labels May 17, 2022
@grlee77 grlee77 requested a review from a team as a code owner May 17, 2022 18:32
@grlee77 grlee77 changed the title Promote small integer types to single rather than double precision for floating point computations Promote small integer types to single rather than double precision May 17, 2022
@grlee77 grlee77 added the improvement Improves an existing functionality label May 17, 2022
@jakirkham (Member) commented:

Just curious, are there any cases where half-precision floats would make sense for us?

@gigony gigony added this to the v22.06.00 milestone May 17, 2022
@gigony (Contributor) left a review comment:


Looks good to me!

@jakirkham (Member) commented:

@gpucibot merge

@rapids-bot rapids-bot bot merged commit caeaa9d into rapidsai:branch-22.06 May 20, 2022
@jakirkham (Member) commented:

Thanks Greg! 🙏

rapids-bot pushed a commit that referenced this pull request on Jul 27, 2022:
Update of #278 for branch-22.08 with minor additional fixes. I have tested this locally and it seems to be operating as expected.

Authors:
  - Gregory Lee (https://github.com/grlee77)
  - https://github.com/aasthajh

Approvers:
  - https://github.com/jakirkham

URL: #322
Labels: breaking (Introduces a breaking change), improvement (Improves an existing functionality), performance (Performance improvement)
3 participants