Promote small integer types to single rather than double precision #278

Merged: rapids-bot merged 4 commits into rapidsai:branch-22.06 from grlee77:update-int-to-float-casting on May 20, 2022
Conversation
Commit messages:
- float32 operations are much more efficient on the GPU; still need to fix a handful of test failures
- …ted_float_dtype
- convert bool -> float32 as well
- force some internal computations in match_template to use float64 for accuracy
- uint8 image properties now computed in float32, so accuracy is reduced
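As a rough illustration of the behavior these commits describe (the function names below are placeholders for this sketch, not the actual cuCIM API): bool and small integer inputs are promoted to float32 before the floating-point math runs, while an accuracy-sensitive reduction such as the normalization inside template matching can still be forced to float64 internally.

```python
import cupy as cp

def smooth(image):
    # Placeholder filter: small integer and bool inputs are promoted to
    # float32 (rather than float64) before the floating-point math runs.
    img = image.astype(cp.float32, copy=False)
    return img * 0.5  # stand-in for the real filtering computation

def normalized_score(image, template):
    # Accuracy-sensitive reductions (as in match_template) can still be
    # carried out in float64 internally and cast back down afterwards.
    img = image.astype(cp.float64, copy=False)
    tmpl = template.astype(cp.float64, copy=False)
    score = (img * tmpl).sum() / cp.sqrt((img ** 2).sum() * (tmpl ** 2).sum())
    return score.astype(cp.float32)

x = cp.arange(16, dtype=cp.uint8).reshape(4, 4)
print(smooth(x).dtype)                              # float32
print(normalized_score(x, cp.ones_like(x)).dtype)   # float32
```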
grlee77 added the breaking (Introduces a breaking change) and performance (Performance improvement) labels on May 17, 2022.
grlee77 changed the title from "Promote small integer types to single rather than double precision for floating point computations" to "Promote small integer types to single rather than double precision" on May 17, 2022.
Just curious, are there any cases where half-precision floats would make sense for us?
gigony approved these changes on May 19, 2022:
Looks good to me!
jakirkham approved these changes on May 20, 2022:
@gpucibot merge

Thanks Greg! 🙏
rapids-bot pushed a commit that referenced this pull request on Jul 27, 2022:

Update of #278 for branch-22.08 with minor additional fixes. I have tested this locally and it seems to be operating as expected.
Authors: Gregory Lee (https://github.com/grlee77), https://github.com/aasthajh
Approvers: https://github.com/jakirkham
URL: #322
Labels
- breaking: Introduces a breaking change
- improvement: Improves an existing functionality
- performance: Performance improvement
This is a performance-related PR that will result in casting of small integer dtypes (8- and 16-bit) to 32-bit floats rather than the current use of 64-bit floats. The current behavior is consistent with scikit-image, although I have raised an issue there to potentially change to the behavior proposed here: scikit-image/scikit-image#6310.

The changes required are quite small, with the key one being the change to the `new_float_type` dict that gets used when promoting dtypes to a floating-point type. Most other changes are in the tests, where we sometimes have to bump up the tolerance when computations that were previously in double precision now get run in single precision instead. I marked this as breaking because the output of various floating-point functions for integer inputs may now be `cp.float32` in cases where it was previously `cp.float64`.
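For illustration, here is a minimal sketch of what such a promotion table and the code that consumes it could look like. The dict contents below and the helper name `promote_to_float` are assumptions made for this example based on the description above; the actual `new_float_type` mapping in cuCIM may differ.

```python
import cupy as cp

# Hypothetical promotion table in the spirit of this PR: bool and small
# integer dtypes map to float32, while 64-bit floats keep double precision.
# The contents of cuCIM's actual `new_float_type` dict may differ.
new_float_type = {
    cp.dtype(cp.bool_).char: cp.float32,
    cp.dtype(cp.uint8).char: cp.float32,
    cp.dtype(cp.int8).char: cp.float32,
    cp.dtype(cp.uint16).char: cp.float32,
    cp.dtype(cp.int16).char: cp.float32,
    cp.dtype(cp.float16).char: cp.float32,
    cp.dtype(cp.float32).char: cp.float32,
    cp.dtype(cp.float64).char: cp.float64,
}

def promote_to_float(image):
    """Cast ``image`` to the floating-point dtype chosen from the table.

    Hypothetical helper showing how such a dict is typically consumed;
    dtypes not listed (e.g. 32/64-bit integers) fall back to float64 here.
    """
    float_dtype = new_float_type.get(image.dtype.char, cp.float64)
    return image.astype(float_dtype, copy=False)

img = cp.zeros((4, 4), dtype=cp.uint8)
print(promote_to_float(img).dtype)   # float32 with this change (was float64)
```

Keying the table on dtype characters keeps the lookup cheap and makes the fallback for unlisted dtypes explicit, which is why only the table entries need to change to alter the promotion behavior.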