To ease the decision on specific `keepbits` for a simulation, it would be great to include some metrics in this package that quantify the differences between bit-rounded and original values. The simplest being:

- mean
- standard deviation

These could also be enhanced by plotting routines that loop over possible `keepbits` (a rough sketch of such a loop follows below).
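A minimal sketch of such a loop, assuming xarray plus the `xr_bitround` helper as shown in the xbitinfo README (the example dataset and the faceted plotting call are illustrative, not a proposed API):

```python
import xarray as xr
import xbitinfo as xb  # xr_bitround usage as in the xbitinfo README

ds = xr.tutorial.load_dataset("air_temperature")  # any example dataset

results = []
for keepbits in range(1, 23):
    err = xb.xr_bitround(ds, keepbits) - ds  # rounding error at this precision
    metrics = xr.concat([err.mean(), err.std()], dim="metric")
    results.append(metrics.assign_coords(metric=["mean", "std"], keepbits=keepbits))
summary = xr.concat(results, dim="keepbits")

# one line per metric, faceted over the new "metric" dimension
summary["air"].plot.line(x="keepbits", col="metric")
```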
In general, I am not sure whether we can provide a general solution here. For simple metrics such as mean and std, the user can easily do this in xarray. Maybe what I lack is a short API proposal, something like `compare(ori, rounded, metrics=[]) -> xr.plot(col="metric", row=?)`. I think this will be hard to generalize.
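To make the proposal concrete, here is a rough sketch of what such a `compare` could look like (names and behaviour are purely illustrative, not an existing xbitinfo function):

```python
import xarray as xr

def compare(ori, rounded, metrics=("mean", "std")):
    """Reduce the rounding error with each named metric and stack the
    results along a new 'metric' dimension."""
    err = rounded - ori
    reduced = [getattr(err, m)() for m in metrics]  # e.g. err.mean(), err.std()
    return xr.concat(reduced, dim="metric").assign_coords(metric=list(metrics))

# looped over keepbits and concatenated along a "keepbits" dimension, the
# result could then be faceted with .plot(col="metric") as suggested above
```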
I'm not sure I managed to get my point across: if you approach the compression of fgco2 with the prior knowledge that your error should not be higher than 1%, then you don't need the bitinformation algorithm. You can directly infer that for a max 1% error you'll need 6 mantissa bits.
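The arithmetic behind that inference: round-to-nearest with m explicit mantissa bits bounds the relative rounding error by 2^-(m+1), so a 1% tolerance needs m >= log2(1/0.01) - 1 ≈ 5.64, i.e. 6 bits. As a hypothetical one-line helper:

```python
import math

def keepbits_for_max_rel_error(tol):
    # round-to-nearest with m mantissa bits has relative error <= 2**-(m + 1)
    return math.ceil(math.log2(1.0 / tol) - 1)

keepbits_for_max_rel_error(0.01)  # -> 6
```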
Note that both mean and standard deviation are not particularly insightful metrics for assessing bitrounding. Following the IEEE-754 standard, bitrounding is bias-free with respect to 0 and also for absolute values. That said, this holds for any data as long as the values are approximately uniformly distributed within an ulp (ulp = unit in the last place, the distance between two adjacent representable numbers after rounding). If, for some weird reason, all your data sits just above ulp/2, there will be a bias because every value is rounded up. But in practice, a bias-free rounding mode will not affect the mean.

A similar argument holds for the standard deviation: as long as it is much larger than the ulp, rounding will not affect it.
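A quick numerical check of both claims, using a hand-rolled float32 bitrounding via integer masking with round-to-nearest, ties-to-even (a sketch for illustration, not necessarily this package's implementation):

```python
import numpy as np

def bitround(x, keepbits):
    """Round float32 mantissa to `keepbits` bits (nearest, ties to even).
    Assumes 0 < keepbits < 23; overflow near float32 max is not handled."""
    ui = np.asarray(x, dtype=np.float32).view(np.uint32)
    drop = 23 - keepbits                                 # mantissa bits to discard
    half = np.uint32(1 << (drop - 1))                    # 0.5 ulp at the kept precision
    mask = np.uint32((0xFFFFFFFF << drop) & 0xFFFFFFFF)  # zeroes the dropped bits
    tie = (ui >> np.uint32(drop)) & np.uint32(1)         # kept LSB breaks ties to even
    return ((ui + half - np.uint32(1) + tie) & mask).view(np.float32)

rng = np.random.default_rng(0)
data = rng.normal(loc=1.0, scale=0.1, size=1_000_000).astype(np.float32)
for kb in (3, 6, 9):
    r = bitround(data, kb)
    print(kb, r.mean() - data.mean(), r.std() - data.std())
# the mean difference stays tiny at every precision; the std is only
# distorted while it is comparable to the ulp (small keepbits) and is
# preserved once std >> ulp
```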