This issue was moved to a discussion.

Evaluating bitrounded results with metrics? #49

Closed · 2 tasks

observingClouds opened this issue Apr 11, 2022 · 3 comments

Labels: documentation (Improvements or additions to documentation), question (Further information is requested)

Comments

@observingClouds (Owner) commented Apr 11, 2022

To ease the decision on specific keepbits for a simulation, it would be great to include in this package some metrics that quantify the differences between bit-rounded and original values. The simplest would be:

  • mean
  • standard deviation

These could also be enhanced by plotting routines that loop over possible keepbits.
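A minimal sketch of such a loop over possible keepbits, using a hand-rolled float32 bitrounding helper (the real implementations live in xbitinfo/BitInformation.jl; this numpy version rounds half-up rather than IEEE ties-to-even, so it is only an approximation for illustration):

```python
import numpy as np

def bitround(x, keepbits):
    """Round float32 values, keeping `keepbits` explicit mantissa bits.

    Approximate numpy sketch: rounds half-up, not ties-to-even.
    """
    x = np.asarray(x, dtype=np.float32)
    bits = x.view(np.uint32)
    drop = 23 - keepbits                       # float32 has 23 explicit mantissa bits
    mask = np.uint32((0xFFFFFFFF >> drop) << drop)
    half = np.uint32(1 << (drop - 1))          # half an ulp, for round-to-nearest
    return ((bits + half) & mask).view(np.float32)

rng = np.random.default_rng(0)
data = rng.normal(loc=10.0, scale=2.0, size=100_000).astype(np.float32)

# Loop over candidate keepbits and record the simplest error metrics.
for keepbits in (2, 4, 6, 8):
    rounded = bitround(data, keepbits)
    print(keepbits,
          abs(float(rounded.mean()) - float(data.mean())),  # bias of the mean
          abs(float(rounded.std()) - float(data.std())))    # change in the std
```

A plotting routine would then just collect these values per keepbits and variable and draw them against keepbits.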

@aaronspring (Collaborator) commented:

Do you mean temporal metrics, e.g. a temporal mean?

I would go for the normalized error, (ori - bitrounded)/ori, and the error normalized by the temporal standard deviation, (ori - bitrounded)/ori.std("time").

We could use metrics from https://xskillscore.readthedocs.io/en/stable/api.html#distance-metrics

In general, I am not sure whether we can provide a general solution here. For simple metrics such as mean and std, the user can easily compute them in xarray. Maybe what I lack is a short API proposal, something like compare(ori, rounded, metrics=[]) returning an object suitable for xr.plot(col="metric", row="?"). I think this will be hard to generalize.
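A rough sketch of what the proposed compare helper could look like (the signature follows the proposal above, but everything else is hypothetical; plain numpy stands in for xarray/xskillscore to keep the example self-contained):

```python
import numpy as np

# Hypothetical metric registry; a real implementation might wrap xskillscore.
METRICS = {
    "mean_error": lambda ori, rnd: float(np.mean(rnd - ori)),
    "rmse": lambda ori, rnd: float(np.sqrt(np.mean((rnd - ori) ** 2))),
    "normalized_error": lambda ori, rnd: float(np.mean((ori - rnd) / ori)),
}

def compare(ori, rounded, metrics=("mean_error", "rmse")):
    """Return {metric_name: value} comparing original and bitrounded data."""
    ori, rounded = np.asarray(ori), np.asarray(rounded)
    return {m: METRICS[m](ori, rounded) for m in metrics}

ori = np.array([1.0, 2.0, 3.0, 4.0])
rounded = ori + 0.01                   # stand-in for actually bitrounded values
print(compare(ori, rounded, metrics=["mean_error", "rmse"]))
```

With xarray objects the same pattern could return a Dataset with one variable per metric, which is what a faceted xr.plot(col="metric") would need.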

see also milankl/BitInformation.jl#25 (comment)

I'm not sure I managed to get my point across: if you approach the compression of fgco2 with the prior knowledge that your error should not exceed 1%, then you don't need the bitinformation algorithm. You can directly infer that a maximum 1% error requires 6 mantissa bits.
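The arithmetic behind that inference: with m explicit mantissa bits, round-to-nearest gives a worst-case relative error of half an ulp, i.e. 2^-(m+1), and m = 6 is the smallest value that stays at or below 1%:

```python
# Smallest mantissa-bit count m whose worst-case relative rounding error,
# half an ulp = 2**-(m + 1), stays at or below 1%.
m = next(m for m in range(1, 24) if 2 ** -(m + 1) <= 0.01)
print(m)  # 6: 2**-7 = 1/128 ≈ 0.78% <= 1%, while 2**-6 ≈ 1.56% is too large
```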

@aaronspring (Collaborator) commented:

Let's definitely have a notebook on how to deal with this, or even integrate just one plot into the quick-start.

@aaronspring aaronspring added documentation Improvements or additions to documentation question Further information is requested labels Apr 12, 2022
@milankl (Collaborator) commented Apr 25, 2022

Note that both mean and standard deviation are not particularly insightful metrics for assessing bitrounding. Following the IEEE-754 standard, bitrounding is bias-free with respect to 0 and also for absolute values. This holds for any data, as long as the assumption of a uniform distribution within the ulp is true (ulp = unit in the last place, the distance between two adjacent representable numbers after rounding). If, for some weird reason, all your data were just larger than ulp/2, there would be a bias, as every value would be rounded up. But in practice, a bias-free rounding mode will not affect the mean.

A similar argument holds for the standard deviation: as long as it is much larger than the ulp, rounding will not affect it.
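A quick numerical check of this argument, quantizing to a fixed ulp with numpy's round-to-nearest-even as a stand-in for mantissa bitrounding (the ulp value and distribution here are arbitrary choices for illustration):

```python
import numpy as np

rng = np.random.default_rng(42)
data = rng.normal(loc=5.0, scale=1.0, size=1_000_000)

ulp = 2.0 ** -6                        # assumed spacing of representable values
rounded = np.round(data / ulp) * ulp   # round-to-nearest-even quantization

print(abs(rounded.mean() - data.mean()))  # bias of the mean: tiny vs. the ulp
print(abs(rounded.std() - data.std()))    # std >> ulp, so it barely changes
```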

@aaronspring aaronspring changed the title Include metrics for bit rounding evaluation Evaluating bitrounded results with metrics? Apr 25, 2022
Repository owner locked and limited conversation to collaborators Apr 25, 2022
@aaronspring aaronspring converted this issue into discussion #74 Apr 25, 2022
