
Some math/flonum functions have much larger error than expected #38

Open
samth opened this issue Apr 10, 2020 · 2 comments

samth commented Apr 10, 2020

Random testing (see #35) discovered a number of cases where functions such as fl2+ and fl2- have substantial error (32 ulps) and functions such as fl2exp, flexp/error, and fl2expm1 have very large error (4e15 ulps, for example).

The documentation says (quoting a few different places):

For arithmetic, error is less than 8 ulps. For fl2exp and fl2expm1, error is less than 3 ulps. For flexp/error and flexpm1/error, the largest observed error is 3 ulps.
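The ulp measurements below can be checked outside Racket. Here is a minimal Python sketch (not the fltest harness itself; `ulp_error` is a hypothetical helper) that measures error in units in the last place of the float nearest the exact value, using exact rational arithmetic as the reference:

```python
import math
from fractions import Fraction

def ulp_error(approx: float, exact: Fraction) -> float:
    """Distance from `approx` to `exact`, in ulps of the float nearest `exact`."""
    nearest = float(exact)  # correctly rounded double reference
    return float(abs(Fraction(approx) - exact) / Fraction(math.ulp(nearest)))

# First fl2+ example below: each fl2 operand is an exact (hi, lo) pair of
# flonums, so the true sum is the exact rational sum of all four.
hi1, lo1 = -6.999886226206346e+45, 5.792628225761353e+29
hi2, lo2 = 6.9274150349997e+45, 5.2105937810748895e+29
exact = Fraction(hi1) + Fraction(lo1) + Fraction(hi2) + Fraction(lo2)

print(ulp_error(hi1 + hi2, exact))  # flonum-only sum: ≈ 111.1 ulps, as reported
```

The flonum-only figure matches the "error when just using floats" line of the first example.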

Here are some examples:

(fl2+ -6.999886226206346e+45 5.792628225761353e+29 6.9274150349997e+45 5.2105937810748895e+29)                 => -7.247119120664389267701608263587e+43
(bf+ (bf -6999886226206345021638066463653546405883019264) (bf 6927415034999701128961050381017604717751042048)) => -7.247119120664389267701608263594e+43
(fl+ -6.999886226206346e+45 6.9274150349997e+45)                                                               => -7.247119120664499299921676626019e+43
error is 32.0 ulps (relative 9.70989202827024208564e-31, absolute 7.0368744177664e+13) with precision 500
error when just using floats is 111.10414941005564 ulps (relative 1.5182891054545697262e-14, absolute 1.10032220068362431535e+30) with precision 500

(fl2- -6.307625868783824e-189 -7.650829856531877e-206 -6.221347182541859e-189 3.486864837367133e-205)               => -8.627868624196532783073584510346e-191
(bf- (bf "-6.307625868783823708767532620142220868851e-189") (bf "-6.221347182541858380936796775038726932571e-189")) => -8.627868624196532783073584510349e-191
(fl- -6.307625868783824e-189 -6.221347182541859e-189)                                                               => -8.627868624196490263595354307142e-191
error is 12.0 ulps (relative 3.84769743523996817173e-31, absolute 3.3197427976908391985e-221) with precision 128                                    
error when just using floats is 34.12753813992254 ulps (relative 4.92815550192302754348e-15, absolute 4.25194782302032044546e-205) with precision 128
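For context on where a multi-ulp discrepancy in double-double addition can come from, here is a minimal Python sketch built from Knuth's error-free TwoSum transformation. `dd_add` is the cheap "sloppy" variant of double-double addition, which loses accuracy under heavy cancellation; it is shown only as an illustration and is not necessarily the algorithm math/flonum uses:

```python
from fractions import Fraction

def two_sum(a: float, b: float) -> tuple[float, float]:
    """Knuth's TwoSum: returns (s, e) with s = fl(a + b) and s + e == a + b exactly."""
    s = a + b
    bb = s - a
    e = (a - (s - bb)) + (b - bb)
    return s, e

def dd_add(xh: float, xl: float, yh: float, yl: float) -> tuple[float, float]:
    """Sloppy double-double addition: the low-order parts are folded in with
    ordinary rounding, so cancellation in xh + yh magnifies their error."""
    s, e = two_sum(xh, yh)
    e = e + (xl + yl)
    return two_sum(s, e)  # renormalize into a (hi, lo) pair

# TwoSum is exact: the pair (s, e) represents a + b with no error at all.
s, e = two_sum(1e16, 1.0)
assert Fraction(s) + Fraction(e) == Fraction(10**16) + 1
```

When the high parts nearly cancel, the surviving result is dominated by the `xl + yl` term, whose own rounding is what surfaces as tens to thousands of ulps in the examples above.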
 
(flexp/error -247.29245235328085)                         => 4.001773708963990941924920379399e-108
(bfexp (bf #e-247.2924523532808507297886535525321960449)) => 4.001773708963991419271664949602e-108
(flexp -247.29245235328085)                               => 4.001773708963991129369240222464e-108
error is 2842247977943596.5 ulps (relative 1.19283792459564869331e-16, absolute 4.77346744570203899599e-124) with precision 128
error when just using floats is 0.383283456465281 ulps (relative 7.24434827680924324972e-17, absolute 2.8990242472713825709e-124) with precision 128

(fl2exp 249.9020736185187 6.819710735232684e-15)         => 3.39696903982644185464484379844e+108
(bfexp (bf #e249.9020736185187058693838948394391650218)) => 3.396969039826441449441493860216e+108
(flexp 249.9020736185187)                                => 3.396969039826418533426604116558e+108
error is 3499404429155425.5 ulps (relative 1.19283792459564860048e-16, absolute 4.05203349938224559331e+92) with precision 128
error when just using floats is 43.944085525744526 ulps (relative 6.7460181771084045627e-15, absolute 2.29160148897436568801e+94) with precision 128

(fl2expm1 253.6199382443308 1.0330029387260226e-14)      => 1.398748649286222641619642494376e+110
(bfexpm1 (bf #e253.61993824433080713605470606162883303)) => 1.398748649286222474771598909821e+110
(flexpm1 253.6199382443308)                              => 1.39874864928620803800496605084e+110
error is 4502899460887364.0 ulps (relative 1.19283792459564859339e-16, absolute 1.66848043584554436406e+94) with precision 128
error when just using floats is 86.51298429857945 ulps (relative 1.03212014826438060794e-14, absolute 1.44367666328589814078e+96) with precision 128
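A pattern worth noting across the three exponential failures: the relative error is ≈ 1.19e-16 every time, i.e. about one ulp of a plain flonum (2^-53). Assuming the ulp counts are taken at double-double precision (≈ 2^-105 relative), a one-double-ulp error shows up as roughly 10^15 fl2 ulps, which is the order of magnitude reported above. A quick back-of-the-envelope check under that assumption:

```python
rel = 1.19283792459564869331e-16  # relative error reported for flexp/error

print(rel / 2**-53)   # ≈ 1.07: about one ulp of a plain flonum
print(rel / 2**-105)  # ≈ 4.8e15: the order of the fl2 ulp counts above
```

This suggests the lo component of these fl2 results is contributing no accuracy beyond the hi flonum.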

These examples come from the samth/fltest program, which reproduces them with the output shown above.

samth commented Apr 10, 2020

In #37 I propose changing the random testing to allow these errors: up to 32 ulps for double-double arithmetic, and up to 1e20 ulps for the exponential functions that return double-doubles.

samth commented May 11, 2020

Here is an example where fl2+ has much worse error (2048 ulps):

(fl2+ 5.754365118051034e+54 2.8307141643929314e+38 -5.753780528274009e+54 1.6423048070836051e+38)              => 5.845897770256615269301622678206e+50
(bf+ (bf #e5.754365118051034256255201488574884919768e54) (bf #e-5.753780528274008594728271326307026578003e54)) => 5.845897770256615269301622678583e+50
(fl+ 5.754365118051034e+54 -5.753780528274009e+54)                                                             => 5.845897770252142250330146142053e+50
error is 2048.0 ulps (relative 6.46246878540587163601e-29, absolute 3.77789318629571617096e+22) with precision 128
error when just using floats is 5384.200736861785 ulps (relative 7.65155181849884757515e-13, absolute 4.47301897147653615284e+38) with precision 128
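The operands here cancel almost completely, which is presumably what stresses fl2+: about 13 leading bits vanish in the subtraction, so any rounding in how the low-order terms are combined is magnified by a factor of roughly 2^13. A quick Python check of the amount of cancellation:

```python
import math

hi1 = 5.754365118051034e+54
hi2 = -5.753780528274009e+54

# Bits of cancellation: how much smaller the sum is than the operands.
cancelled_bits = math.log2(abs(hi1) / abs(hi1 + hi2))
print(round(cancelled_bits, 1))  # ≈ 13.3 bits cancel
```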
