Hi,
Currently, the comparison operators defined for `AbstractIrrational` vs. `AbstractFloat` are causing problems on GPUs.

The precision of an `AbstractIrrational` is currently matched by invoking `Float(x, RoundUp/Down)` by default:

julia/base/irrationals.jl, lines 93 to 104 at 6e2e6d0
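For context, the referenced lines define the comparisons roughly as follows (paraphrased from `base/irrationals.jl`, where they extend `Base.:<`; the exact contents at commit 6e2e6d0 may differ slightly):

```julia
# Paraphrased from base/irrationals.jl: each comparison rounds the
# irrational to the float's precision, in the direction that keeps
# the comparison exact despite the rounding.
<(x::AbstractIrrational, y::Float64) = Float64(x, RoundUp) <= y
<(x::Float64, y::AbstractIrrational) = x <= Float64(y, RoundDown)
<(x::AbstractIrrational, y::Float32) = Float32(x, RoundUp) <= y
<(x::Float32, y::AbstractIrrational) = x <= Float32(y, RoundDown)
<(x::AbstractIrrational, y::Float16) = Float32(x, RoundUp) <= y
<(x::Float16, y::AbstractIrrational) = x <= Float32(y, RoundDown)
<(x::AbstractIrrational, y::BigFloat) = setprecision(precision(y) + 32) do
    big(x) < y
end
<(x::BigFloat, y::AbstractIrrational) = setprecision(precision(x) + 32) do
    x < big(y)
end
```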
This internally calls `setprecision(BigFloat, p)`:

julia/base/irrationals.jl, lines 68 to 72 at 6e2e6d0
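That generic rounded conversion looks roughly like this (again paraphrased, not a verbatim quote of the commit; Base uses a fixed 256-bit working precision here):

```julia
# Paraphrased from base/irrationals.jl: the generic rounded conversion
# routes through a temporary 256-bit BigFloat, which is what pulls in
# the libmpfr dependency.
function (t::Type{T})(x::AbstractIrrational, r::RoundingMode) where T<:Union{Float32,Float64}
    setprecision(BigFloat, 256) do
        T(BigFloat(x), r)
    end
end
```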
And this depends on `libmpfr`, which is not supported on the GPU. This implementation has been causing problems downstream.
These issues shouldn't happen when a given `AbstractIrrational`'s conversion is defined statically by specializing `Float(BigFloat)`; see the illustration below.
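For illustration, a static specialization for `π` might look like the following. This is a sketch only, not code from Base; the bounds are valid for the comparisons but may be one ulp looser than the correctly rounded values:

```julia
# Sketch of a statically-defined rounded conversion: no BigFloat, no
# libmpfr. Float64(π) is the Float64 nearest to π, so stepping one ulp
# outward in each direction yields bounds that are safe for the
# RoundUp/RoundDown comparisons above.
Base.Float64(::Irrational{:π}, ::RoundingMode{:Up})   = nextfloat(Float64(π))
Base.Float64(::Irrational{:π}, ::RoundingMode{:Down}) = prevfloat(Float64(π))
```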
To fix this, we need to change the behavior of the comparison operators to check whether a specialization `Float(BigFloat)` exists, and fall back to dynamic precision adjustment only when it does not. A sketch of what that could look like follows.