While comparing APFloat against berkeley-softfloat-3e I found a discrepancy in fusedMultiplyAdd in a particular corner case:
#include <cmath>
#include <iostream>
#include <bit>
#include <cstdint>

int main()
{
static_assert(sizeof(float) == 4);
auto a = 0.24999998f;
auto b = 2.3509885e-38f;
auto c = -1e-45f;
auto d = std::fmaf(a, b, c);
// Clang with optimizations folds d to 3ffffe, without optimizations 3fffff.
std::cout << std::hex << std::bit_cast<uint32_t>(d) << "\n";
}
Reproduction available at Compiler Explorer. This occurs for the NearestTiesToEven and NearestTiesToAway rounding modes. This issue was originally discovered by linking against LLVMSupport and using APFloat directly, but it also affects constant folding with the default rounding mode.
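For reference, a minimal sketch of the direct APFloat reproduction (assuming the usual llvm::APFloat API and linking against LLVMSupport; this is illustrative, not the exact original code):

#include "llvm/ADT/APFloat.h"
#include <iostream>

int main()
{
    // Sketch: operands taken from the fmaf reproduction above.
    llvm::APFloat a(0.24999998f);
    llvm::APFloat b(2.3509885e-38f);
    llvm::APFloat c(-1e-45f);
    // Computes a = a * b + c with a single rounding step.
    a.fusedMultiplyAdd(b, c, llvm::APFloat::rmNearestTiesToEven);
    // Print the raw single-precision bit pattern of the result.
    std::cout << std::hex << a.bitcastToAPInt().getZExtValue() << "\n";
}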
GCC and the native x86 FPU seem to agree with Clang without optimizations.
This is likely a case of incorrect rounding and is unrelated to #63895.
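For completeness, the berkeley-softfloat-3e side of the comparison can be checked with something like the following sketch (assuming softfloat.h can be included from C++ under extern "C" and the program is linked against the softfloat library; this is illustrative, not the original test harness):

#include <bit>
#include <cstdint>
#include <iostream>

extern "C" {
#include "softfloat.h"
}

int main()
{
    // berkeley-softfloat-3e operates on raw bit patterns via float32_t.
    softfloat_roundingMode = softfloat_round_near_even;
    float32_t a{std::bit_cast<uint32_t>(0.24999998f)};
    float32_t b{std::bit_cast<uint32_t>(2.3509885e-38f)};
    float32_t c{std::bit_cast<uint32_t>(-1e-45f)};
    float32_t d = f32_mulAdd(a, b, c);
    std::cout << std::hex << d.v << "\n";
}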
(cc @eddyb @beetrees )