cmd/compile: optimize overhead from CPU feature detection #36351
Comments
One option would be to allow people to hoist the check manually and then elide the checks inside the loops.
This becomes even more noticeable when the loop is unrolled at all, because then you don't even have another branch between them. I did some testing to find out what the impact was, and because I was not awake enough at the time, I ended up with an implementation that had the untaken branches to call the library functions, but not the popcount instructions, and discovered that the cost of that, plus the cost of the popcount operation without the branches, is much smaller than the cost of the branches and the popcount instructions. I'm not sure why. But the net impact is that the cost of the branch before every popcount, even though it's obviously a completely predictable branch (and …
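For reference, a minimal benchmark sketch of the kind of loop under discussion (not the commenter's actual test); on amd64 each bits.OnesCount64 call is currently guarded by its own feature-flag branch:

```go
package popcnt_test

import (
	"math/bits"
	"testing"
)

var sink int

// Every bits.OnesCount64 call below compiles (on amd64) to a branch on the
// POPCNT feature flag plus either the POPCNT instruction or a call to the
// generic fallback, so the branch is re-executed on each loop iteration.
func BenchmarkOnesCount64(b *testing.B) {
	acc := 0
	for i := 0; i < b.N; i++ {
		acc += bits.OnesCount64(uint64(i))
	}
	sink = acc
}
```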
One issue with this is that …
The externally usable version of internal/cpu is golang.org/x/sys/cpu. But, regardless, we should not expect users to manually hoist a CPU-specific flag.
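For completeness, a minimal sketch of querying a feature flag from user code via golang.org/x/sys/cpu; the program structure is illustrative, and the field name cpu.X86.HasPOPCNT is taken from that package's x86 support:

```go
package main

import (
	"fmt"

	"golang.org/x/sys/cpu"
)

func main() {
	// One-time query of the feature flag; user code could branch on this
	// once rather than relying on a per-call check.
	if cpu.X86.HasPOPCNT {
		fmt.Println("POPCNT available")
	} else {
		fmt.Println("POPCNT unavailable; a software fallback would be needed")
	}
}
```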
This is fixed by fff7509.
Change https://golang.org/cl/227238 mentions this issue: …
I tried CL 227238. I'm using AMD Ryzen 5 3500U.
In the commit message of CL 212360, I wrote:

> This new intrinsic ... generates MOVB+TESTB+NE.
> (It is possible that MOVBQZX+TESTQ+NE would be better.)

I should have tested. MOVBQZX+TESTQ+NE does in fact appear to be better. For the benchmark in #36196, on my machine:

name      old time/op  new time/op  delta
FMA-8     0.86ns ± 6%  0.70ns ± 5%  -18.79%  (p=0.000 n=98+97)
NonFMA-8  0.61ns ± 5%  0.60ns ± 4%   -0.74%  (p=0.001 n=100+97)

Interestingly, these are both considerably faster than the measurements I took a couple of months ago (1.4ns/2ns). It appears that CL 219131 (clearing VZEROUPPER in asyncPreempt) helped a lot. And FMA is now once again slower than NonFMA, although this change helps it regain some ground.

Updates #15808
Updates #36351
Updates #36196

Change-Id: I8a326289a963b1939aaa7eaa2fab2ec536467c7d
Reviewed-on: https://go-review.googlesource.com/c/go/+/227238
Run-TryBot: Josh Bleecher Snyder <[email protected]>
TryBot-Result: Gobot Gobot <[email protected]>
Reviewed-by: Keith Randall <[email protected]>
As investigated in #36196, the overhead of checking for hardware FMA on every iteration of a loop causes it to slow down. @josharian's CL 212360, which introduces a HasCPUFeature intrinsic, somewhat alleviates this overhead, but it is still non-negligible. We should look into lowering, or in some cases eliminating, the overhead for operations that require CPU feature detection, such as population count, FMA, rounding, SSE3, etc.

One method is to hoist the check outside the loop, as suggested in #15808 (comment).
For large loops and operations that permit > 2 implementations, the above optimization could result in inflated binaries, but it works well for smaller loops.
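As a rough illustration of manual hoisting (the quoted #15808 comment is not reproduced above), here is a minimal sketch; popcountSW is a hypothetical software fallback and the feature flag comes from golang.org/x/sys/cpu:

```go
package popcount

import (
	"math/bits"

	"golang.org/x/sys/cpu"
)

// popcountSW is a hypothetical portable fallback.
func popcountSW(x uint64) int {
	n := 0
	for x != 0 {
		x &= x - 1
		n++
	}
	return n
}

// SumOnesCounts hoists the feature check: the flag is tested once, outside
// the loop, so each loop body runs without a per-iteration check. Duplicating
// the loop per implementation is what can inflate binaries when many
// features or more than two implementations are involved.
func SumOnesCounts(xs []uint64) int {
	total := 0
	if cpu.X86.HasPOPCNT {
		for _, x := range xs {
			// Ideally the compiler would elide the per-call check here,
			// since the branch above already established the feature.
			total += bits.OnesCount64(x)
		}
	} else {
		for _, x := range xs {
			total += popcountSW(x)
		}
	}
	return total
}
```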
Another method is to set a function pointer to the preferred implementation at program initialization, so that every invocation incurs an indirect call rather than a per-call feature check, with the benefit that the selected implementation cannot change at runtime. This would be akin to the dispatcher in GCC's function multi-versioning.
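A minimal sketch of that init-time dispatch, using a hypothetical popcount helper; the names here are illustrative, not the compiler's actual mechanism:

```go
package popcount

import (
	"math/bits"

	"golang.org/x/sys/cpu"
)

// onesCount64 is bound once at program initialization; every call site then
// pays an indirect call instead of a per-call feature-flag branch.
var onesCount64 func(uint64) int

func init() {
	if cpu.X86.HasPOPCNT {
		// Hardware-backed path (today this call still carries its own
		// internal check; the point here is only the dispatch pattern).
		onesCount64 = bits.OnesCount64
	} else {
		onesCount64 = softwareOnesCount64
	}
}

// softwareOnesCount64 is a hypothetical portable fallback.
func softwareOnesCount64(x uint64) int {
	n := 0
	for x != 0 {
		x &= x - 1
		n++
	}
	return n
}
```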
It is worth further investigating opportunities for optimization in this space.