Retry OMP multithreading in cudacpp (and prototype custom multithreading, and compare to MP) - suboptimal results in ggttgg (Dec 2022) #575
The idea is essentially the following:
For instance on a 4-core machine with AVX2
In particular this should be tested against pmpe04 or another node with 30+ cores. See the previous suboptimal results in #196.
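For illustration, here is a minimal sketch of the OMP-over-event-pages idea, assuming a hypothetical per-page kernel; the names `neppV`, `nEvtPages`, `computeEventPage` and `computeAllPages` are placeholders, not the actual cudacpp API:

```cpp
// Minimal sketch of OpenMP multithreading over independent SIMD event pages.
// All names here (neppV, computeEventPage, computeAllPages) are hypothetical
// placeholders for illustration, not the actual cudacpp code.

constexpr int neppV = 4; // hypothetical SIMD page size, e.g. 4 doubles with AVX2

// Hypothetical single-page kernel: fills the matrix elements of one SIMD page.
inline void computeEventPage( const double* momenta, double* matrixElements, int ipagV )
{
  for( int i = 0; i < neppV; ++i ) matrixElements[ipagV * neppV + i] = 0; // real ME computation would go here
  (void)momenta;
}

// Pages are independent, so each OMP thread can process a disjoint subset of
// pages with no synchronisation inside the loop.
void computeAllPages( const double* momenta, double* matrixElements, int nEvtPages )
{
#pragma omp parallel for
  for( int ipagV = 0; ipagV < nEvtPages; ++ipagV )
    computeEventPage( momenta, matrixElements, ipagV );
}
```

On a 4-core AVX2 machine one would then hope for up to roughly 4x the single-thread throughput, with each thread still exploiting the 4-wide double SIMD within its pages.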
Rather than opening a new issue, I will add a few ideas here. OMP is one solution for multithreading (MT) in cudacpp, but custom multithreading is another possibility. What I am thinking of is the following
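As one possible shape of such a custom multithreading approach (a hedged sketch only, with all names hypothetical, and with no claim that this is exactly the plan described above), the event pages could be split statically across `std::thread` workers:

```cpp
// Hypothetical sketch of "custom" multithreading without OpenMP: split the
// event pages into one contiguous chunk per std::thread. Function and buffer
// names are placeholders, not the actual cudacpp API.
#include <thread>
#include <vector>

// Same hypothetical per-page kernel as in the previous sketch (stubbed here).
inline void computeEventPage( const double* momenta, double* matrixElements, int ipagV )
{
  matrixElements[ipagV] = 0; (void)momenta; // real per-page ME computation would go here
}

void computeAllPagesCustomMT( const double* momenta, double* matrixElements, int nEvtPages, int nThreads )
{
  std::vector<std::thread> workers;
  workers.reserve( nThreads );
  for( int it = 0; it < nThreads; ++it )
  {
    // Static partitioning: thread 'it' processes pages [first, last).
    const int first = nEvtPages * it / nThreads;
    const int last = nEvtPages * ( it + 1 ) / nThreads;
    workers.emplace_back( [=]() {
      for( int ipagV = first; ipagV < last; ++ipagV )
        computeEventPage( momenta, matrixElements, ipagV );
    } );
  }
  for( auto& w : workers ) w.join(); // wait for all chunks to complete
}
```

Compared to OMP this gives full control over thread creation, pinning and scheduling, at the cost of writing and maintaining that logic by hand.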
I reenabled OMP MT and did a few tests. It works, but I still get suboptimal results. I will follow up here with ggttgg on the previous results in #196 for eemumu (and I will close that ticket). My observations:
Things to do
Anyway, below are the numbers, on pmpe04 (16 physical cores with AVX2, 2xHT so 32 maximum threads). There is no CUDA, so this was built essentially with CUDA_HOME=none. These are not systematic tests; they are more or less the first numbers I got (a hypothetical timing-harness sketch follows the list of configurations below)...
Without SIMD, 16k events
Without SIMD, more events
With AVX2 SIMD, 16k events
With AVX2 SIMD, more events
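A hypothetical timing harness for this kind of thread-count scan might look like the following; it reuses the `computeAllPages` placeholder from the first sketch above and is not the actual tmad/tput machinery:

```cpp
// Hypothetical timing harness: run the same workload for several OMP thread
// counts and print the throughput. computeAllPages is the placeholder from
// the earlier sketch; none of this is the actual tput/tmad test machinery.
#include <omp.h>
#include <chrono>
#include <cstdio>
#include <vector>

void computeAllPages( const double* momenta, double* matrixElements, int nEvtPages ); // placeholder from the sketch above

int main()
{
  const int nEvents = 16384;         // e.g. "16k events"
  const int nEvtPages = nEvents / 4; // hypothetical pages of 4 events
  std::vector<double> momenta( nEvents * 4 * 4 ), matrixElements( nEvents );
  for( int nthr : { 1, 2, 4, 8, 16, 32 } ) // up to 2xHT on a 16-core node like pmpe04
  {
    omp_set_num_threads( nthr );
    const auto t0 = std::chrono::steady_clock::now();
    computeAllPages( momenta.data(), matrixElements.data(), nEvtPages );
    const auto t1 = std::chrono::steady_clock::now();
    const double sec = std::chrono::duration<double>( t1 - t0 ).count();
    std::printf( "nthreads=%2d  throughput=%.3e events/s\n", nthr, nEvents / sec );
  }
  return 0;
}
```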
Note also that 'top' shows a varying load on the system. In some of the fastest tests it was 100% (a load of 3200) at points, but then dropped temporarily to 70%. In other tests it showed a constant 92%... So in summary,
Again, all this should be compared to several independent single-threaded processes (and/or eventually to home-made MT).
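For the multi-process (MP) baseline mentioned above, a hedged sketch (hypothetical names, not an existing madgraph4gpu script) could fork several independent single-threaded workers and compare their aggregate throughput with the OMP runs:

```cpp
// Hypothetical sketch of the multi-process (MP) baseline: fork N independent
// single-threaded workers, each running its own share of events. Names are
// placeholders; this is not an existing madgraph4gpu script.
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>
#include <vector>

inline void runSingleThreadedWorkload( int nEvents )
{
  volatile double sum = 0;
  for( int i = 0; i < nEvents; ++i ) sum += i; // real single-threaded ME workload would go here
}

int main()
{
  const int nProcs = 16;            // e.g. one process per physical core on pmpe04
  const int nEventsPerProc = 16384;
  std::vector<pid_t> children;
  for( int ip = 0; ip < nProcs; ++ip )
  {
    const pid_t pid = fork();
    if( pid == 0 ) // child: run an independent single-threaded job and exit
    {
      runSingleThreadedWorkload( nEventsPerProc );
      _exit( 0 );
    }
    children.push_back( pid );
  }
  for( pid_t pid : children ) waitpid( pid, nullptr, 0 ); // parent waits for all workers
  return 0;
}
```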
I will create and merge an MR. NB: one thing that I have not done is to reenable the OMP tests in the tmad/tput scripts. Maybe something for @Jooorgen to test in your infrastructure?
I made a few tests manually, see logs in madgraph5#575. NB: one thing that I have not done is to reenable OMP tests in tmad/tput scripts. You need a very large number of events and long tests to get meaningful results.
I have reenabled this in gcc, but it failed in icpx and clang, see #578. Anyway, this issue stays open for more performance studies.
…adgraph5#575 (but clang fails build)
…enabling on gcc madgraph5#575 (not yet on icpx clang madgraph5#578)
…graph5#575 (not yet on icpx clang madgraph5#578)
With the changes for the random choice of helicity (#403, MR #570, and especially #415), the OMP multithreading loop has moved inside cudacpp. It is now in a place where it may work better out of the box.
Note also that Fortran OMP is now quite good (see #561), so I would expect something similar in cudacpp.
While doing the code move I disabled (commented out) the OMP pragmas. They should be reenabled and tested.
madgraph4gpu/epochX/cudacpp/gg_tt.mad/SubProcesses/P1_gg_ttx/CPPProcess.cc, line 878 (commit 3780502)
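As a hedged sketch only (not the actual code at line 878), reenabling the pragma around the event-page loop could look roughly like this, with the OMP runtime calls guarded so that builds without OpenMP support (cf. the icpx/clang issues in #578) still compile; all names here are hypothetical:

```cpp
// Hedged sketch (not the actual CPPProcess.cc code): reenable the OMP pragma
// around the event-page loop, guarding omp.h and OMP runtime calls with
// _OPENMP so compilers without a working OpenMP runtime still build.
// Names (computePage, processAllPages, npagV) are hypothetical placeholders.
#ifdef _OPENMP
#include <omp.h>
#endif
#include <cstdio>

inline void computePage( double* allMEs, int ipagV ) { allMEs[ipagV] = 0; } // placeholder ME kernel

void processAllPages( double* allMEs, int npagV )
{
#ifdef _OPENMP
  std::printf( "OMP enabled, max threads = %d\n", omp_get_max_threads() ); // optional debug printout
#pragma omp parallel for // one independent SIMD event page per iteration
#endif
  for( int ipagV = 0; ipagV < npagV; ++ipagV )
    computePage( allMEs, ipagV );
}
```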