CMake target for each accelerator #1019
What if I don't have CUDA installed at first but install it after alpaka? Do we want to force a reinstall?
Then you cannot use CUDA. If you install MPI before you install CUDA, you cannot have a CUDA-aware MPI.
Switching the CUDA version after installing alpaka sounds good, but it would mean you are no longer in a reproducible environment.
You still have the possibility to use alpaka via …
But CUDA is a build dependency for CUDA-aware MPI, right? This isn't true for alpaka since we are header-only. I'm also not convinced that there is a problem with our current approach. Right now we check for dependencies at build / installation / … time.
With the current approach, the admin who provides alpaka as a module has a hard time guaranteeing that alpaka works, because what 'make test' checks is not the environment the user later uses.
We should discuss it with more alpaka devs in a face-to-face meeting.
It is hard for me to explain it all here, because I would maybe need to introduce how HPC systems handle their software and how they IMO should do it without moving software/environment tests to their users.
A possible way would be to keep the live search in CMake after the installation, but only allow backends that were activated (and therefore could be tested) during the install, and disable all others.
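A minimal sketch of that idea, assuming a config file generated at install time (the option names are illustrative, not necessarily alpaka's real ones):

```cmake
# alpakaConfig.cmake.in -- illustrative sketch, not alpaka's actual config.
# The backend switches chosen at install time are baked into the installed
# file (e.g. via configure_file), so only tested backends stay visible.
set(ALPAKA_ACC_GPU_CUDA_ENABLE @ALPAKA_ACC_GPU_CUDA_ENABLE@)
set(ALPAKA_ACC_CPU_B_OMP2_T_SEQ_ENABLE @ALPAKA_ACC_CPU_B_OMP2_T_SEQ_ENABLE@)

if(ALPAKA_ACC_GPU_CUDA_ENABLE)
    # Keep the live search, but only for a backend that was activated
    # (and therefore testable) during the install.
    find_package(CUDAToolkit REQUIRED) # needs CMake >= 3.17
endif()
if(ALPAKA_ACC_CPU_B_OMP2_T_SEQ_ENABLE)
    find_package(OpenMP REQUIRED COMPONENTS CXX)
endif()
# Backends that were off at install time are never searched for, so the
# user's environment matches what `make test` covered.
```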
You both have great points, and I think keeping the installed version agnostic, as @j-stephan describes, is quite good, especially for package managers other than Spack that live with a more "runtime addon" philosophy (debian, conda-forge et al.). The point that @psychocoderHPC raises is really good, and I would say we could address this with: …
I agree with @psychocoderHPC that we should not activate all possible backends by default. It is not the usual workflow and it is a source of unnecessary bugs (e.g. in my case, OpenMP and nvcc together caused a problem, and OpenMP was not used in my code).
@SimeonEhrig has uncovered a problem with the current approach while working on vikunja. Currently we don't use CMake's …
I believe we could fix this by using … Unless there are any objections, I'd start working on that in the coming days.
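The elided mechanism is not recoverable from this thread; assuming it refers to CMake's `find_dependency` macro (the standard way for an installed package config to re-locate transitive dependencies for a downstream project such as vikunja), the fix could look roughly like this:

```cmake
# Inside the installed alpakaConfig.cmake -- a sketch under the above
# assumption, not necessarily what was actually implemented.
include(CMakeFindDependencyMacro)

# find_dependency() behaves like find_package() but forwards QUIET and
# REQUIRED from the downstream find_package(alpaka) call, so a project
# like vikunja gets a proper error if a transitive dependency is missing.
if(ALPAKA_ACC_CPU_B_OMP2_T_SEQ_ENABLE)
    find_dependency(OpenMP)
endif()
if(ALPAKA_ACC_GPU_CUDA_ENABLE)
    find_dependency(CUDAToolkit)
endif()
```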
In the video conference with HZB today, they explained that they want to deploy a binary with multiple backends activated. At runtime, the appropriate implementation should then be selected depending on the available hardware.
This is very similar to how we use alpaka in the CMS software (without CMake, though).
I know. Therefore I told the HZB that this is possible ;-) Thanks for the offer.
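On the build-system side, such a multi-backend binary could be assembled roughly like this (file and target names are hypothetical; `alpaka::alpaka` is assumed to be alpaka's exported target, and the actual hardware check happens in the application code at startup):

```cmake
enable_language(CUDA) # the CUDA variant below needs the CUDA language

# Compile the same kernels once per backend, each time with a different
# accelerator selected at compile time.
add_library(kernels_omp2 OBJECT kernels.cpp)
target_compile_definitions(kernels_omp2 PRIVATE APP_BACKEND_OMP2)
target_link_libraries(kernels_omp2 PRIVATE alpaka::alpaka)

add_library(kernels_cuda OBJECT kernels.cu)
target_compile_definitions(kernels_cuda PRIVATE APP_BACKEND_CUDA)
target_link_libraries(kernels_cuda PRIVATE alpaka::alpaka)

# One binary carries all backend variants; main.cpp inspects the available
# hardware at runtime and dispatches to the matching implementation.
add_executable(app main.cpp
    $<TARGET_OBJECTS:kernels_omp2>
    $<TARGET_OBJECTS:kernels_cuda>)
target_link_libraries(app PRIVATE alpaka::alpaka)
```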
Based on this, I suggest that we add a CMake target for each accelerator backend.
Currently, alpaka's CMake behavior is optimized to fit the CI workflow and activates all available backends.
Backends should be selectable during the install of alpaka, and the dependencies, including their locations, should be stored. After alpaka is installed e.g. for CUDA, there is no need to search for a CUDA library again, because dependencies must be checked during the install or when `add_subdirectory` is called.
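From the consumer side, the proposal could look like this (the per-backend target name is hypothetical, illustrating the suggestion rather than an existing alpaka API):

```cmake
find_package(alpaka REQUIRED)

add_executable(heatEquation heatEquation.cpp)

# Hypothetical per-accelerator target: linking it would pull in the
# compile options and the dependency locations stored when alpaka was
# installed, so no second search for e.g. the CUDA toolkit is needed.
target_link_libraries(heatEquation PRIVATE alpaka::AccGpuCudaRt)
```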