forked from jonescompneurolab/hnn-core
feat: refactor core/thread logic for mpibackend #2
Closed. asoplata wants to merge 10 commits into brown-ccv:gui-mpi-available-cores from asoplata:gui-mpi-available-cores.
Conversation
…based on the operating system
… attribute where relevant.
…d on start-up. Previously, the sub-options were hidden by default and only displayed when the backend dropdown was changed. This hid the number-of-cores option for the default Joblib backend on start-up.
This takes George's old GUI-specific `_available_cores()` method, moves it, and greatly expands it to include updates to the core- and hardware-threading logic that was previously inside `MPIBackend.__init__()`. This was necessary due to the number of common but different outcomes based on platform, architecture, hardware-threading support, and user choice. These changes do not involve very many lines of code, but a good amount of thought and testing has gone into them. Importantly, these `MPIBackend` API changes are backwards-compatible, and no changes to current usage code are needed. I suggest you read the long comments in `parallel_backends.py::_determine_cores_hwthreading()` outlining how each variation is handled.

Previously, if the user did not provide the number of MPI processes they wanted to use, `MPIBackend` assumed that the number of detected "logical" cores would suffice. As George previously showed, this does not work for HPC environments like OSCAR, where the true number of cores we are allowed to use is found by `psutil.Process().cpu_affinity()`, the "affinity" core count. There is a third core count besides "logical" and "affinity" that matters: "physical". However, an additional problem was still unaddressed: hardware-threading. Different platforms and situations report different numbers of logical, affinity, and physical CPU cores, and one of the factors affecting this is whether hardware-threading (such as Intel Hyper-Threading) is present on the machine. For example, on a Linux laptop with a Hyper-Threading Intel chip, the logical and physical core counts differ from each other: logical includes hyperthreads (e.g. `psutil.cpu_count(logical=True)` reports 8 cores), but physical does not (e.g. `psutil.cpu_count(logical=False)` reports 4 cores).
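To make the three counts concrete, here is a stdlib-only sketch of the queries described above. It uses `os` rather than `psutil` (so the physical count is not available), and `count_cores` is a hypothetical helper name, not part of the actual branch:

```python
import os

def count_cores():
    """Report the "logical" and "affinity" core counts discussed above.

    "logical" includes hardware threads (like psutil.cpu_count(logical=True));
    "affinity" is the number of cores this process is actually allowed to run
    on (like psutil.Process().cpu_affinity()), which is the count that matters
    on HPC schedulers such as OSCAR.
    """
    logical = os.cpu_count()
    try:
        # Linux-only; absent on macOS, where affinity is effectively
        # the same as the logical count for our purposes.
        affinity = len(os.sched_getaffinity(0))
    except AttributeError:
        affinity = logical
    return logical, affinity

logical, affinity = count_cores()
```

On the example Linux laptop above, `logical` would be 8 while `psutil.cpu_count(logical=False)` would report 4; on a scheduler-managed OSCAR node, `affinity` can be far smaller than `logical`.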
If we tell MPI to use 8 cores ("logical"), then we also need to tell it to enable the hardware-threading option. However, if the user does not want to enable hardware-threading, then we need to make this an option: tell MPI to use 4 cores ("physical") and not to use the hardware-threading option. The "affinity" core count complicates things further: in the Linux laptop example it equals the logical core count, but on OSCAR it is very different from the logical core count, and on macOS it is not available at all.

In `_determine_cores_hwthreading()`, if you read the lengthy comments, I have thought through each common scenario and, I believe, resolved what to do for each with respect to the number of cores to use and whether or not to use hardware-threading. These scenarios include the user choosing to use hardware-threading (the default) or not, across: macOS variations with and without hardware-threading; Linux local-computer variations with and without hardware-threading; and Linux HPC (e.g. OSCAR) variations, which appear to never support hardware-threading. In the Windows case, due to both jonescompneurolab#589 and the currently-untested MPI integration on Windows, I always report the machine as not having hardware-threading.

Additionally, previously, if the user did provide a number of MPI processes, `MPIBackend` used some "heuristics" to decide whether to use MPI oversubscription and/or hardware-threading, but the user could not override these heuristics. Now, when a user instantiates an `MPIBackend` with `__init__()` and uses the defaults, hardware-threading is detected more robustly and enabled by default, and oversubscription is enabled based on its own heuristics; this is the case when the new arguments `hwthreading` and `oversubscribe` are set to their default value of `None`. However, if the user knows what they're doing, they can also pass either `True` or `False` to either of these options to force them on or off.
Furthermore, in the case of `hwthreading`, if the user indicates they do not want to use it, then `_determine_cores_hwthreading()` correctly returns the number of non-hardware-threaded cores for MPI's use, instead of the count that includes hardware threads. I have also modified and expanded the appropriate tests to account for these changes. Note that this does NOT change the default number of jobs the GUI uses when MPI is detected; such a change breaks the current `test_gui.py` testing: see jonescompneurolab#960.
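For reference, "telling MPI" about these choices amounts to passing flags when the launch command is assembled. Open MPI's `mpiexec` accepts `--use-hwthread-cpus` and `--oversubscribe` for the two options discussed above; the helper below is a hypothetical sketch of that assembly step, not code from this branch:

```python
from typing import List

def build_mpi_cmd(n_procs: int, use_hwthreads: bool,
                  oversubscribe: bool) -> List[str]:
    """Assemble an Open MPI launch command reflecting the resolved options.

    --use-hwthread-cpus tells Open MPI to treat hardware threads as slots;
    --oversubscribe allows more processes than available slots.
    """
    cmd = ["mpiexec", "-np", str(n_procs)]
    if use_hwthreads:
        cmd.append("--use-hwthread-cpus")
    if oversubscribe:
        cmd.append("--oversubscribe")
    return cmd

print(build_mpi_cmd(8, True, False))
```

With `hwthreading=False` resolved to 4 physical cores, the command would simply be `mpiexec -np 4` with neither flag.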
I originally rebased my additions on top of your branch, on top of the latest updates to upstream's master. This was required to replay your commits, since some very minor changes needed editing during the rebase. We can definitely choose to do any merges in a different way, however.
Force-pushed from 6159e1f to 8f929ca.