Job number calculation should take into account the amount of swap space #72
The code calculating this is pretty easy to find in the PR you link. Tweak it all you like. If someone files an issue because the build fails, you get to fix it :-)
Warning: as you have personally experienced earlier, if you tweak the formula to use too many parallel jobs, the build gets dramatically slower, or fails because of the OOM killer in the kernel. Not fun to debug.
Sorry, one more answer that I just remembered now. With the current code, you can always specify the number of parallel jobs explicitly. Personally, I strongly recommend that, whatever automated default method is used to pick the number of parallel jobs for the build, it is far better if it sometimes errs by using 1 less parallel job than the maximum possible than if it sometimes leads to performance death by too much swapping, or to failure due to the OOM killer in the kernel. For expert users who want 1 more parallel job than the default, the explicit override is always there.
Yes, I know about that option. I'll make the changes -- just wanted to make sure you are OK with them.
I am OK with them if they work, and not OK with them if they cause the OOM killer or massive swapping performance degradation for any combination of # of CPUs, amount of RAM, and other possible system configurations that anyone might try in the future, which is NOT only 4 vCPU + 16 GB RAM + the amount of swap you personally configure on your system. You use the word "definitely", but that only covers the configuration you tested.
Another option is to remove the unity builds or to tune them so that they use a little less memory. At least on AWS the ratio of RAM to CPUs is 4 GB/CPU, which is cutting it really close to what the tool assumes.
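If the unity builds are driven by CMake 3.16 or newer, one way to tune them without removing them is to cap how many sources are merged into each unity translation unit. A hypothetical sketch, assuming the project's targets honor CMake's built-in unity settings (the batch size of 4 is an arbitrary example):

```python
import subprocess

def configure_with_smaller_unity_batches(source_dir: str, build_dir: str) -> None:
    # Smaller unity batches mean smaller translation units, which lowers
    # the peak memory each parallel compile job needs.
    subprocess.run(
        [
            "cmake", "-S", source_dir, "-B", build_dir,
            "-DCMAKE_UNITY_BUILD=ON",
            "-DCMAKE_UNITY_BUILD_BATCH_SIZE=4",  # CMake's default is 8
        ],
        check=True,
    )
```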
Removing the unity builds would probably increase the build time by some noticeable amount, but probably not by a large percentage of the overall install. It should significantly reduce the amount of memory required per parallel job that is run. Before my changes in #38, as far as I know p4studio made no assumptions about the memory required -- it just used the number of CPUs.
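For context, here is a minimal sketch of the two defaults being compared; the 4 GiB-per-job figure is an illustrative assumption, not p4studio's exact constant:

```python
import os

GIB = 1024 ** 3
ASSUMED_BYTES_PER_JOB = 4 * GIB  # illustrative assumption, not p4studio's constant

def jobs_cpu_only() -> int:
    # Pre-#38 style: parallelism follows the CPU count alone.
    return os.cpu_count() or 1

def jobs_memory_aware(mem_total_bytes: int) -> int:
    # Post-#38 style: additionally cap the job count by total RAM,
    # assuming a fixed memory footprint per parallel compile job.
    by_memory = mem_total_bytes // ASSUMED_BYTES_PER_JOB
    return max(1, min(jobs_cpu_only(), by_memory))
```

Under this kind of cap, a nominal 16 GB machine (whose `MemTotal` reads slightly under 16 GiB) floors to 3 jobs, which matches the behavior reported in this issue.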
The builds became a lot slower on my system after #38, because now it calculates the maximum number of jobs to be 3 on my system with 4 CPUs and 16 GB of memory.
This would have been the right thing to do if my system lacked swap space (indeed, the build would not even complete), but I did add swap space, and it would be nice to continue using all 4 CPUs by default (it definitely works when building the standard SDE).
The proposal is to read not only `MemTotal` from `/proc/meminfo`, but also `SwapTotal`, and to change the heuristic accordingly.

Also, it would probably be better to look for `MemTotal` explicitly instead of assuming that it is always the first line in `/proc/meminfo`.
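A minimal sketch of what the proposed heuristic could look like, assuming roughly 4 GiB of memory per parallel job and counting swap at only half its size (both figures are illustrative assumptions, not values from p4studio). It also parses `/proc/meminfo` by field name instead of relying on `MemTotal` being the first line:

```python
import os

def read_meminfo() -> dict:
    # Parse /proc/meminfo into {field name: size in kB}, keyed by name
    # rather than relying on MemTotal being the first line.
    fields = {}
    with open("/proc/meminfo") as f:
        for line in f:
            key, _, rest = line.partition(":")
            fields[key.strip()] = int(rest.split()[0])  # values are in kB
    return fields

def default_jobs(kb_per_job: int = 4 * 1024 * 1024,  # assumed ~4 GiB per job
                 swap_weight: float = 0.5) -> int:    # assumed: count swap at half value
    info = read_meminfo()
    budget_kb = info["MemTotal"] + swap_weight * info.get("SwapTotal", 0)
    by_memory = int(budget_kb / kb_per_job)
    return max(1, min(os.cpu_count() or 1, by_memory))
```

Discounting swap keeps the default conservative: a job working out of swap is far slower than one working out of RAM, so erring toward one job too few is cheaper than swap thrashing or the OOM killer, in line with the advice above.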