
[cli] detect number of CPU cores #554

Merged: 1 commit merged into master on Jan 18, 2018
Conversation

@mikz (Contributor) commented on Jan 18, 2018:

  • use nproc to detect number of cores
    • nproc correctly detects limits set by docker

/cc @jmprusi

@mikz force-pushed the worker-processes branch from 6f306c2 to e6157a8 on January 18, 2018 12:21
@mikz requested review from davidor and jmprusi on January 18, 2018 12:21
@@ -179,7 +179,7 @@ local function configure(cmd)
   cmd:flag("-d --daemon", "Daemonize.")
   cmd:option("-w --workers",
     "Number of worker processes to start.",
-    resty_env.value('APICAST_WORKERS') or 1)
+    resty_env.value('APICAST_WORKERS') or Environment.default_config.worker_processes)
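For context on the new default, a minimal sketch of how this fallback resolves (resty_env is the helper used in the diff itself; the default_config table and its 'auto' value shown here are assumptions drawn from the discussion below):

local resty_env = require('resty.env')  -- same helper the diff above uses

-- Stand-in for Environment.default_config; the real table lives in the
-- codebase, and 'auto' here is an assumption based on the thread below.
local default_config = { worker_processes = 'auto' }

-- resty_env.value returns nil when APICAST_WORKERS is unset, so Lua's
-- `or` falls through to the configured default.
local workers = resty_env.value('APICAST_WORKERS') or default_config.worker_processes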
A reviewer (Contributor) commented:

Should we call tonumber on the first part of the or? I guess it would work anyway, but I think it would be clearer.

mikz (author) replied:

But what if you want to set APICAST_WORKERS=auto?
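For reference, Lua's tonumber returns nil for non-numeric strings, so wrapping the env value in tonumber would silently drop an 'auto' setting:

print(tonumber('4'))     --> 4
print(tonumber('auto'))  --> nil: an 'auto' setting would fall through to the default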

The reviewer replied:

👍

@davidor (Contributor) commented on Jan 18, 2018:

I think it's not obvious that APICAST_WORKERS accepts 'auto'. People familiar with nginx or the code might guess it, but I think we should document it.

mikz (author) replied:

APICAST_WORKERS is currently undocumented; I think it is pretty low level and should stay like that for now.

  • in production, we auto-detect the correct number of cores when deployed by our docker image
  • development starts on 1 core

So people really should not need to tune the number of workers.

@@ -40,6 +41,16 @@ local function parse_nameservers()
   end
 end
 
+local function cpus()
+  local cores = tonumber(resty_env.get('NUMBER_OF_CORES'))
A reviewer (Contributor) commented:

Do we need this? Is there any reason someone would use NUMBER_OF_CORES instead of the existing APICAST_WORKERS?

A reviewer (Contributor) added:

Also, if we start accepting this ENV, we should document it https://github.com/3scale/apicast/blob/master/doc/parameters.md

Just realized that APICAST_WORKERS is not there.

mikz (author) replied:

All software collection images use NUMBER_OF_CORES, so I thought it would make sense to follow that.

But it probably does not make sense, because the SCL images override NUMBER_OF_CORES with their own value.

Will remove it.
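With NUMBER_OF_CORES dropped, the helper presumably falls back to nproc alone. A minimal sketch of that shape, using io.popen as a stand-in for whatever subprocess helper the codebase actually provides (an assumption, not the merged code):

-- Sketch only: count usable cores via nproc, which honors cgroup/docker
-- CPU limits, unlike counting entries in /proc/cpuinfo.
local function cpus()
  local handle = io.popen('nproc')
  if not handle then return nil end
  local output = handle:read('*l')  -- first line of output, e.g. "4"
  handle:close()
  return tonumber(output)
end

print(cpus())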

* use nproc to detect number of cores
  - nproc correctly detects limits set by docker
@mikz force-pushed the worker-processes branch from e6157a8 to 0813608 on January 18, 2018 14:16
@mikz merged commit 44f7da8 into master on Jan 18, 2018
@mikz deleted the worker-processes branch on January 18, 2018 14:29