Feature request: support PreserveCounts for resource nomad_job #420
Comments
Thanks for the suggestion @Jamesits! I tried to quickly add this, but unfortunately it requires a bit more work than just adding a new flag. The first problem is that the job plan endpoint does not support PreserveCounts. But even with that change implemented, I suspect the provider itself will need changes: when computing the diff, the provider compares the value in Nomad with the jobspec directly.
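To make the problem concrete, here is a minimal sketch (the counts are assumed values, not taken from this issue) of the drift being described: the jobspec declares a small count, the nomad-autoscaler raises it in the cluster, and the provider's direct comparison then reports a diff that resets the group.

# Illustrative only; the numbers below are assumptions.
group "server" {
  count = 1   # what the jobspec (and therefore the provider) declares

  scaling {
    enabled = true
    min     = 1
    max     = 10
  }
}

# If the autoscaler has since raised this group to, say, 5 allocations,
# the provider compares 5 (in Nomad) against 1 (in the jobspec), reports
# a change, and re-registering the job scales the group back down to 1.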
I will try to get to these when I have some extra time.
Any thoughts on a workaround in the meantime?
@lgfa29 hey! Would love this equally. Our biggest concern is that during job updates, when using the nomad-autoscaler, not having PreserveCounts leads to a situation where we are scaling down jobs simply because of the job update itself. To echo @lattwood's point, do you have any workarounds that you could suggest in the meantime? Additionally, we do not include a count. Sample stanza:

job "api_server_${template_job_name}" {
  datacenters = ["${template_datacenter}"]
  region      = "${template_region}"

  spread {
    attribute = "$${node.datacenter}"
  }

  group "server" {
    scaling {
      enabled = true
      min     = ${template_min_scaling_size}
      max     = ${template_max_scaling_size}

      policy {
        cooldown            = "3m"
        evaluation_interval = "1m"

        check "avg_cpu" {
          source       = "nomad-apm"
          query        = "avg_cpu-allocated"
          query_window = "3m"

          strategy "target-value" {
            # Test value, to force the autoscaler for this issue ^^
            target = 1
          }
        }

        check "avg_memory" {
          source       = "nomad-apm"
          query        = "avg_memory-allocated"
          query_window = "3m"

          strategy "target-value" {
            # Test value, to force the autoscaler for this issue ^^
            target = 1
          }
        }
      }
    }
  ...
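For context, a jobspec templated like the sample above is typically rendered and registered through the provider roughly as follows; this is only a sketch, and the file name and variable values are assumptions, not taken from the issue:

# Sketch only: file name and values below are assumptions for illustration.
resource "nomad_job" "api_server" {
  jobspec = templatefile("${path.module}/api_server.nomad.tpl", {
    template_job_name         = "prod"
    template_datacenter       = "dc1"
    template_region           = "global"
    template_min_scaling_size = 1
    template_max_scaling_size = 10
  })
}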
Apologies for the delay in getting back to you, but I no longer work at HashiCorp and I wasn't able to solve this issue before I left. As a workaround, though I haven't tested it myself, I wonder if the ...
@lgfa29 no worries, thanks for getting back to us on this count, pun unintended.
Currently, when I deploy Nomad jobs with a scaling {} configuration, the new job will automatically be scaled to count (which might be a very small value). This makes a rolling upgrade of a busy job very dangerous. Is it possible to support the PreserveCounts argument during a job deployment, so we can make Terraform-based job deployment less painful?
(Related: hashicorp/nomad#9839 hashicorp/nomad#9843)
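For illustration, the request amounts to something like the sketch below; preserve_counts is a hypothetical attribute name here (mirroring Nomad's nomad job run -preserve-counts flag), not something the provider exposes in this issue:

# Hypothetical sketch: `preserve_counts` is not an existing nomad_job
# attribute; it mirrors the `nomad job run -preserve-counts` behavior.
resource "nomad_job" "busy_service" {
  jobspec = file("${path.module}/busy_service.nomad")

  # Keep the task group counts currently set in the cluster (e.g. by the
  # nomad-autoscaler) instead of resetting them to the jobspec's count.
  preserve_counts = true
}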