Horizontal Scaling RC to scale another controller based on number of cores and nodes #2
Conversation
cc/ @piosz @fgrzadkowski @jszczepkowski @mwielgus you might be interested in this (I suggest going to the commit view and just looking at the non-Godep commit). @girishkalele can you say something about when people should use this vs. the Horizontal Pod Autoscaling feature?
Oh and @girishkalele it might be useful to copy the README from the repo into this PR thread, since it's not completely obvious that that's where to look to understand the goal of this PR.
Horizontal Self Scaler container

This container image watches the number of schedulable cores and nodes in the cluster and resizes the number of replicas in the required controller.

Implementation Details

The code in this module is a Kubernetes Golang API client that, using the default service account credentials available to Golang clients running inside pods, connects to the API server and polls for the number of nodes and cores in the cluster.

Calculation of number of replicas

The desired number of replicas is computed by looking up the number of cores in a step-ladder function.

ConfigMap controlling parameters

The ConfigMap provides the configuration parameters, allowing on-the-fly changes without rebuilding or restarting the scaler containers/pods.

Example RC file

example-rc.yaml is an example Replication Controller in which the nannies in all pods watch and resize the RC replicas.
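The step-ladder lookup mentioned above can be sketched as follows. This is a minimal illustration, not the actual implementation: the `step` type, `lookupReplicas` function, and ladder values are all hypothetical.

```go
package main

import "fmt"

// step is one rung of a hypothetical ladder: at or above `min` cores,
// the target controller should run `replicas` replicas.
type step struct {
	min      int
	replicas int
}

// lookupReplicas returns the replica count for the highest rung whose
// threshold is <= size. The ladder is assumed sorted by min ascending.
func lookupReplicas(ladder []step, size int) int {
	replicas := 1 // fall back to a single replica below the first rung
	for _, s := range ladder {
		if size >= s.min {
			replicas = s.replicas
		}
	}
	return replicas
}

func main() {
	// Illustrative ladder: 1 replica up to 63 cores, 3 from 64, 5 from 512.
	ladder := []step{{1, 1}, {64, 3}, {512, 5}}
	fmt.Println(lookupReplicas(ladder, 100)) // a 100-core cluster gets 3 replicas
}
```

A step function (rather than a linear ratio) keeps the replica count stable across small cluster-size fluctuations.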
This is the horizontal scaling version of the vertical addon resizer by @Q-Lee. Prior discussion here: kubernetes-retired/contrib#1427
The Horizontal Pod Autoscaler is a top-level Kubernetes API resource. It is a true closed-loop autoscaler that monitors CPU utilization of the pods and scales the number of replicas automatically. It requires CPU resources to be defined for all containers in the target pods, and it also requires Heapster to be running to provide CPU utilization metrics. This horizontal self scaler is a DIY container (it is not a Kubernetes API resource) that provides a simple control loop watching the cluster size and scaling the target controller. The actual CPU or memory utilization of the target controller pods is not an input to the control loop; the sole inputs are the number of schedulable cores and nodes in the cluster. The ConfigMap gives the operator the ability to tune the replica scaling explicitly.
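To illustrate the ConfigMap-driven tuning described above, the parameters might look something like the sketch below. The key names, data format, and values here are assumptions for illustration, not the actual schema.

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: self-scaler-params   # hypothetical name
data:
  # Hypothetical step ladder: replica count keyed by minimum cluster cores.
  cores-to-replicas-map: |
    [[1, 1], [64, 3], [512, 5]]
  # Hypothetical step ladder: replica count keyed by minimum node count.
  nodes-to-replicas-map: |
    [[1, 1], [10, 4]]
```

Because the scaler polls the ConfigMap, editing it with `kubectl edit configmap` would change scaling behavior without rebuilding or restarting the scaler pod.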
|
# Rules for building the real image for deployment to gcr.io

deps:
why do we need this rule?
if we move this to its own RC, is "self-scaler" still a valid name? We can rename the repo... why is everything in an "autoscaler" subdir, rather than the root?
Also, can I beg you to derive your Makefile from kubernetes/build/pause/Makefile?
|
# Implementation Details

The code in this module is a Kubernetes Golang API client that, using the default service account credentials
available to Golang clients running inside pods, connects to the API server and polls for the number of nodes
it seems that you only use the number of cores, not the number of nodes?
Yes, @thockin commented above about the same - we need two scale maps, one for cores and one for nodes, and we look up both maps and choose the greater number. The user may choose to omit one map (and scale only by the number of cores or nodes). I am changing it to accept the two scale maps.
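The "look up both maps and choose the greater number" logic described in this comment could be sketched as below. The map format and the `desiredReplicas`/`lookup` names are assumptions for illustration; either ladder may be nil when the operator omits it.

```go
package main

import "fmt"

// lookup returns the replica count for the largest threshold <= size,
// or 0 when the ladder is nil/empty or size is below every threshold.
func lookup(ladder map[int]int, size int) int {
	best, replicas := -1, 0
	for min, r := range ladder {
		if size >= min && min > best {
			best, replicas = min, r
		}
	}
	return replicas
}

// desiredReplicas consults both ladders and returns the greater result,
// so the target scales with whichever dimension demands more replicas.
func desiredReplicas(coresLadder, nodesLadder map[int]int, cores, nodes int) int {
	byCores := lookup(coresLadder, cores)
	byNodes := lookup(nodesLadder, nodes)
	if byCores > byNodes {
		return byCores
	}
	return byNodes
}

func main() {
	coresMap := map[int]int{1: 1, 64: 3}
	nodesMap := map[int]int{1: 1, 10: 4}
	// 80 cores -> 3 replicas by cores; 12 nodes -> 4 by nodes; greater wins.
	fmt.Println(desiredReplicas(coresMap, nodesMap, 80, 12)) // 4
}
```

Taking the max of the two lookups means a cluster with many small nodes and a cluster with few large nodes can both drive the replica count up.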
@girishkalele this functionality seems to be a perfect fit for the custom metrics case in HPA. I can see many benefits of having this feature in HPA. Any reasoning for having this as a separate project?
I strongly agree with @piosz. We should reuse the existing scaling infrastructure and have a consolidated pod scaling solution. The only differences between this project and HPA are that it uses a slightly different metric to calculate the desired replica count and that it runs in a separate pod (instead of being a controller), which makes it vulnerable to scheduling problems.
cc @wojtek-t |
The intent of this was to create a nanny container similar to the addon-resizer container used by fluentd for DNS horizontal scaling. HPA+Heapster was too heavy in resource utilization for a simple scaler. This is also a nice base template for folks doing DIY scalers of their own, scaling along various metrics. I didn't know about the ability of the HPA to scale using custom metrics - can it do this today?
The reason why addon-resizer is a separate container is that there is no Vertical Autoscaler in Kubernetes yet, whereas we have a production-ready solution for horizontal scaling. While I'm OK with having this feature in the shape you propose as a temporary hack for 1.4, I think it should eventually become part of HPA. I don't think encouraging users to write their own scalers is the right approach - we should provide a powerful API to scale based on custom metrics instead. There are no custom metrics in HPA yet, but the problem you're trying to solve is a good reason to increase its priority. Also see kubernetes/kubernetes#28628
@MrHohn This is the WIP on the cluster-proportional-autoscaler. The skydns yaml template changes that add this to the kube-dns pod are here kubernetes/kubernetes#32019 |
Split out Godeps commit from real changes