This repository has been archived by the owner on Nov 1, 2022. It is now read-only.

Add memcached resource requests #2879

Merged 1 commit into fluxcd:master from add-resource-limits-helm on Mar 10, 2020

Conversation

jmymy
Contributor

@jmymy jmymy commented Feb 26, 2020

  • Add default resource limits to the Helm chart, following Kubernetes best practices.
  • Memcached should have at least a requests block by default.
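
A minimal sketch of what a default requests block for memcached could look like in the chart's values.yaml (the key layout and the numbers here are illustrative assumptions, not the merged diff):

```yaml
# Illustrative sketch only -- the key layout and numbers are assumptions,
# not the actual merged change.
memcached:
  resources:
    requests:
      cpu: 50m
      memory: 64Mi
```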

@hiddeco
Member

hiddeco commented Feb 26, 2020

I am of the opinion that setting default limits, although recommended by Kubernetes, does more harm than good, as the resulting OOMKill can cause a lot of damage and even lead to a persistent failure loop for our image metadata cache.

I also interpret the best practice as a user configuration best practice, not a best practice for us as chart vendors. To me the best practice would thus translate to: do not blindly apply a chart, but configure it to your needs.

@stefanprodan
Member

Setting CPU limits is not a Kubernetes best practice...

CFS throttling is broken. Processes can be throttled too early, and quotas are sometimes not properly reset when the period expires. The fix for this mess is easy: don't use CPU limits, and run your kubelet with the --cpu-cfs-quota=false flag. This will disable CPU throttling entirely. This can cause different problems, but if you allocate pods to nodes effectively, your cluster will speed up. Anecdotally, when turning CFS throttling off, my web app's 95th percentile response time reduced by more than 50%.

https://mcrthr.com/kubernetes-cpu-limits
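
For readers who want to act on the quoted advice, the same setting can be expressed in a file-based kubelet configuration instead of the command-line flag; a minimal sketch (only cpuCFSQuota is the relevant field here, everything else about your kubelet setup is assumed):

```yaml
# KubeletConfiguration sketch -- equivalent to passing --cpu-cfs-quota=false
# on the kubelet command line; disables CFS quota enforcement for CPU limits.
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cpuCFSQuota: false
```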

@jmymy
Contributor Author

jmymy commented Feb 26, 2020

Setting CPU limits is not a Kubernetes best practice...

CFS throttling is broken. Processes can be throttled too early, and quotas are sometimes not properly reset when the period expires. The fix for this mess is easy: don't use CPU limits, and run your kubelet with the --cpu-cfs-quota=false flag. This will disable CPU throttling entirely. This can cause different problems, but if you allocate pods to nodes effectively, your cluster will speed up. Anecdotally, when turning CFS throttling off, my web app's 95th percentile response time reduced by more than 50%.

https://mcrthr.com/kubernetes-cpu-limits

Fair enough.

I am of the opinion that setting default limits, although recommended by Kubernetes, does more harm than good, as the resulting OOMKill can cause a lot of damage and even lead to a persistent failure loop for our image metadata cache.

I also interpret the best practice as a user configuration best practice, not a best practice for us as chart vendors. To me it would thus translate to: do not blindly apply a chart, but configure it to your needs.

If these are configuration best practices, and you are supplying default generic values to get up and running, why wouldn't those defaults follow the configuration best practices?

Can we agree that there should at least be requests set for memcached? (Even their Helm chart has them: https://github.com/helm/charts/blob/master/stable/memcached/values.yaml)
And memory limits for the Flux pod?

@hiddeco
Member

hiddeco commented Feb 26, 2020

If these are configuration best practices, and you are supplying default generic values to get up and running, why wouldn't those defaults follow the configuration best practices?

Because there is no 'recommended size'; the size of the cache depends heavily on the size of your cluster, which is something we have no knowledge of at all.

Can we agree that there should at least be requests set for memcached?

Setting the default requests as in the memcached Helm chart is fine with me.

And memory limits for the Flux pod?

There is no recommended memory limit for the Flux pod either, as this also depends on how you are using Flux. When you make use of the Kustomize features, for example, you will see higher memory usage than when you are just applying plain manifests. Additionally, it also depends on the size of your cluster: more resources in your cluster and/or Git repository mean higher memory usage.
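
In that spirit, sizing is best done per cluster by the user at install time. A minimal sketch of an override file, assuming the chart exposes a top-level resources value for the Flux container (the key name, the numbers, and the release/repo names are assumptions), passed with something like helm upgrade -i flux fluxcd/flux -f my-values.yaml:

```yaml
# my-values.yaml -- per-cluster sizing; all values here are examples only.
resources:
  requests:
    cpu: 50m
    memory: 64Mi
  limits:
    memory: 512Mi   # memory-only limit; no CPU limit, per the CFS throttling discussion above
```

Leaving the CPU limit out follows the reasoning quoted above; the memory limit is the part each user has to size for their own cluster and Git repository.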

@jmymy jmymy marked this pull request as ready for review March 9, 2020 10:20
@stefanprodan stefanprodan changed the title from "Add default resource limits for pods" to "Add memcached resource requests" Mar 9, 2020
chart/flux/values.yaml Outdated
Member

@stefanprodan stefanprodan left a comment


LGTM

Thanks @jmymy

@stefanprodan
Member

Can you please squash the commits into a single one and rebase on master? Thanks

Signed-off-by: Jonathan Meyers <[email protected]>

uncomment by default memcached

Signed-off-by: Jonathan Meyers <[email protected]>

format
@stefanprodan stefanprodan merged commit f3be85d into fluxcd:master Mar 10, 2020
@jmymy jmymy deleted the add-resource-limits-helm branch March 10, 2020 11:28