
wave vs Chart Development Tips and Tricks #59

Closed
sys-ops opened this issue Jul 15, 2019 · 4 comments
Labels
question Further information is requested

Comments


sys-ops commented Jul 15, 2019

Hi,

I have just tested Wave with nginx-ingress and it worked for me. However, on the following page I found a way to manage this without an extra pod:
https://github.com/helm/helm/blob/master/docs/charts_tips_and_tricks.md#automatically-roll-deployments-when-configmaps-or-secrets-change

How does Wave differ from using a checksum/config annotation with sha256sum?
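For reference, the tip on that page amounts to templating a checksum of the chart's own ConfigMap into the pod template annotations, so that any config change also changes the Deployment spec and triggers a rollout (file name `configmap.yaml` here is chart-specific):

```yaml
kind: Deployment
spec:
  template:
    metadata:
      annotations:
        checksum/config: {{ include (print $.Template.BasePath "/configmap.yaml") . | sha256sum }}
```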

Regards

@KlavsKlavsen

These are two different ways of doing the same thing. Both methods rely on Kubernetes automatically "rolling" pods when an annotation on them is updated.
When Helm updates the annotation (generating new checksums of the configs), it can ONLY look at config it can see, i.e. where it runs. It cannot fetch ConfigMaps from the cluster (without a hack).
This means that if you have a service that depends on ConfigMaps belonging to other services, the Helm approach won't help you, as those ConfigMaps won't be local to your chart.

This controller parses the actual Deployment and generates hashes of the ConfigMaps and Secrets it uses, to automatically notice if any of them change.
As I understand it, though, that can be quite a heavy job if it polls all the time. Is there no event that triggers it to run?
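The hashing idea described here can be sketched in a few lines. This is not Wave's actual implementation, just an illustration of the core step: hash a ConfigMap's data deterministically, so that any change to the data produces a new annotation value. Keys must be sorted, because Go map iteration order is random.

```go
package main

import (
	"crypto/sha256"
	"fmt"
	"sort"
)

// hashConfigData returns a deterministic sha256 hex digest over a
// ConfigMap-style data map. Keys are sorted so the result does not
// depend on map iteration order; NUL bytes separate fields so that
// {"ab": "c"} and {"a": "bc"} hash differently.
func hashConfigData(data map[string]string) string {
	keys := make([]string, 0, len(data))
	for k := range data {
		keys = append(keys, k)
	}
	sort.Strings(keys)

	h := sha256.New()
	for _, k := range keys {
		h.Write([]byte(k))
		h.Write([]byte{0})
		h.Write([]byte(data[k]))
		h.Write([]byte{0})
	}
	return fmt.Sprintf("%x", h.Sum(nil))
}

func main() {
	a := hashConfigData(map[string]string{"nginx.conf": "worker_processes 2;"})
	b := hashConfigData(map[string]string{"nginx.conf": "worker_processes 4;"})
	fmt.Println(a != b) // changed data yields a different hash, so prints true
}
```

Writing such a hash into a pod-template annotation is what makes Kubernetes roll the pods: the Deployment spec changed, so a new ReplicaSet is created.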

@JoelSpeed (Collaborator)

As I understand it, though, that can be quite a heavy job if it polls all the time. Is there no event that triggers it to run?

Wave is actually event driven! We use what is known as an Informer as a source of events that cause Wave to reconcile.

When Wave starts, it lists and then watches all Deployments, DaemonSets, StatefulSets, ConfigMaps and Secrets, storing them in a cache. The watch causes Kubernetes to stream events for these types to the controller, allowing it to keep its cache in sync (though it does periodically perform a full resync). Each streamed event is filtered by Wave and causes the relevant Deployments/DaemonSets/StatefulSets to be queued for reconcile.

What this means is that any time a ConfigMap or Secret is modified, all of the Deployments/StatefulSets/DaemonSets that mount it are reconciled by Wave. There is a lot of filtering in place to make sure we perform the reconciliation as rarely as possible.
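The "queued for reconcile" step usually relies on a deduplicating work queue, which is a large part of why a burst of events does not mean a burst of work: repeated enqueues of the same object collapse into a single pending reconcile. A rough sketch of that behaviour (not Wave's code; the real thing is client-go's workqueue package):

```go
package main

import "fmt"

// dedupQueue collapses repeated enqueues of the same key into one
// pending item, the way client-go's workqueue does.
type dedupQueue struct {
	pending map[string]bool
	order   []string
}

func newDedupQueue() *dedupQueue {
	return &dedupQueue{pending: map[string]bool{}}
}

// Add enqueues key unless it is already waiting to be processed.
func (q *dedupQueue) Add(key string) {
	if q.pending[key] {
		return // already queued; a second event adds nothing
	}
	q.pending[key] = true
	q.order = append(q.order, key)
}

// Get pops the oldest pending key; ok is false when the queue is empty.
func (q *dedupQueue) Get() (key string, ok bool) {
	if len(q.order) == 0 {
		return "", false
	}
	key = q.order[0]
	q.order = q.order[1:]
	delete(q.pending, key)
	return key, true
}

func main() {
	q := newDedupQueue()
	// Three rapid-fire events for the same ConfigMap, one for another...
	q.Add("default/my-config")
	q.Add("default/my-config")
	q.Add("default/my-config")
	q.Add("default/other")
	n := 0
	for {
		if _, ok := q.Get(); !ok {
			break
		}
		n++
	}
	fmt.Println(n) // prints 2: only two reconciles were queued
}
```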

This is the same way all controllers in mainline Kubernetes work, so having Wave installed should be no heavier than running any other Kubernetes controller (including the 35 in controller-manager).

@KlavsKlavsen

@JoelSpeed Thank you for your swift response. I must admit it worries me when I see this: https://github.com/pusher/wave/blame/master/README.md#L118. From what you say, it should not be necessary to set such a sync interval, since Wave reacts to events and so will notice almost immediately if a ConfigMap or Secret is updated?

@JoelSpeed (Collaborator)

We have potentially been a bit overly cautious in recommending every 5 minutes; controller-runtime, on which Wave is based, sets the value to 10 hours by default.

The reason these syncs are necessary is that events are not guaranteed. As with any distributed system, there is no guarantee that every packet reaches its destination, so some events could be missed. Imagine missing the event that updates a ConfigMap, with nothing else disturbing the object afterwards (so no more events, and no reconcile), and only resyncing when the controller restarts; you'd be a bit disappointed by the project, right? There is no way for us to guarantee we receive every event without constant polling, which would put far higher load on the API, so this is the best we have.
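The failure mode being guarded against can be shown in miniature: a cache built from watch events silently goes stale if a single event is lost, and stays stale until a full relist overwrites it. A toy sketch of that reasoning (hypothetical names, not Wave's code):

```go
package main

import "fmt"

func main() {
	// The API server's state: the source of truth.
	server := map[string]string{"my-config": "v2"}

	// The controller's cache, built from watch events.
	// The event updating my-config from v1 to v2 was lost in
	// transit, so nothing arrived and the cache still holds v1.
	cache := map[string]string{"my-config": "v1"}

	fmt.Println(cache["my-config"]) // prints "v1": stale, and no event will fix it

	// Periodic resync: relist everything and overwrite the cache.
	for k, v := range server {
		cache[k] = v
	}

	fmt.Println(cache["my-config"]) // prints "v2": corrected by the resync
}
```

This is why a resync period exists at all: it bounds how long a lost event can leave the cache wrong, without resorting to constant polling.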

That said, I wouldn't worry too much about the load that Wave puts on the API. It performs 5 list calls when it resyncs; Kubernetes is designed to respond to thousands of requests per second, so 5 should not cause it much issue 😉

Compared to most controllers, Wave is actually quite lightweight. Having just checked our production clusters, it sits at about 2m CPU and 40Mi memory constantly (in terms of Kubernetes resource values).

Having checked the audit logs for our production clusters, Wave is currently averaging 2.67 API calls per minute over the last 24 hours, excluding leader election.

@JoelSpeed JoelSpeed added the question Further information is requested label Oct 1, 2019