Merge pull request #240 from kube-logging/pepov-patch-1
Update scaling.md
pepov authored Apr 26, 2024
2 parents 091fab5 + 5e47b6f commit e4b8853
Showing 1 changed file with 11 additions and 5 deletions.

content/docs/operation/scaling.md
@@ -5,12 +5,18 @@ aliases:
- /docs/one-eye/logging-operator/scaling/
---

-> Note: When multiple instances send logs to the same output, the output can receive chunks of messages out of order. Some outputs tolerate this (for example, Elasticsearch), some do not, some require fine tuning (for example, Loki).
## Scaling Fluentd

In a large-scale infrastructure the logging components can come under high load as well. The typical sign of this is when the `fluentd` [buffer](../../configuration/plugins/outputs/buffer/) directory keeps growing for longer than the configured or calculated (timekey + timekey_wait) flush interval. In this case, you can [scale the fluentd statefulset]({{< relref "../logging-infrastructure/fluentd.md#scaling" >}}).

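As a rough illustration of the scale-up described above, the sketch below raises the replica count through the Logging custom resource. The resource name `example-logging` and the exact field layout are assumptions here; treat the Fluentd statefulset page linked above as authoritative.

```yaml
# Minimal sketch (assumed field names): scale the Fluentd aggregator statefulset
# by raising the replica count in the Logging custom resource.
apiVersion: logging.banzaicloud.io/v1beta1
kind: Logging
metadata:
  name: example-logging        # hypothetical resource name
spec:
  controlNamespace: logging
  fluentd:
    scaling:
      replicas: 3              # run three Fluentd pods instead of a single replica
```

Applying the manifest with `kubectl apply -f` lets the operator reconcile the statefulset to the new replica count.
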
-{{< warning >}}
-When scaling down Fluentd, the Logging operator does not flush the buffers before terminating the pod. Unless you have a good plan to get the data out from the detached PVC, we don't recommend scaling Fluentd down directly from the Logging operator.
+The Logging Operator supports scaling a **Fluentd aggregator** statefulset up and down. Scaling statefulset pods down is challenging, because we need to take care of the underlying volumes with buffered data that hasn't been sent, but the Logging Operator supports that use case as well.

-To avoid this problem, you can write a custom readiness check that takes the last pod out of the service's endpoints, and stop the pod only when its buffers are empty.
-{{< /warning >}}
+The details of that process, and how to configure an HPA, are described in the following documents (a configuration sketch follows below):
+- https://github.com/kube-logging/logging-operator/blob/master/docs/volume-drainer.md
+- https://github.com/kube-logging/logging-operator/blob/master/docs/scaling.md

+> Note: When multiple instances send logs to the same output, the output can receive chunks of messages out of order. Some outputs tolerate this (for example, Elasticsearch), some do not, some require fine tuning (for example, Loki).
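
As a hedged sketch of how the pieces referenced in the documents above might fit together: the first manifest assumes the drain feature is enabled through `spec.fluentd.scaling.drain.enabled`, and the second is a plain `autoscaling/v2` HorizontalPodAutoscaler pointed at a statefulset assumed to be named `example-logging-fluentd`. The linked scaling.md remains authoritative on how the operator and the HPA share control of the replica count.

```yaml
# Sketch, following the volume-drainer document above: ask the operator to drain
# buffers from PVCs that are left behind when the statefulset is scaled down.
apiVersion: logging.banzaicloud.io/v1beta1
kind: Logging
metadata:
  name: example-logging
spec:
  controlNamespace: logging
  fluentd:
    scaling:
      drain:
        enabled: true          # assumed field; see docs/volume-drainer.md
---
# Sketch: a standard HorizontalPodAutoscaler targeting the aggregator statefulset.
# The statefulset name is an assumption (typically derived from the Logging resource name).
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: fluentd-aggregator
  namespace: logging
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: StatefulSet
    name: example-logging-fluentd
  minReplicas: 1
  maxReplicas: 5
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 80
```
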
## Scaling SyslogNG

SyslogNG can be scaled up as well, but persistent disk buffers are not processed automatically when scaling the statefulset down. That is currently a manual process.
