>50% kube node CPU spike with Falco deployed #1710
Comments
Not sure if related, but when I tried Falco 0.26.0 with eBPF, I also noticed performance degradation and syscall event drops in my cluster. After some investigation, it looks like a huge amount of events were coming from …
Is 0.28.0 affected by this issue too?
This was discussed yesterday in the community call, and today I believe I can confirm the root cause here was the sysctl …. Setting this to …
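For anyone following along: the specific sysctl name is truncated in the comment above, so `kernel.example_tunable` in the sketch below is a placeholder, not the actual tunable. It only shows the general mechanics of inspecting and changing a sysctl on a node:

```shell
# Sysctl keys map to files under /proc/sys, with dots replaced by slashes.
# Read the current value of a (real, always-present) key as a demo:
key="kernel/ostype"
cat "/proc/sys/${key}"

# To change a tunable at runtime (requires root), either:
#   sysctl -w kernel.example_tunable=0     # placeholder name, see lead-in
# or write to the /proc/sys file directly:
#   echo 0 > /proc/sys/kernel/example_tunable
# Note that runtime changes are lost on reboot; persist them in
# /etc/sysctl.d/ if needed.
```

In a DaemonSet context, such a setting would have to be applied on every node (or baked into the node image), since sysctls are per-host, not per-pod.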
Seems relevant to add this to the main documentation, in the tuning section. WDYT? cc @leogr @danpopSD
👋 @Issif I agree, and I made https://github.com/falcosecurity/falco/issues/1721 to track updating the docs 😄
Great finding, thank you all! @MattUebel, thank you also for having opened the issue. BTW, I will move it to the falco-website repository. Anyway, I agree we definitely need to update the documentation to cover this case. Is anyone willing to make a PR? 😸
Issues go stale after 90d of inactivity. Mark the issue as fresh with …. Stale issues rot after an additional 30d of inactivity and eventually close. If this issue is safe to close now, please do so with …. Provide feedback via https://github.com/falcosecurity/community. /lifecycle stale
/close
@stephanmiehe: Closing this issue.
Describe the bug
We're seeing Falco cause a significant increase in CPU utilization when deployed as a DaemonSet to our kube clusters. We removed all rules from the deployment to troubleshoot, but resource utilization remained the same, which suggests the spike is unrelated to our ruleset. We have 1000+ pods and 100+ nodes. The largest jump is generally seen in the kube-system namespace, but roughly ten other namespaces, which host our custom application workloads, also spike.
How to reproduce it
Deploy Falco 0.29.1 to a large kube cluster with the eBPF probe enabled.
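A minimal sketch of such a deployment using the falcosecurity Helm chart. This is an assumption about the setup, not the reporter's exact commands, and chart flag names vary across chart versions, so verify them against the chart's values.yaml before use:

```shell
# Sketch only: chart values differ by version; check `helm show values
# falcosecurity/falco` for the eBPF toggle in your chart release.
helm repo add falcosecurity https://falcosecurity.github.io/charts
helm repo update

# Deploy Falco as a DaemonSet with the eBPF probe instead of the kernel module
helm install falco falcosecurity/falco \
  --namespace falco --create-namespace \
  --set ebpf.enabled=true
```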
Expected behaviour
Based on the documentation, this CPU consumption is significantly higher than what is expected for a cluster of this size; we would expect roughly a 5% CPU increase.
Screenshots
Environment
Falco version: 0.29.1-2+6016c59
Driver version: f7029e2522cc4c81841817abeeeaa515ed944b6c
Additional context