[CWS] add fentry fallback #33825
Conversation
pkg/security/probe/probe_ebpf.go (Outdated)
```go
if err := p.eventStream.Init(p.Manager, p.config.Probe); err != nil {
	return err
}

if err := p.initEBPFManager(); err != nil {
	if !p.config.Probe.EventStreamUseKprobeFallback {
```
Should we also check whether `useFentry` is false here, in order to avoid reloading a failing kprobe mode twice?
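For illustration, here is a minimal, self-contained Go sketch of the fallback shape under discussion. All names here (`probe`, `kprobeFallback`, `initEBPFManager`, `init`) are invented stand-ins for the real types in pkg/security/probe, and the fentry failure is simulated; checking `useFentry` before retrying is exactly the double-reload guard this comment asks about:

```go
package main

import (
	"errors"
	"fmt"
)

// probe is a hypothetical stand-in for the real probe type, which is
// much larger and lives in pkg/security/probe.
type probe struct {
	useFentry      bool
	kprobeFallback bool // stands in for config.Probe.EventStreamUseKprobeFallback
}

var errFentryNotSupported = errors.New("fentry not supported on this kernel")

// initEBPFManager stands in for the real manager initialization; here it
// simply fails whenever fentry is requested, to exercise the fallback path.
func (p *probe) initEBPFManager() error {
	if p.useFentry {
		return errFentryNotSupported
	}
	return nil
}

// init attaches the probes, falling back from fentry to kprobes at most
// once. The !p.useFentry check ensures a load that already ran in kprobe
// mode fails fast instead of being reloaded in the same failing mode.
func (p *probe) init() error {
	if err := p.initEBPFManager(); err != nil {
		if !p.useFentry || !p.kprobeFallback {
			return err
		}
		fmt.Printf("fentry attach failed (%v), falling back to kprobes\n", err)
		p.useFentry = false
		return p.initEBPFManager()
	}
	return nil
}

func main() {
	p := &probe{useFentry: true, kprobeFallback: true}
	if err := p.init(); err != nil {
		fmt.Println("init failed:", err)
		return
	}
	fmt.Println("probes attached, fentry =", p.useFentry)
}
```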
LGTM for agent-configuration.
Force-pushed from 3eb88fe to 30c10ac.
Uncompressed package size comparison
Comparison with ancestor, diff per package.
Decision: ✅ Passed
Test changes on VM
Use this command from test-infra-definitions to manually test this PR's changes on a VM: `inv aws.create-vm --pipeline-id=55634133 --os-family=ubuntu`
Note: This applies to commit 0759539.
Static quality checks: ✅
Please find below the results from the static quality gates.
Regression Detector
Regression Detector Results (metrics dashboard)
Baseline: 4c272a6
Optimization Goals: ✅ No significant changes detected
perf | experiment | goal | Δ mean % | Δ mean % CI | trials | links |
---|---|---|---|---|---|---|
➖ | quality_gate_logs | % cpu utilization | +0.78 | [-2.29, +3.85] | 1 | Logs |
➖ | tcp_syslog_to_blackhole | ingress throughput | +0.43 | [+0.38, +0.48] | 1 | Logs |
➖ | uds_dogstatsd_to_api_cpu | % cpu utilization | +0.14 | [-0.77, +1.04] | 1 | Logs |
➖ | file_to_blackhole_0ms_latency_http2 | egress throughput | +0.03 | [-0.89, +0.96] | 1 | Logs |
➖ | file_to_blackhole_0ms_latency_http1 | egress throughput | +0.01 | [-0.87, +0.89] | 1 | Logs |
➖ | file_to_blackhole_300ms_latency | egress throughput | +0.00 | [-0.63, +0.64] | 1 | Logs |
➖ | tcp_dd_logs_filter_exclude | ingress throughput | +0.00 | [-0.02, +0.03] | 1 | Logs |
➖ | uds_dogstatsd_to_api | ingress throughput | -0.00 | [-0.28, +0.27] | 1 | Logs |
➖ | file_to_blackhole_0ms_latency | egress throughput | -0.02 | [-0.97, +0.94] | 1 | Logs |
➖ | file_to_blackhole_100ms_latency | egress throughput | -0.02 | [-0.68, +0.63] | 1 | Logs |
➖ | quality_gate_idle_all_features | memory utilization | -0.09 | [-0.14, -0.04] | 1 | Logs bounds checks dashboard |
➖ | file_to_blackhole_500ms_latency | egress throughput | -0.11 | [-0.89, +0.68] | 1 | Logs |
➖ | file_to_blackhole_1000ms_latency_linear_load | egress throughput | -0.13 | [-0.59, +0.34] | 1 | Logs |
➖ | quality_gate_idle | memory utilization | -0.23 | [-0.26, -0.19] | 1 | Logs bounds checks dashboard |
➖ | file_to_blackhole_1000ms_latency | egress throughput | -0.30 | [-1.07, +0.47] | 1 | Logs |
➖ | file_tree | memory utilization | -0.33 | [-0.39, -0.27] | 1 | Logs |
Bounds Checks: ✅ Passed
perf | experiment | bounds_check_name | replicates_passed | links |
---|---|---|---|---|
✅ | file_to_blackhole_0ms_latency | lost_bytes | 10/10 | |
✅ | file_to_blackhole_0ms_latency | memory_usage | 10/10 | |
✅ | file_to_blackhole_0ms_latency_http1 | lost_bytes | 10/10 | |
✅ | file_to_blackhole_0ms_latency_http1 | memory_usage | 10/10 | |
✅ | file_to_blackhole_0ms_latency_http2 | lost_bytes | 10/10 | |
✅ | file_to_blackhole_0ms_latency_http2 | memory_usage | 10/10 | |
✅ | file_to_blackhole_1000ms_latency | memory_usage | 10/10 | |
✅ | file_to_blackhole_1000ms_latency_linear_load | memory_usage | 10/10 | |
✅ | file_to_blackhole_100ms_latency | lost_bytes | 10/10 | |
✅ | file_to_blackhole_100ms_latency | memory_usage | 10/10 | |
✅ | file_to_blackhole_300ms_latency | lost_bytes | 10/10 | |
✅ | file_to_blackhole_300ms_latency | memory_usage | 10/10 | |
✅ | file_to_blackhole_500ms_latency | lost_bytes | 10/10 | |
✅ | file_to_blackhole_500ms_latency | memory_usage | 10/10 | |
✅ | quality_gate_idle | intake_connections | 10/10 | bounds checks dashboard |
✅ | quality_gate_idle | memory_usage | 10/10 | bounds checks dashboard |
✅ | quality_gate_idle_all_features | intake_connections | 10/10 | bounds checks dashboard |
✅ | quality_gate_idle_all_features | memory_usage | 10/10 | bounds checks dashboard |
✅ | quality_gate_logs | intake_connections | 10/10 | |
✅ | quality_gate_logs | lost_bytes | 10/10 | |
✅ | quality_gate_logs | memory_usage | 10/10 |
Explanation
Confidence level: 90.00%
Effect size tolerance: |Δ mean %| ≥ 5.00%
Performance changes are noted in the perf column of each table:
- ✅ = significantly better comparison variant performance
- ❌ = significantly worse comparison variant performance
- ➖ = no significant change in performance
A regression test is an A/B test of target performance in a repeatable rig, where "performance" is measured as "comparison variant minus baseline variant" for an optimization goal (e.g., ingress throughput). Due to intrinsic variability in measuring that goal, we can only estimate its mean value for each experiment; we report uncertainty in that value as a 90.00% confidence interval denoted "Δ mean % CI".
For each experiment, we decide whether a change in performance is a "regression" -- a change worth investigating further -- if all of the following criteria are true (a code sketch of this decision rule follows the list):
- Its estimated |Δ mean %| ≥ 5.00%, indicating the change is big enough to merit a closer look.
- Its 90.00% confidence interval "Δ mean % CI" does not contain zero, indicating that if our statistical model is accurate, there is at least a 90.00% chance there is a difference in performance between baseline and comparison variants.
- Its configuration does not mark it "erratic".
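Expressed as code, the criteria above amount to a small predicate. The following Go sketch is illustrative only; the function and parameter names are invented here and do not reflect the Regression Detector's actual implementation:

```go
package main

import "fmt"

// isRegression applies the decision rule described above: a change is
// flagged only when the estimated effect is at least the tolerance and
// its confidence interval excludes zero. The erratic flag comes from
// the experiment's configuration.
func isRegression(deltaMeanPct, ciLow, ciHigh float64, erratic bool) bool {
	const tolerance = 5.0 // effect size tolerance: |Δ mean %| ≥ 5.00%
	bigEnough := deltaMeanPct >= tolerance || deltaMeanPct <= -tolerance
	ciExcludesZero := ciLow > 0 || ciHigh < 0
	return bigEnough && ciExcludesZero && !erratic
}

func main() {
	// tcp_syslog_to_blackhole from the table above: +0.43 [+0.38, +0.48].
	// The CI excludes zero, but the effect is far below the 5% tolerance,
	// so it is not flagged as a regression.
	fmt.Println(isRegression(0.43, 0.38, 0.48, false)) // false
}
```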
CI Pass/Fail Decision
✅ Passed. All Quality Gates passed.
- quality_gate_logs, bounds check intake_connections: 10/10 replicas passed. Gate passed.
- quality_gate_logs, bounds check memory_usage: 10/10 replicas passed. Gate passed.
- quality_gate_logs, bounds check lost_bytes: 10/10 replicas passed. Gate passed.
- quality_gate_idle, bounds check intake_connections: 10/10 replicas passed. Gate passed.
- quality_gate_idle, bounds check memory_usage: 10/10 replicas passed. Gate passed.
- quality_gate_idle_all_features, bounds check intake_connections: 10/10 replicas passed. Gate passed.
- quality_gate_idle_all_features, bounds check memory_usage: 10/10 replicas passed. Gate passed.
Force-pushed from 13f9b55 to 403010e.
LGTM, but I wonder whether we could have a test environment to validate the fallback. WDYT?
/merge
View all feedbacks in Devflow UI.
This merge request conflicts with another merge request ahead in the queue.
Force-pushed from 403010e to 0759539.
/merge
View all feedbacks in Devflow UI.
What does this PR do?
Add a fallback to kprobes when the eBPF manager fails to attach fentry.
Remove the `Setup` function of the probe implementations.
Motivation
Describe how you validated your changes
Possible Drawbacks / Trade-offs
Additional Notes