diff --git a/tests/results/dp-perf/1.5.0/1.5.0-oss.md b/tests/results/dp-perf/1.5.0/1.5.0-oss.md new file mode 100644 index 0000000000..fb581daa7d --- /dev/null +++ b/tests/results/dp-perf/1.5.0/1.5.0-oss.md @@ -0,0 +1,92 @@ +# Results + +## Test environment + +NGINX Plus: false + +NGINX Gateway Fabric: + +- Commit: 8624530af3c518afd8f7013566a102e8b3497b76 +- Date: 2024-11-11T18:50:09Z +- Dirty: false + +GKE Cluster: + +- Node count: 12 +- k8s version: v1.30.5-gke.1443001 +- vCPUs per node: 16 +- RAM per node: 65853972Ki +- Max pods per node: 110 +- Zone: us-west2-a +- Instance Type: n2d-standard-16 + +## Summary: + +- Performance seems to have improved, with better latency and response times across all routing methods. + + +## Test1: Running latte path based routing + +```text +Requests [total, rate, throughput] 30000, 1000.03, 999.28 +Duration [total, attack, wait] 30s, 29.999s, 532.506µs +Latencies [min, mean, 50, 90, 95, 99, max] 368.077µs, 659.422µs, 631.038µs, 721.486µs, 756.087µs, 878.907µs, 12.742ms +Bytes In [total, mean] 4800660, 160.02 +Bytes Out [total, mean] 0, 0.00 +Success [ratio] 99.93% +Status Codes [code:count] 200:29978 503:22 +Error Set: +503 Service Temporarily Unavailable +``` + +## Test2: Running coffee header based routing + +```text +Requests [total, rate, throughput] 30000, 1000.03, 1000.01 +Duration [total, attack, wait] 30s, 29.999s, 611.932µs +Latencies [min, mean, 50, 90, 95, 99, max] 514.848µs, 666.682µs, 653.935µs, 741.683µs, 777.382µs, 867.041µs, 11.422ms +Bytes In [total, mean] 4830000, 161.00 +Bytes Out [total, mean] 0, 0.00 +Success [ratio] 100.00% +Status Codes [code:count] 200:30000 +Error Set: +``` + +## Test3: Running coffee query based routing + +```text +Requests [total, rate, throughput] 30000, 1000.03, 1000.01 +Duration [total, attack, wait] 30s, 29.999s, 618.046µs +Latencies [min, mean, 50, 90, 95, 99, max] 511.713µs, 672.907µs, 658.846µs, 751.753µs, 786.911µs, 881.607µs, 10.507ms +Bytes In [total, mean] 5070000, 
169.00 +Bytes Out [total, mean] 0, 0.00 +Success [ratio] 100.00% +Status Codes [code:count] 200:30000 +Error Set: +``` + +## Test4: Running tea GET method based routing + +```text +Requests [total, rate, throughput] 30000, 1000.01, 999.99 +Duration [total, attack, wait] 30s, 30s, 597.097µs +Latencies [min, mean, 50, 90, 95, 99, max] 506.955µs, 651.103µs, 638.079µs, 720.439µs, 752.758µs, 828.588µs, 11.282ms +Bytes In [total, mean] 4740000, 158.00 +Bytes Out [total, mean] 0, 0.00 +Success [ratio] 100.00% +Status Codes [code:count] 200:30000 +Error Set: +``` + +## Test5: Running tea POST method based routing + +```text +Requests [total, rate, throughput] 30000, 1000.02, 1000.00 +Duration [total, attack, wait] 30s, 29.999s, 596.477µs +Latencies [min, mean, 50, 90, 95, 99, max] 503.899µs, 650.611µs, 639.013µs, 718.258µs, 748.085µs, 827.88µs, 9.075ms +Bytes In [total, mean] 4740000, 158.00 +Bytes Out [total, mean] 0, 0.00 +Success [ratio] 100.00% +Status Codes [code:count] 200:30000 +Error Set: +``` diff --git a/tests/results/dp-perf/1.5.0/1.5.0-plus.md b/tests/results/dp-perf/1.5.0/1.5.0-plus.md new file mode 100644 index 0000000000..1b6a06d364 --- /dev/null +++ b/tests/results/dp-perf/1.5.0/1.5.0-plus.md @@ -0,0 +1,90 @@ +# Results + +## Test environment + +NGINX Plus: true + +NGINX Gateway Fabric: + +- Commit: a0126a6435dd4bd69c1a7f48ee15eecb76c68400 +- Date: 2024-11-12T20:33:03Z +- Dirty: false + +GKE Cluster: + +- Node count: 12 +- k8s version: v1.30.5-gke.1443001 +- vCPUs per node: 16 +- RAM per node: 65853972Ki +- Max pods per node: 110 +- Zone: us-west2-a +- Instance Type: n2d-standard-16 + +## Summary: + +- Performance seems consistent with previous test run. 
+ +## Test1: Running latte path based routing + +```text +Requests [total, rate, throughput] 30000, 1000.02, 1000.00 +Duration [total, attack, wait] 30s, 29.999s, 676.331µs +Latencies [min, mean, 50, 90, 95, 99, max] 491.485µs, 689.253µs, 676.054µs, 771.129µs, 806.996µs, 909.616µs, 10.138ms +Bytes In [total, mean] 4800000, 160.00 +Bytes Out [total, mean] 0, 0.00 +Success [ratio] 100.00% +Status Codes [code:count] 200:30000 +Error Set: +``` + +## Test2: Running coffee header based routing + +```text +Requests [total, rate, throughput] 30000, 1000.01, 999.99 +Duration [total, attack, wait] 30s, 30s, 686.479µs +Latencies [min, mean, 50, 90, 95, 99, max] 533.29µs, 716.92µs, 703.946µs, 799.238µs, 835.966µs, 942.918µs, 11.356ms +Bytes In [total, mean] 4830000, 161.00 +Bytes Out [total, mean] 0, 0.00 +Success [ratio] 100.00% +Status Codes [code:count] 200:30000 +Error Set: +``` + +## Test3: Running coffee query based routing + +```text +Requests [total, rate, throughput] 30000, 1000.01, 999.98 +Duration [total, attack, wait] 30s, 30s, 682.739µs +Latencies [min, mean, 50, 90, 95, 99, max] 549.612µs, 724.458µs, 711.218µs, 810.286µs, 846.648µs, 953.929µs, 9.249ms +Bytes In [total, mean] 5070000, 169.00 +Bytes Out [total, mean] 0, 0.00 +Success [ratio] 100.00% +Status Codes [code:count] 200:30000 +Error Set: +``` + +## Test4: Running tea GET method based routing + +```text +Requests [total, rate, throughput] 30000, 1000.01, 999.98 +Duration [total, attack, wait] 30.001s, 30s, 683.465µs +Latencies [min, mean, 50, 90, 95, 99, max] 528.936µs, 716.691µs, 698.583µs, 797.784µs, 834.023µs, 930.167µs, 16.219ms +Bytes In [total, mean] 4740000, 158.00 +Bytes Out [total, mean] 0, 0.00 +Success [ratio] 100.00% +Status Codes [code:count] 200:30000 +Error Set: +``` + +## Test5: Running tea POST method based routing + +```text +Requests [total, rate, throughput] 30000, 1000.01, 999.99 +Duration [total, attack, wait] 30s, 30s, 719.615µs +Latencies [min, mean, 50, 90, 95, 99, max] 545.338µs, 
715.216µs, 702.127µs, 799.224µs, 835.977µs, 940.498µs, 11.445ms +Bytes In [total, mean] 4740000, 158.00 +Bytes Out [total, mean] 0, 0.00 +Success [ratio] 100.00% +Status Codes [code:count] 200:30000 +Error Set: +``` diff --git a/tests/results/longevity/1.5.0/oss-cpu.png b/tests/results/longevity/1.5.0/oss-cpu.png new file mode 100644 index 0000000000..e95802ca12 Binary files /dev/null and b/tests/results/longevity/1.5.0/oss-cpu.png differ diff --git a/tests/results/longevity/1.5.0/oss-memory.png b/tests/results/longevity/1.5.0/oss-memory.png new file mode 100644 index 0000000000..9e38c7c0d5 Binary files /dev/null and b/tests/results/longevity/1.5.0/oss-memory.png differ diff --git a/tests/results/longevity/1.5.0/oss-ngf-memory.png b/tests/results/longevity/1.5.0/oss-ngf-memory.png new file mode 100644 index 0000000000..830aeb5316 Binary files /dev/null and b/tests/results/longevity/1.5.0/oss-ngf-memory.png differ diff --git a/tests/results/longevity/1.5.0/oss-reload-time.png b/tests/results/longevity/1.5.0/oss-reload-time.png new file mode 100644 index 0000000000..80d6346e42 Binary files /dev/null and b/tests/results/longevity/1.5.0/oss-reload-time.png differ diff --git a/tests/results/longevity/1.5.0/oss-reloads.png b/tests/results/longevity/1.5.0/oss-reloads.png new file mode 100644 index 0000000000..cf031de06c Binary files /dev/null and b/tests/results/longevity/1.5.0/oss-reloads.png differ diff --git a/tests/results/longevity/1.5.0/oss-stub-status.png b/tests/results/longevity/1.5.0/oss-stub-status.png new file mode 100644 index 0000000000..7612ddc1c2 Binary files /dev/null and b/tests/results/longevity/1.5.0/oss-stub-status.png differ diff --git a/tests/results/longevity/1.5.0/oss.md b/tests/results/longevity/1.5.0/oss.md new file mode 100644 index 0000000000..343454e449 --- /dev/null +++ b/tests/results/longevity/1.5.0/oss.md @@ -0,0 +1,99 @@ +# Results + +## Test environment + +NGINX Plus: false + +NGINX Gateway Fabric: + +- Commit: 
36f245bcba55935064324ff5803d66110117f7da +- Date: 2024-11-08T19:20:48Z +- Dirty: false + +GKE Cluster: + +- Node count: 2 +- k8s version: v1.30.5-gke.1443001 +- vCPUs per node: 2 +- RAM per node: 4018120Ki +- Max pods per node: 110 +- Zone: us-west2-a +- Instance Type: e2-medium + +## Traffic + +HTTP: + +```text +Running 5760m test @ http://cafe.example.com/coffee + 2 threads and 100 connections + Thread Stats Avg Stdev Max +/- Stdev + Latency 236.88ms 177.22ms 2.00s 72.93% + Req/Sec 232.09 156.40 1.90k 66.16% + 156451087 requests in 5760.00m, 53.52GB read + Socket errors: connect 0, read 350645, write 0, timeout 75472 +Requests/sec: 452.69 +Transfer/sec: 162.39KB +``` + +HTTPS: + +```text +Running 5760m test @ https://cafe.example.com/tea + 2 threads and 100 connections + Thread Stats Avg Stdev Max +/- Stdev + Latency 223.09ms 138.95ms 2.00s 63.95% + Req/Sec 230.23 155.14 1.80k 66.18% + 155166081 requests in 5760.00m, 52.20GB read + Socket errors: connect 0, read 345712, write 0, timeout 176 +Requests/sec: 448.98 +Transfer/sec: 158.37KB +``` + + +### Logs + +No error logs in nginx-gateway + +Error logs in nginx + +We could not get non-2xx error counts from the cluster, but they are likely similar to last release's issues. + +### Key Metrics + +#### Containers memory + +![oss-memory.png](oss-memory.png) + +#### NGF Container Memory + +![oss-ngf-memory.png](oss-ngf-memory.png) + +### Containers CPU + +![oss-cpu.png](oss-cpu.png) + +### NGINX metrics + +![oss-stub-status.png](oss-stub-status.png) + +### Reloads + +Rate of reloads - successful and errors: + +![oss-reloads.png](oss-reloads.png) + +Reload spikes correspond to 1 hour periods of backend re-rollouts. + +No reloads finished with an error. + +Reload time distribution - counts: + +![oss-reload-time.png](oss-reload-time.png) + + +## Comparison with previous runs + +Graphs look similar to 1.4.0 results. 
diff --git a/tests/results/longevity/1.5.0/plus-cpu.png b/tests/results/longevity/1.5.0/plus-cpu.png new file mode 100644 index 0000000000..041bfcd218 Binary files /dev/null and b/tests/results/longevity/1.5.0/plus-cpu.png differ diff --git a/tests/results/longevity/1.5.0/plus-memory.png b/tests/results/longevity/1.5.0/plus-memory.png new file mode 100644 index 0000000000..968aaad7aa Binary files /dev/null and b/tests/results/longevity/1.5.0/plus-memory.png differ diff --git a/tests/results/longevity/1.5.0/plus-ngf-memory.png b/tests/results/longevity/1.5.0/plus-ngf-memory.png new file mode 100644 index 0000000000..0fc499287a Binary files /dev/null and b/tests/results/longevity/1.5.0/plus-ngf-memory.png differ diff --git a/tests/results/longevity/1.5.0/plus-reloads.png b/tests/results/longevity/1.5.0/plus-reloads.png new file mode 100644 index 0000000000..117d84263a Binary files /dev/null and b/tests/results/longevity/1.5.0/plus-reloads.png differ diff --git a/tests/results/longevity/1.5.0/plus-status.png b/tests/results/longevity/1.5.0/plus-status.png new file mode 100644 index 0000000000..ced70dccb2 Binary files /dev/null and b/tests/results/longevity/1.5.0/plus-status.png differ diff --git a/tests/results/longevity/1.5.0/plus.md b/tests/results/longevity/1.5.0/plus.md new file mode 100644 index 0000000000..dd26556ed8 --- /dev/null +++ b/tests/results/longevity/1.5.0/plus.md @@ -0,0 +1,91 @@ +# Results + +## Test environment + +NGINX Plus: true + +NGINX Gateway Fabric: + +- Commit: 36f245bcba55935064324ff5803d66110117f7da +- Date: 2024-11-08T19:20:48Z +- Dirty: false + +GKE Cluster: + +- Node count: 2 +- k8s version: v1.30.5-gke.1443001 +- vCPUs per node: 2 +- RAM per node: 4018120Ki +- Max pods per node: 110 +- Zone: us-west2-a +- Instance Type: e2-medium + +## Traffic + +HTTP: + +```text +Running 5760m test @ http://cafe.example.com/coffee + 2 threads and 100 connections + Thread Stats Avg Stdev Max +/- Stdev + Latency 228.08ms 136.20ms 1.92s 63.92% + Req/Sec 
232.02 153.44 1.71k 66.90% + 156457702 requests in 5760.00m, 53.53GB read + Non-2xx or 3xx responses: 5 +Requests/sec: 452.71 +Transfer/sec: 162.41KB +``` + +HTTPS: + +```text +Running 5760m test @ https://cafe.example.com/tea + 2 threads and 100 connections + Thread Stats Avg Stdev Max +/- Stdev + Latency 229.75ms 136.23ms 1.92s 63.81% + Req/Sec 229.91 151.31 1.63k 66.59% + 155060805 requests in 5760.00m, 52.19GB read + Non-2xx or 3xx responses: 3 +Requests/sec: 448.67 +Transfer/sec: 158.33KB +``` + +### Logs + +No error logs in nginx-gateway + +Error logs in nginx + +We could not get non-2xx error counts from the cluster, but they are likely similar to last release's issues. + + +### Key Metrics + +#### Containers memory + +![plus-memory.png](plus-memory.png) + +#### NGF Container Memory + +![plus-ngf-memory.png](plus-ngf-memory.png) + +### Containers CPU + +![plus-cpu.png](plus-cpu.png) + +### NGINX Plus metrics + +![plus-status.png](plus-status.png) + +### Reloads + +Rate of reloads - successful and errors: + +![plus-reloads.png](plus-reloads.png) + +Note: compared to NGINX, we don't have as many reloads here, because NGF uses the NGINX Plus API to reconfigure NGINX +for endpoint changes. + +## Comparison with previous runs + +Graphs look similar to 1.4.0 results. 
diff --git a/tests/results/ngf-upgrade/1.5.0/1.5.0-oss.md b/tests/results/ngf-upgrade/1.5.0/1.5.0-oss.md new file mode 100644 index 0000000000..9492275003 --- /dev/null +++ b/tests/results/ngf-upgrade/1.5.0/1.5.0-oss.md @@ -0,0 +1,55 @@ +# Results + +## Test environment + +NGINX Plus: false + +NGINX Gateway Fabric: + +- Commit: 8624530af3c518afd8f7013566a102e8b3497b76 +- Date: 2024-11-11T18:50:09Z +- Dirty: false + +GKE Cluster: + +- Node count: 12 +- k8s version: v1.30.5-gke.1443001 +- vCPUs per node: 16 +- RAM per node: 65853972Ki +- Max pods per node: 110 +- Zone: us-west2-a +- Instance Type: n2d-standard-16 + +## Summary: + +- Performance has slightly improved, with reduced latencies across both HTTPS and HTTP traffic. + +## Test: Send https /tea traffic + +```text +Requests [total, rate, throughput] 6000, 100.02, 100.01 +Duration [total, attack, wait] 59.992s, 59.991s, 932.645µs +Latencies [min, mean, 50, 90, 95, 99, max] 423.711µs, 787.549µs, 794.414µs, 912.095µs, 954.864µs, 1.145ms, 5.769ms +Bytes In [total, mean] 936000, 156.00 +Bytes Out [total, mean] 0, 0.00 +Success [ratio] 100.00% +Status Codes [code:count] 200:6000 +Error Set: +``` + +![https-oss.png](https-oss.png) + +## Test: Send http /coffee traffic + +```text +Requests [total, rate, throughput] 6000, 100.02, 100.01 +Duration [total, attack, wait] 59.991s, 59.991s, 841.48µs +Latencies [min, mean, 50, 90, 95, 99, max] 571.076µs, 810.669µs, 797.263µs, 906.628µs, 949.507µs, 1.075ms, 4.51ms +Bytes In [total, mean] 972000, 162.00 +Bytes Out [total, mean] 0, 0.00 +Success [ratio] 100.00% +Status Codes [code:count] 200:6000 +Error Set: +``` + +![http-oss.png](http-oss.png) diff --git a/tests/results/ngf-upgrade/1.5.0/1.5.0-plus.md b/tests/results/ngf-upgrade/1.5.0/1.5.0-plus.md new file mode 100644 index 0000000000..81c0073171 --- /dev/null +++ b/tests/results/ngf-upgrade/1.5.0/1.5.0-plus.md @@ -0,0 +1,55 @@ +# Results + +## Test environment + +NGINX Plus: true + +NGINX Gateway Fabric: + +- Commit: 
a0126a6435dd4bd69c1a7f48ee15eecb76c68400 +- Date: 2024-11-12T20:33:03Z +- Dirty: false + +GKE Cluster: + +- Node count: 12 +- k8s version: v1.30.5-gke.1443001 +- vCPUs per node: 16 +- RAM per node: 65853972Ki +- Max pods per node: 110 +- Zone: us-west2-a +- Instance Type: n2d-standard-16 + +## Summary: + +- Performance has slightly improved. + +## Test: Send https /tea traffic + +```text +Requests [total, rate, throughput] 6000, 100.02, 100.01 +Duration [total, attack, wait] 59.991s, 59.99s, 913.55µs +Latencies [min, mean, 50, 90, 95, 99, max] 641.837µs, 869.912µs, 849.956µs, 964.838µs, 1.013ms, 1.148ms, 6.51ms +Bytes In [total, mean] 930000, 155.00 +Bytes Out [total, mean] 0, 0.00 +Success [ratio] 100.00% +Status Codes [code:count] 200:6000 +Error Set: +``` + +![https-plus.png](https-plus.png) + +## Test: Send http /coffee traffic + +```text +Requests [total, rate, throughput] 6000, 100.02, 100.02 +Duration [total, attack, wait] 59.991s, 59.99s, 598.948µs +Latencies [min, mean, 50, 90, 95, 99, max] 462.116µs, 857.769µs, 840.074µs, 963.374µs, 1.013ms, 1.155ms, 19.413ms +Bytes In [total, mean] 966000, 161.00 +Bytes Out [total, mean] 0, 0.00 +Success [ratio] 100.00% +Status Codes [code:count] 200:6000 +Error Set: +``` + +![http-plus.png](http-plus.png) diff --git a/tests/results/ngf-upgrade/1.5.0/http-oss.png b/tests/results/ngf-upgrade/1.5.0/http-oss.png new file mode 100644 index 0000000000..59cc11af97 Binary files /dev/null and b/tests/results/ngf-upgrade/1.5.0/http-oss.png differ diff --git a/tests/results/ngf-upgrade/1.5.0/http-plus.png b/tests/results/ngf-upgrade/1.5.0/http-plus.png new file mode 100644 index 0000000000..f4e6d5ac04 Binary files /dev/null and b/tests/results/ngf-upgrade/1.5.0/http-plus.png differ diff --git a/tests/results/ngf-upgrade/1.5.0/https-oss.png b/tests/results/ngf-upgrade/1.5.0/https-oss.png new file mode 100644 index 0000000000..59cc11af97 Binary files /dev/null and b/tests/results/ngf-upgrade/1.5.0/https-oss.png differ diff --git 
a/tests/results/ngf-upgrade/1.5.0/https-plus.png b/tests/results/ngf-upgrade/1.5.0/https-plus.png new file mode 100644 index 0000000000..f4e6d5ac04 Binary files /dev/null and b/tests/results/ngf-upgrade/1.5.0/https-plus.png differ diff --git a/tests/results/reconfig/1.5.0/1.5.0-oss.md b/tests/results/reconfig/1.5.0/1.5.0-oss.md new file mode 100644 index 0000000000..647d7b597c --- /dev/null +++ b/tests/results/reconfig/1.5.0/1.5.0-oss.md @@ -0,0 +1,193 @@ +# Results + +## Test environment + +NGINX Plus: false + +NGINX Gateway Fabric: + +- Commit: 8624530af3c518afd8f7013566a102e8b3497b76 +- Date: 2024-11-11T18:50:09Z +- Dirty: false + +GKE Cluster: + +- Node count: 12 +- k8s version: v1.30.5-gke.1443001 +- vCPUs per node: 16 +- RAM per node: 65853972Ki +- Max pods per node: 110 +- Zone: us-west2-a +- Instance Type: n2d-standard-16 + +## Summary: + +- Performance seems consistent with previous run. + +## Test 1: Resources exist before startup - NumResources 30 + +### Reloads and Time to Ready + +- TimeToReadyTotal: 3s +- TimeToReadyAvgSingle: < 1s +- NGINX Reloads: 2 +- NGINX Reload Average Time: 126ms +- Reload distribution: + - 500ms: 2 + - 1000ms: 2 + - 5000ms: 2 + - 10000ms: 2 + - 30000ms: 2 + - +Infms: 2 + +### Event Batch Processing + +- Event Batch Total: 6 +- Event Batch Processing Average Time: 51ms +- Event Batch Processing distribution: + - 500ms: 6 + - 1000ms: 6 + - 5000ms: 6 + - 10000ms: 6 + - 30000ms: 6 + - +Infms: 6 + +## Test 1: Resources exist before startup - NumResources 150 + +### Reloads and Time to Ready + +- TimeToReadyTotal: 1s +- TimeToReadyAvgSingle: < 1s +- NGINX Reloads: 2 +- NGINX Reload Average Time: 138ms +- Reload distribution: + - 500ms: 2 + - 1000ms: 2 + - 5000ms: 2 + - 10000ms: 2 + - 30000ms: 2 + - +Infms: 2 + +### Event Batch Processing + +- Event Batch Total: 6 +- Event Batch Processing Average Time: 55ms +- Event Batch Processing distribution: + - 500ms: 6 + - 1000ms: 6 + - 5000ms: 6 + - 10000ms: 6 + - 30000ms: 6 + - +Infms: 6 + 
+## Test 2: Start NGF, deploy Gateway, create many resources attached to GW - NumResources 30 + +### Reloads and Time to Ready + +- TimeToReadyTotal: 7s +- TimeToReadyAvgSingle: < 1s +- NGINX Reloads: 52 +- NGINX Reload Average Time: 150ms +- Reload distribution: + - 500ms: 52 + - 1000ms: 52 + - 5000ms: 52 + - 10000ms: 52 + - 30000ms: 52 + - +Infms: 52 + +### Event Batch Processing + +- Event Batch Total: 328 +- Event Batch Processing Average Time: 24ms +- Event Batch Processing distribution: + - 500ms: 328 + - 1000ms: 328 + - 5000ms: 328 + - 10000ms: 328 + - 30000ms: 328 + - +Infms: 328 + +## Test 2: Start NGF, deploy Gateway, create many resources attached to GW - NumResources 150 + +### Reloads and Time to Ready + +- TimeToReadyTotal: 44s +- TimeToReadyAvgSingle: < 1s +- NGINX Reloads: 283 +- NGINX Reload Average Time: 152ms +- Reload distribution: + - 500ms: 283 + - 1000ms: 283 + - 5000ms: 283 + - 10000ms: 283 + - 30000ms: 283 + - +Infms: 283 + +### Event Batch Processing + +- Event Batch Total: 1638 +- Event Batch Processing Average Time: 26ms +- Event Batch Processing distribution: + - 500ms: 1638 + - 1000ms: 1638 + - 5000ms: 1638 + - 10000ms: 1638 + - 30000ms: 1638 + - +Infms: 1638 + +## Test 3: Start NGF, create many resources attached to a Gateway, deploy the Gateway - NumResources 30 + +### Reloads and Time to Ready + +- TimeToReadyTotal: < 1s +- TimeToReadyAvgSingle: < 1s +- NGINX Reloads: 55 +- NGINX Reload Average Time: 148ms +- Reload distribution: + - 500ms: 55 + - 1000ms: 55 + - 5000ms: 55 + - 10000ms: 55 + - 30000ms: 55 + - +Infms: 55 + +### Event Batch Processing + +- Event Batch Total: 295 +- Event Batch Processing Average Time: 28ms +- Event Batch Processing distribution: + - 500ms: 295 + - 1000ms: 295 + - 5000ms: 295 + - 10000ms: 295 + - 30000ms: 295 + - +Infms: 295 + +## Test 3: Start NGF, create many resources attached to a Gateway, deploy the Gateway - NumResources 150 + +### Reloads and Time to Ready + +- TimeToReadyTotal: < 1s +- 
TimeToReadyAvgSingle: < 1s +- NGINX Reloads: 291 +- NGINX Reload Average Time: 150ms +- Reload distribution: + - 500ms: 291 + - 1000ms: 291 + - 5000ms: 291 + - 10000ms: 291 + - 30000ms: 291 + - +Infms: 291 + +### Event Batch Processing + +- Event Batch Total: 1484 +- Event Batch Processing Average Time: 29ms +- Event Batch Processing distribution: + - 500ms: 1484 + - 1000ms: 1484 + - 5000ms: 1484 + - 10000ms: 1484 + - 30000ms: 1484 + - +Infms: 1484 diff --git a/tests/results/reconfig/1.5.0/1.5.0-plus.md b/tests/results/reconfig/1.5.0/1.5.0-plus.md new file mode 100644 index 0000000000..dd9f8d8bc7 --- /dev/null +++ b/tests/results/reconfig/1.5.0/1.5.0-plus.md @@ -0,0 +1,193 @@ +# Results + +## Test environment + +NGINX Plus: true + +NGINX Gateway Fabric: + +- Commit: a0126a6435dd4bd69c1a7f48ee15eecb76c68400 +- Date: 2024-11-12T20:33:03Z +- Dirty: false + +GKE Cluster: + +- Node count: 12 +- k8s version: v1.30.5-gke.1443001 +- vCPUs per node: 16 +- RAM per node: 65853972Ki +- Max pods per node: 110 +- Zone: us-west2-a +- Instance Type: n2d-standard-16 + +## Summary: + +- Performance seems to be consistent with previous run but has slightly higher reload and event processing times. 
+ +## Test 1: Resources exist before startup - NumResources 30 + +### Reloads and Time to Ready + +- TimeToReadyTotal: 1s +- TimeToReadyAvgSingle: < 1s +- NGINX Reloads: 2 +- NGINX Reload Average Time: 125ms +- Reload distribution: + - 500ms: 2 + - 1000ms: 2 + - 5000ms: 2 + - 10000ms: 2 + - 30000ms: 2 + - +Infms: 2 + +### Event Batch Processing + +- Event Batch Total: 6 +- Event Batch Processing Average Time: 57ms +- Event Batch Processing distribution: + - 500ms: 6 + - 1000ms: 6 + - 5000ms: 6 + - 10000ms: 6 + - 30000ms: 6 + - +Infms: 6 + +## Test 1: Resources exist before startup - NumResources 150 + +### Reloads and Time to Ready + +- TimeToReadyTotal: 1s +- TimeToReadyAvgSingle: < 1s +- NGINX Reloads: 2 +- NGINX Reload Average Time: 125ms +- Reload distribution: + - 500ms: 2 + - 1000ms: 2 + - 5000ms: 2 + - 10000ms: 2 + - 30000ms: 2 + - +Infms: 2 + +### Event Batch Processing + +- Event Batch Total: 6 +- Event Batch Processing Average Time: 58ms +- Event Batch Processing distribution: + - 500ms: 6 + - 1000ms: 6 + - 5000ms: 6 + - 10000ms: 6 + - 30000ms: 6 + - +Infms: 6 + +## Test 2: Start NGF, deploy Gateway, create many resources attached to GW - NumResources 30 + +### Reloads and Time to Ready + +- TimeToReadyTotal: 7s +- TimeToReadyAvgSingle: < 1s +- NGINX Reloads: 45 +- NGINX Reload Average Time: 153ms +- Reload distribution: + - 500ms: 45 + - 1000ms: 45 + - 5000ms: 45 + - 10000ms: 45 + - 30000ms: 45 + - +Infms: 45 + +### Event Batch Processing + +- Event Batch Total: 321 +- Event Batch Processing Average Time: 25ms +- Event Batch Processing distribution: + - 500ms: 321 + - 1000ms: 321 + - 5000ms: 321 + - 10000ms: 321 + - 30000ms: 321 + - +Infms: 321 + +## Test 2: Start NGF, deploy Gateway, create many resources attached to GW - NumResources 150 + +### Reloads and Time to Ready + +- TimeToReadyTotal: 44s +- TimeToReadyAvgSingle: < 1s +- NGINX Reloads: 233 +- NGINX Reload Average Time: 158ms +- Reload distribution: + - 500ms: 233 + - 1000ms: 233 + - 5000ms: 233 
+ - 10000ms: 233 + - 30000ms: 233 + - +Infms: 233 + +### Event Batch Processing + +- Event Batch Total: 1588 +- Event Batch Processing Average Time: 27ms +- Event Batch Processing distribution: + - 500ms: 1588 + - 1000ms: 1588 + - 5000ms: 1588 + - 10000ms: 1588 + - 30000ms: 1588 + - +Infms: 1588 + +## Test 3: Start NGF, create many resources attached to a Gateway, deploy the Gateway - NumResources 30 + +### Reloads and Time to Ready + +- TimeToReadyTotal: < 1s +- TimeToReadyAvgSingle: < 1s +- NGINX Reloads: 44 +- NGINX Reload Average Time: 150ms +- Reload distribution: + - 500ms: 44 + - 1000ms: 44 + - 5000ms: 44 + - 10000ms: 44 + - 30000ms: 44 + - +Infms: 44 + +### Event Batch Processing + +- Event Batch Total: 283 +- Event Batch Processing Average Time: 29ms +- Event Batch Processing distribution: + - 500ms: 283 + - 1000ms: 283 + - 5000ms: 283 + - 10000ms: 283 + - 30000ms: 283 + - +Infms: 283 + +## Test 3: Start NGF, create many resources attached to a Gateway, deploy the Gateway - NumResources 150 + +### Reloads and Time to Ready + +- TimeToReadyTotal: < 1s +- TimeToReadyAvgSingle: < 1s +- NGINX Reloads: 227 +- NGINX Reload Average Time: 151ms +- Reload distribution: + - 500ms: 227 + - 1000ms: 227 + - 5000ms: 227 + - 10000ms: 227 + - 30000ms: 227 + - +Infms: 227 + +### Event Batch Processing + +- Event Batch Total: 1414 +- Event Batch Processing Average Time: 31ms +- Event Batch Processing distribution: + - 500ms: 1413 + - 1000ms: 1414 + - 5000ms: 1414 + - 10000ms: 1414 + - 30000ms: 1414 + - +Infms: 1414 diff --git a/tests/results/scale/1.5.0/1.5.0-oss.md b/tests/results/scale/1.5.0/1.5.0-oss.md new file mode 100644 index 0000000000..972cb40e94 --- /dev/null +++ b/tests/results/scale/1.5.0/1.5.0-oss.md @@ -0,0 +1,205 @@ +# Results + +## Test environment + +NGINX Plus: false + +NGINX Gateway Fabric: + +- Commit: 8624530af3c518afd8f7013566a102e8b3497b76 +- Date: 2024-11-11T18:50:09Z +- Dirty: false + +GKE Cluster: + +- Node count: 12 +- k8s version: 
v1.30.5-gke.1443001 +- vCPUs per node: 16 +- RAM per node: 65853972Ki +- Max pods per node: 110 +- Zone: us-west2-a +- Instance Type: n2d-standard-16 + +## Summary: + +- Performance seems consistent with the previous test run, with slightly higher reload times and some errors. + +## Test TestScale_Listeners + +### Reloads + +- Total: 125 +- Total Errors: 0 +- Average Time: 288ms +- Reload distribution: + - 500ms: 125 + - 1000ms: 125 + - 5000ms: 125 + - 10000ms: 125 + - 30000ms: 125 + - +Infms: 125 + +### Event Batch Processing + +- Total: 383 +- Average Time: 173ms +- Event Batch Processing distribution: + - 500ms: 320 + - 1000ms: 380 + - 5000ms: 383 + - 10000ms: 383 + - 30000ms: 383 + - +Infms: 383 + +### Errors + +- NGF errors: 3 +- NGF container restarts: 0 +- NGINX errors: 0 +- NGINX container restarts: 0 + +### Graphs and Logs + +See [output directory](./TestScale_Listeners) for more details. +The logs are attached only if there are errors. + +## Test TestScale_HTTPSListeners + +### Reloads + +- Total: 127 +- Total Errors: 0 +- Average Time: 370ms +- Reload distribution: + - 500ms: 98 + - 1000ms: 127 + - 5000ms: 127 + - 10000ms: 127 + - 30000ms: 127 + - +Infms: 127 + +### Event Batch Processing + +- Total: 449 +- Average Time: 178ms +- Event Batch Processing distribution: + - 500ms: 374 + - 1000ms: 430 + - 5000ms: 449 + - 10000ms: 449 + - 30000ms: 449 + - +Infms: 449 + +### Errors + +- NGF errors: 2 +- NGF container restarts: 0 +- NGINX errors: 0 +- NGINX container restarts: 0 + +### Graphs and Logs + +See [output directory](./TestScale_HTTPSListeners) for more details. +The logs are attached only if there are errors. 
+ +## Test TestScale_HTTPRoutes + +### Reloads + +- Total: 1001 +- Total Errors: 0 +- Average Time: 2579ms +- Reload distribution: + - 500ms: 76 + - 1000ms: 179 + - 5000ms: 972 + - 10000ms: 1001 + - 30000ms: 1001 + - +Infms: 1001 + +### Event Batch Processing + +- Total: 1008 +- Average Time: 2651ms +- Event Batch Processing distribution: + - 500ms: 76 + - 1000ms: 178 + - 5000ms: 963 + - 10000ms: 1008 + - 30000ms: 1008 + - +Infms: 1008 + +### Errors + +- NGF errors: 0 +- NGF container restarts: 0 +- NGINX errors: 0 +- NGINX container restarts: 0 + +### Graphs and Logs + +See [output directory](./TestScale_HTTPRoutes) for more details. +The logs are attached only if there are errors. + +## Test TestScale_UpstreamServers + +### Reloads + +- Total: 141 +- Total Errors: 0 +- Average Time: 151ms +- Reload distribution: + - 500ms: 141 + - 1000ms: 141 + - 5000ms: 141 + - 10000ms: 141 + - 30000ms: 141 + - +Infms: 141 + +### Event Batch Processing + +- Total: 144 +- Average Time: 150ms +- Event Batch Processing distribution: + - 500ms: 144 + - 1000ms: 144 + - 5000ms: 144 + - 10000ms: 144 + - 30000ms: 144 + - +Infms: 144 + +### Errors + +- NGF errors: 1 +- NGF container restarts: 0 +- NGINX errors: 0 +- NGINX container restarts: 0 + +### Graphs and Logs + +See [output directory](./TestScale_UpstreamServers) for more details. +The logs are attached only if there are errors. 
+ +## Test TestScale_HTTPMatches + +```text +Requests [total, rate, throughput] 30000, 1000.03, 997.85 +Duration [total, attack, wait] 29.999s, 29.999s, 423.38µs +Latencies [min, mean, 50, 90, 95, 99, max] 287.105µs, 466.923µs, 451.368µs, 519.59µs, 560.608µs, 710.54µs, 14.207ms +Bytes In [total, mean] 4831885, 161.06 +Bytes Out [total, mean] 0, 0.00 +Success [ratio] 99.78% +Status Codes [code:count] 200:29935 503:65 +Error Set: +503 Service Temporarily Unavailable +``` +```text +Requests [total, rate, throughput] 30000, 1000.06, 999.83 +Duration [total, attack, wait] 30.005s, 29.998s, 6.772ms +Latencies [min, mean, 50, 90, 95, 99, max] 431.026µs, 3.12ms, 3.405ms, 4.47ms, 5.98ms, 9.072ms, 38.733ms +Bytes In [total, mean] 4830000, 161.00 +Bytes Out [total, mean] 0, 0.00 +Success [ratio] 100.00% +Status Codes [code:count] 200:30000 +Error Set: +``` diff --git a/tests/results/scale/1.5.0/1.5.0-plus.md b/tests/results/scale/1.5.0/1.5.0-plus.md new file mode 100644 index 0000000000..69db58d87c --- /dev/null +++ b/tests/results/scale/1.5.0/1.5.0-plus.md @@ -0,0 +1,204 @@ +# Results + +## Test environment + +NGINX Plus: true + +NGINX Gateway Fabric: + +- Commit: a0126a6435dd4bd69c1a7f48ee15eecb76c68400 +- Date: 2024-11-12T20:33:03Z +- Dirty: false + +GKE Cluster: + +- Node count: 12 +- k8s version: v1.30.5-gke.1443001 +- vCPUs per node: 16 +- RAM per node: 65853972Ki +- Max pods per node: 110 +- Zone: us-west2-a +- Instance Type: n2d-standard-16 + +## Summary: + +- Performance seems consistent with the previous run, but with slightly higher reload and event batch processing times. 
+ +## Test TestScale_Listeners + +### Reloads + +- Total: 127 +- Total Errors: 0 +- Average Time: 235ms +- Reload distribution: + - 500ms: 127 + - 1000ms: 127 + - 5000ms: 127 + - 10000ms: 127 + - 30000ms: 127 + - +Infms: 127 + +### Event Batch Processing + +- Total: 385 +- Average Time: 172ms +- Event Batch Processing distribution: + - 500ms: 329 + - 1000ms: 377 + - 5000ms: 385 + - 10000ms: 385 + - 30000ms: 385 + - +Infms: 385 + +### Errors + +- NGF errors: 1 +- NGF container restarts: 0 +- NGINX errors: 0 +- NGINX container restarts: 0 + +### Graphs and Logs + +See [output directory](./TestScale_Listeners) for more details. +The logs are attached only if there are errors. + +## Test TestScale_HTTPSListeners + +### Reloads + +- Total: 128 +- Total Errors: 0 +- Average Time: 259ms +- Reload distribution: + - 500ms: 128 + - 1000ms: 128 + - 5000ms: 128 + - 10000ms: 128 + - 30000ms: 128 + - +Infms: 128 + +### Event Batch Processing + +- Total: 451 +- Average Time: 152ms +- Event Batch Processing distribution: + - 500ms: 385 + - 1000ms: 446 + - 5000ms: 451 + - 10000ms: 451 + - 30000ms: 451 + - +Infms: 451 + +### Errors + +- NGF errors: 0 +- NGF container restarts: 0 +- NGINX errors: 0 +- NGINX container restarts: 0 + +### Graphs and Logs + +See [output directory](./TestScale_HTTPSListeners) for more details. +The logs are attached only if there are errors. 
+ +## Test TestScale_HTTPRoutes + +### Reloads + +- Total: 1001 +- Total Errors: 0 +- Average Time: 1504ms +- Reload distribution: + - 500ms: 138 + - 1000ms: 323 + - 5000ms: 1001 + - 10000ms: 1001 + - 30000ms: 1001 + - +Infms: 1001 + +### Event Batch Processing + +- Total: 1008 +- Average Time: 1628ms +- Event Batch Processing distribution: + - 500ms: 119 + - 1000ms: 292 + - 5000ms: 1008 + - 10000ms: 1008 + - 30000ms: 1008 + - +Infms: 1008 + +### Errors + +- NGF errors: 0 +- NGF container restarts: 0 +- NGINX errors: 0 +- NGINX container restarts: 0 + +### Graphs and Logs + +See [output directory](./TestScale_HTTPRoutes) for more details. +The logs are attached only if there are errors. + +## Test TestScale_UpstreamServers + +### Reloads + +- Total: 2 +- Total Errors: 0 +- Average Time: 151ms +- Reload distribution: + - 500ms: 2 + - 1000ms: 2 + - 5000ms: 2 + - 10000ms: 2 + - 30000ms: 2 + - +Infms: 2 + +### Event Batch Processing + +- Total: 61 +- Average Time: 302ms +- Event Batch Processing distribution: + - 500ms: 53 + - 1000ms: 61 + - 5000ms: 61 + - 10000ms: 61 + - 30000ms: 61 + - +Infms: 61 + +### Errors + +- NGF errors: 0 +- NGF container restarts: 0 +- NGINX errors: 0 +- NGINX container restarts: 0 + +### Graphs and Logs + +See [output directory](./TestScale_UpstreamServers) for more details. +The logs are attached only if there are errors. 
+ +## Test TestScale_HTTPMatches + +```text +Requests [total, rate, throughput] 30000, 1000.02, 1000.00 +Duration [total, attack, wait] 30s, 29.999s, 778.078µs +Latencies [min, mean, 50, 90, 95, 99, max] 512.98µs, 706.046µs, 691.467µs, 791.864µs, 831.588µs, 945.759µs, 9.526ms +Bytes In [total, mean] 4830000, 161.00 +Bytes Out [total, mean] 0, 0.00 +Success [ratio] 100.00% +Status Codes [code:count] 200:30000 +Error Set: +``` +```text +Requests [total, rate, throughput] 30000, 1000.04, 1000.01 +Duration [total, attack, wait] 30s, 29.999s, 692.053µs +Latencies [min, mean, 50, 90, 95, 99, max] 589.204µs, 787.29µs, 769.146µs, 886.085µs, 929.108µs, 1.044ms, 12.187ms +Bytes In [total, mean] 4830000, 161.00 +Bytes Out [total, mean] 0, 0.00 +Success [ratio] 100.00% +Status Codes [code:count] 200:30000 +Error Set: +``` diff --git a/tests/results/scale/1.5.0/TestScale_HTTPRoutes/cpu-oss.png b/tests/results/scale/1.5.0/TestScale_HTTPRoutes/cpu-oss.png new file mode 100644 index 0000000000..b4fde3af38 Binary files /dev/null and b/tests/results/scale/1.5.0/TestScale_HTTPRoutes/cpu-oss.png differ diff --git a/tests/results/scale/1.5.0/TestScale_HTTPRoutes/cpu-plus.png b/tests/results/scale/1.5.0/TestScale_HTTPRoutes/cpu-plus.png new file mode 100644 index 0000000000..723b1f1721 Binary files /dev/null and b/tests/results/scale/1.5.0/TestScale_HTTPRoutes/cpu-plus.png differ diff --git a/tests/results/scale/1.5.0/TestScale_HTTPRoutes/memory-oss.png b/tests/results/scale/1.5.0/TestScale_HTTPRoutes/memory-oss.png new file mode 100644 index 0000000000..8e7e0f8677 Binary files /dev/null and b/tests/results/scale/1.5.0/TestScale_HTTPRoutes/memory-oss.png differ diff --git a/tests/results/scale/1.5.0/TestScale_HTTPRoutes/memory-plus.png b/tests/results/scale/1.5.0/TestScale_HTTPRoutes/memory-plus.png new file mode 100644 index 0000000000..074ddd2687 Binary files /dev/null and b/tests/results/scale/1.5.0/TestScale_HTTPRoutes/memory-plus.png differ diff --git 
a/tests/results/scale/1.5.0/TestScale_HTTPRoutes/ttr-oss.png b/tests/results/scale/1.5.0/TestScale_HTTPRoutes/ttr-oss.png new file mode 100644 index 0000000000..9f24a3cbc9 Binary files /dev/null and b/tests/results/scale/1.5.0/TestScale_HTTPRoutes/ttr-oss.png differ diff --git a/tests/results/scale/1.5.0/TestScale_HTTPRoutes/ttr-plus.png b/tests/results/scale/1.5.0/TestScale_HTTPRoutes/ttr-plus.png new file mode 100644 index 0000000000..bbd92a16a0 Binary files /dev/null and b/tests/results/scale/1.5.0/TestScale_HTTPRoutes/ttr-plus.png differ diff --git a/tests/results/scale/1.5.0/TestScale_HTTPSListeners/cpu-oss.png b/tests/results/scale/1.5.0/TestScale_HTTPSListeners/cpu-oss.png new file mode 100644 index 0000000000..8b37f04fc0 Binary files /dev/null and b/tests/results/scale/1.5.0/TestScale_HTTPSListeners/cpu-oss.png differ diff --git a/tests/results/scale/1.5.0/TestScale_HTTPSListeners/cpu-plus.png b/tests/results/scale/1.5.0/TestScale_HTTPSListeners/cpu-plus.png new file mode 100644 index 0000000000..2ab85ec2bd Binary files /dev/null and b/tests/results/scale/1.5.0/TestScale_HTTPSListeners/cpu-plus.png differ diff --git a/tests/results/scale/1.5.0/TestScale_HTTPSListeners/memory-oss.png b/tests/results/scale/1.5.0/TestScale_HTTPSListeners/memory-oss.png new file mode 100644 index 0000000000..7db5279c95 Binary files /dev/null and b/tests/results/scale/1.5.0/TestScale_HTTPSListeners/memory-oss.png differ diff --git a/tests/results/scale/1.5.0/TestScale_HTTPSListeners/memory-plus.png b/tests/results/scale/1.5.0/TestScale_HTTPSListeners/memory-plus.png new file mode 100644 index 0000000000..653a75a169 Binary files /dev/null and b/tests/results/scale/1.5.0/TestScale_HTTPSListeners/memory-plus.png differ diff --git a/tests/results/scale/1.5.0/TestScale_HTTPSListeners/ngf-oss.log b/tests/results/scale/1.5.0/TestScale_HTTPSListeners/ngf-oss.log new file mode 100644 index 0000000000..652341ddf1 --- /dev/null +++ 
b/tests/results/scale/1.5.0/TestScale_HTTPSListeners/ngf-oss.log @@ -0,0 +1,2 @@ +{"level":"debug","ts":"2024-11-11T20:55:07Z","logger":"controller-runtime.healthz","msg":"healthz check failed","checker":"readyz","error":"nginx has not yet become ready to accept traffic"} +{"level":"debug","ts":"2024-11-11T20:56:18Z","logger":"statusUpdater","msg":"Encountered error updating status","error":"Operation cannot be fulfilled on gateways.gateway.networking.k8s.io \"gateway\": the object has been modified; please apply your changes to the latest version and try again","namespace":"scale","name":"gateway","kind":"Gateway"} diff --git a/tests/results/scale/1.5.0/TestScale_HTTPSListeners/ttr-oss.png b/tests/results/scale/1.5.0/TestScale_HTTPSListeners/ttr-oss.png new file mode 100644 index 0000000000..4591a73a67 Binary files /dev/null and b/tests/results/scale/1.5.0/TestScale_HTTPSListeners/ttr-oss.png differ diff --git a/tests/results/scale/1.5.0/TestScale_HTTPSListeners/ttr-plus.png b/tests/results/scale/1.5.0/TestScale_HTTPSListeners/ttr-plus.png new file mode 100644 index 0000000000..b7a7537d53 Binary files /dev/null and b/tests/results/scale/1.5.0/TestScale_HTTPSListeners/ttr-plus.png differ diff --git a/tests/results/scale/1.5.0/TestScale_Listeners/cpu-oss.png b/tests/results/scale/1.5.0/TestScale_Listeners/cpu-oss.png new file mode 100644 index 0000000000..e02c0f6882 Binary files /dev/null and b/tests/results/scale/1.5.0/TestScale_Listeners/cpu-oss.png differ diff --git a/tests/results/scale/1.5.0/TestScale_Listeners/cpu-plus.png b/tests/results/scale/1.5.0/TestScale_Listeners/cpu-plus.png new file mode 100644 index 0000000000..eb6483837a Binary files /dev/null and b/tests/results/scale/1.5.0/TestScale_Listeners/cpu-plus.png differ diff --git a/tests/results/scale/1.5.0/TestScale_Listeners/memory-oss.png b/tests/results/scale/1.5.0/TestScale_Listeners/memory-oss.png new file mode 100644 index 0000000000..15b70227f7 Binary files /dev/null and 
b/tests/results/scale/1.5.0/TestScale_Listeners/memory-oss.png differ diff --git a/tests/results/scale/1.5.0/TestScale_Listeners/memory-plus.png b/tests/results/scale/1.5.0/TestScale_Listeners/memory-plus.png new file mode 100644 index 0000000000..4aee8be406 Binary files /dev/null and b/tests/results/scale/1.5.0/TestScale_Listeners/memory-plus.png differ diff --git a/tests/results/scale/1.5.0/TestScale_Listeners/ngf-oss.log b/tests/results/scale/1.5.0/TestScale_Listeners/ngf-oss.log new file mode 100644 index 0000000000..b464cbba13 --- /dev/null +++ b/tests/results/scale/1.5.0/TestScale_Listeners/ngf-oss.log @@ -0,0 +1,3 @@ +{"level":"debug","ts":"2024-11-11T20:52:02Z","logger":"statusUpdater","msg":"Encountered error updating status","error":"Operation cannot be fulfilled on gateways.gateway.networking.k8s.io \"gateway\": the object has been modified; please apply your changes to the latest version and try again","namespace":"scale","name":"gateway","kind":"Gateway"} +{"level":"debug","ts":"2024-11-11T20:52:20Z","logger":"statusUpdater","msg":"Encountered error updating status","error":"Operation cannot be fulfilled on gateways.gateway.networking.k8s.io \"gateway\": the object has been modified; please apply your changes to the latest version and try again","namespace":"scale","name":"gateway","kind":"Gateway"} +{"level":"debug","ts":"2024-11-11T20:52:22Z","logger":"statusUpdater","msg":"Encountered error updating status","error":"Operation cannot be fulfilled on gateways.gateway.networking.k8s.io \"gateway\": the object has been modified; please apply your changes to the latest version and try again","namespace":"scale","name":"gateway","kind":"Gateway"} diff --git a/tests/results/scale/1.5.0/TestScale_Listeners/ngf-plus.log b/tests/results/scale/1.5.0/TestScale_Listeners/ngf-plus.log new file mode 100644 index 0000000000..0e24cd1f5d --- /dev/null +++ b/tests/results/scale/1.5.0/TestScale_Listeners/ngf-plus.log @@ -0,0 +1 @@ 
+{"level":"debug","ts":"2024-11-13T06:45:33Z","logger":"statusUpdater","msg":"Encountered error updating status","error":"Operation cannot be fulfilled on gateways.gateway.networking.k8s.io \"gateway\": the object has been modified; please apply your changes to the latest version and try again","namespace":"scale","name":"gateway","kind":"Gateway"} diff --git a/tests/results/scale/1.5.0/TestScale_Listeners/ttr-oss.png b/tests/results/scale/1.5.0/TestScale_Listeners/ttr-oss.png new file mode 100644 index 0000000000..c198c4be75 Binary files /dev/null and b/tests/results/scale/1.5.0/TestScale_Listeners/ttr-oss.png differ diff --git a/tests/results/scale/1.5.0/TestScale_Listeners/ttr-plus.png b/tests/results/scale/1.5.0/TestScale_Listeners/ttr-plus.png new file mode 100644 index 0000000000..c003466cf6 Binary files /dev/null and b/tests/results/scale/1.5.0/TestScale_Listeners/ttr-plus.png differ diff --git a/tests/results/scale/1.5.0/TestScale_UpstreamServers/cpu-oss.png b/tests/results/scale/1.5.0/TestScale_UpstreamServers/cpu-oss.png new file mode 100644 index 0000000000..9687a0518d Binary files /dev/null and b/tests/results/scale/1.5.0/TestScale_UpstreamServers/cpu-oss.png differ diff --git a/tests/results/scale/1.5.0/TestScale_UpstreamServers/cpu-plus.png b/tests/results/scale/1.5.0/TestScale_UpstreamServers/cpu-plus.png new file mode 100644 index 0000000000..699bf26517 Binary files /dev/null and b/tests/results/scale/1.5.0/TestScale_UpstreamServers/cpu-plus.png differ diff --git a/tests/results/scale/1.5.0/TestScale_UpstreamServers/memory-oss.png b/tests/results/scale/1.5.0/TestScale_UpstreamServers/memory-oss.png new file mode 100644 index 0000000000..468c2c3164 Binary files /dev/null and b/tests/results/scale/1.5.0/TestScale_UpstreamServers/memory-oss.png differ diff --git a/tests/results/scale/1.5.0/TestScale_UpstreamServers/memory-plus.png b/tests/results/scale/1.5.0/TestScale_UpstreamServers/memory-plus.png new file mode 100644 index 0000000000..fd296477d8 
Binary files /dev/null and b/tests/results/scale/1.5.0/TestScale_UpstreamServers/memory-plus.png differ diff --git a/tests/results/scale/1.5.0/TestScale_UpstreamServers/ngf-oss.log b/tests/results/scale/1.5.0/TestScale_UpstreamServers/ngf-oss.log new file mode 100644 index 0000000000..3e330bff38 --- /dev/null +++ b/tests/results/scale/1.5.0/TestScale_UpstreamServers/ngf-oss.log @@ -0,0 +1 @@ +{"level":"info","ts":"2024-11-11T21:47:21Z","msg":"k8s.io/client-go@v0.31.2/tools/cache/reflector.go:243: watch of *v1.EndpointSlice ended with: an error on the server (\"unable to decode an event from the watch stream: got short buffer with n=0, base=4092, cap=81920\") has prevented the request from succeeding"} diff --git a/tests/results/zero-downtime-scale/1.5.0/1.5.0-oss.md b/tests/results/zero-downtime-scale/1.5.0/1.5.0-oss.md new file mode 100644 index 0000000000..b0c343ca8d --- /dev/null +++ b/tests/results/zero-downtime-scale/1.5.0/1.5.0-oss.md @@ -0,0 +1,286 @@ +# Results + +## Test environment + +NGINX Plus: false + +NGINX Gateway Fabric: + +- Commit: 8624530af3c518afd8f7013566a102e8b3497b76 +- Date: 2024-11-11T18:50:09Z +- Dirty: false + +GKE Cluster: + +- Node count: 12 +- k8s version: v1.30.5-gke.1443001 +- vCPUs per node: 16 +- RAM per node: 65853972Ki +- Max pods per node: 110 +- Zone: us-west2-a +- Instance Type: n2d-standard-16 + +## Summary: + +- Performance seems consistent with the previous run. +- No errors seen. 
+ +## One NGF Pod runs per node Test Results + +### Scale Up Gradually + +#### Test: Send https /tea traffic + +```text +Requests [total, rate, throughput] 30000, 100.00, 100.00 +Duration [total, attack, wait] 5m0s, 5m0s, 736.494µs +Latencies [min, mean, 50, 90, 95, 99, max] 436.247µs, 824.285µs, 814.086µs, 930.637µs, 978.866µs, 1.304ms, 33.017ms +Bytes In [total, mean] 4680000, 156.00 +Bytes Out [total, mean] 0, 0.00 +Success [ratio] 100.00% +Status Codes [code:count] 200:30000 +Error Set: +``` + +![gradual-scale-up-affinity-https-oss.png](gradual-scale-up-affinity-https-oss.png) + +#### Test: Send http /coffee traffic + +```text +Requests [total, rate, throughput] 30000, 100.00, 100.00 +Duration [total, attack, wait] 5m0s, 5m0s, 819.08µs +Latencies [min, mean, 50, 90, 95, 99, max] 382.043µs, 791.086µs, 788.636µs, 901.854µs, 946.77µs, 1.25ms, 12.511ms +Bytes In [total, mean] 4860000, 162.00 +Bytes Out [total, mean] 0, 0.00 +Success [ratio] 100.00% +Status Codes [code:count] 200:30000 +Error Set: +``` + +![gradual-scale-up-affinity-http-oss.png](gradual-scale-up-affinity-http-oss.png) + +### Scale Down Gradually + +#### Test: Send https /tea traffic + +```text +Requests [total, rate, throughput] 48000, 100.00, 100.00 +Duration [total, attack, wait] 8m0s, 8m0s, 1.086ms +Latencies [min, mean, 50, 90, 95, 99, max] 417.245µs, 835.795µs, 826.937µs, 945.777µs, 993.897µs, 1.308ms, 16.636ms +Bytes In [total, mean] 7488000, 156.00 +Bytes Out [total, mean] 0, 0.00 +Success [ratio] 100.00% +Status Codes [code:count] 200:48000 +Error Set: +``` + +![gradual-scale-down-affinity-https-oss.png](gradual-scale-down-affinity-https-oss.png) + +#### Test: Send http /coffee traffic + +```text +Requests [total, rate, throughput] 48000, 100.00, 100.00 +Duration [total, attack, wait] 8m0s, 8m0s, 894.604µs +Latencies [min, mean, 50, 90, 95, 99, max] 407.723µs, 799.6µs, 797.645µs, 912.557µs, 956.655µs, 1.223ms, 8.118ms +Bytes In [total, mean] 7776000, 162.00 +Bytes Out [total, mean] 0, 0.00 
+Success [ratio] 100.00% +Status Codes [code:count] 200:48000 +Error Set: +``` + +![gradual-scale-down-affinity-http-oss.png](gradual-scale-down-affinity-http-oss.png) + +### Scale Up Abruptly + +#### Test: Send https /tea traffic + +```text +Requests [total, rate, throughput] 12000, 100.01, 100.01 +Duration [total, attack, wait] 2m0s, 2m0s, 766.276µs +Latencies [min, mean, 50, 90, 95, 99, max] 435.464µs, 833.151µs, 822.36µs, 939.638µs, 986.461µs, 1.282ms, 15.865ms +Bytes In [total, mean] 1872000, 156.00 +Bytes Out [total, mean] 0, 0.00 +Success [ratio] 100.00% +Status Codes [code:count] 200:12000 +Error Set: +``` + +![abrupt-scale-up-affinity-https-oss.png](abrupt-scale-up-affinity-https-oss.png) + +#### Test: Send http /coffee traffic + +```text +Requests [total, rate, throughput] 12000, 100.01, 100.01 +Duration [total, attack, wait] 2m0s, 2m0s, 858.794µs +Latencies [min, mean, 50, 90, 95, 99, max] 400.862µs, 804.877µs, 804.151µs, 920.27µs, 962.511µs, 1.133ms, 10.43ms +Bytes In [total, mean] 1944000, 162.00 +Bytes Out [total, mean] 0, 0.00 +Success [ratio] 100.00% +Status Codes [code:count] 200:12000 +Error Set: +``` + +![abrupt-scale-up-affinity-http-oss.png](abrupt-scale-up-affinity-http-oss.png) + +### Scale Down Abruptly + +#### Test: Send http /coffee traffic + +```text +Requests [total, rate, throughput] 12000, 100.01, 100.01 +Duration [total, attack, wait] 2m0s, 2m0s, 978.797µs +Latencies [min, mean, 50, 90, 95, 99, max] 409.596µs, 848.432µs, 844.079µs, 966.023µs, 1.014ms, 1.17ms, 6.333ms +Bytes In [total, mean] 1944000, 162.00 +Bytes Out [total, mean] 0, 0.00 +Success [ratio] 100.00% +Status Codes [code:count] 200:12000 +Error Set: +``` + +![abrupt-scale-down-affinity-http-oss.png](abrupt-scale-down-affinity-http-oss.png) + +#### Test: Send https /tea traffic + +```text +Requests [total, rate, throughput] 12000, 100.01, 100.01 +Duration [total, attack, wait] 2m0s, 2m0s, 839.728µs +Latencies [min, mean, 50, 90, 95, 99, max] 444.844µs, 871.674µs, 863.095µs, 
986.094µs, 1.034ms, 1.184ms, 8.749ms +Bytes In [total, mean] 1872000, 156.00 +Bytes Out [total, mean] 0, 0.00 +Success [ratio] 100.00% +Status Codes [code:count] 200:12000 +Error Set: +``` + +![abrupt-scale-down-affinity-https-oss.png](abrupt-scale-down-affinity-https-oss.png) + +## Multiple NGF Pods run per node Test Results + +### Scale Up Gradually + +#### Test: Send https /tea traffic + +```text +Requests [total, rate, throughput] 30000, 100.00, 100.00 +Duration [total, attack, wait] 5m0s, 5m0s, 766.439µs +Latencies [min, mean, 50, 90, 95, 99, max] 407.721µs, 838.975µs, 822.181µs, 946.504µs, 1ms, 1.381ms, 18.536ms +Bytes In [total, mean] 4680000, 156.00 +Bytes Out [total, mean] 0, 0.00 +Success [ratio] 100.00% +Status Codes [code:count] 200:30000 +Error Set: +``` + +![gradual-scale-up-https-oss.png](gradual-scale-up-https-oss.png) + +#### Test: Send http /coffee traffic + +```text +Requests [total, rate, throughput] 30000, 100.00, 100.00 +Duration [total, attack, wait] 5m0s, 5m0s, 734.956µs +Latencies [min, mean, 50, 90, 95, 99, max] 399.283µs, 816.981µs, 803.143µs, 926.274µs, 982.671µs, 1.355ms, 22.58ms +Bytes In [total, mean] 4860000, 162.00 +Bytes Out [total, mean] 0, 0.00 +Success [ratio] 100.00% +Status Codes [code:count] 200:30000 +Error Set: +``` + +![gradual-scale-up-http-oss.png](gradual-scale-up-http-oss.png) + +### Scale Down Gradually + +#### Test: Send https /tea traffic + +```text +Requests [total, rate, throughput] 96000, 100.00, 100.00 +Duration [total, attack, wait] 16m0s, 16m0s, 777.329µs +Latencies [min, mean, 50, 90, 95, 99, max] 413.572µs, 839.155µs, 825.647µs, 964.872µs, 1.02ms, 1.321ms, 20.94ms +Bytes In [total, mean] 14976000, 156.00 +Bytes Out [total, mean] 0, 0.00 +Success [ratio] 100.00% +Status Codes [code:count] 200:96000 +Error Set: +``` + +![gradual-scale-down-https-oss.png](gradual-scale-down-https-oss.png) + +#### Test: Send http /coffee traffic + +```text +Requests [total, rate, throughput] 96000, 100.00, 100.00 +Duration 
[total, attack, wait] 16m0s, 16m0s, 1.117ms +Latencies [min, mean, 50, 90, 95, 99, max] 395.98µs, 813.203µs, 804.792µs, 938.257µs, 989.728µs, 1.298ms, 23.009ms +Bytes In [total, mean] 15552000, 162.00 +Bytes Out [total, mean] 0, 0.00 +Success [ratio] 100.00% +Status Codes [code:count] 200:96000 +Error Set: +``` + +![gradual-scale-down-http-oss.png](gradual-scale-down-http-oss.png) + +### Scale Up Abruptly + +#### Test: Send https /tea traffic + +```text +Requests [total, rate, throughput] 12000, 100.01, 100.01 +Duration [total, attack, wait] 2m0s, 2m0s, 930.383µs +Latencies [min, mean, 50, 90, 95, 99, max] 404.832µs, 828.908µs, 814.937µs, 946.579µs, 1.001ms, 1.243ms, 23.067ms +Bytes In [total, mean] 1872000, 156.00 +Bytes Out [total, mean] 0, 0.00 +Success [ratio] 100.00% +Status Codes [code:count] 200:12000 +Error Set: +``` + +![abrupt-scale-up-https-oss.png](abrupt-scale-up-https-oss.png) + +#### Test: Send http /coffee traffic + +```text +Requests [total, rate, throughput] 12000, 100.01, 100.01 +Duration [total, attack, wait] 2m0s, 2m0s, 884.074µs +Latencies [min, mean, 50, 90, 95, 99, max] 424.027µs, 809.266µs, 798.651µs, 925.351µs, 973.343µs, 1.202ms, 19.003ms +Bytes In [total, mean] 1944000, 162.00 +Bytes Out [total, mean] 0, 0.00 +Success [ratio] 100.00% +Status Codes [code:count] 200:12000 +Error Set: +``` + +![abrupt-scale-up-http-oss.png](abrupt-scale-up-http-oss.png) + +### Scale Down Abruptly + +#### Test: Send http /coffee traffic + +```text +Requests [total, rate, throughput] 12000, 100.01, 100.01 +Duration [total, attack, wait] 2m0s, 2m0s, 948.393µs +Latencies [min, mean, 50, 90, 95, 99, max] 401.641µs, 807.419µs, 806.328µs, 942.415µs, 987.82µs, 1.202ms, 8.503ms +Bytes In [total, mean] 1944000, 162.00 +Bytes Out [total, mean] 0, 0.00 +Success [ratio] 100.00% +Status Codes [code:count] 200:12000 +Error Set: +``` + +![abrupt-scale-down-http-oss.png](abrupt-scale-down-http-oss.png) + +#### Test: Send https /tea traffic + +```text +Requests [total, rate, 
throughput] 12000, 100.01, 100.01 +Duration [total, attack, wait] 2m0s, 2m0s, 892.559µs +Latencies [min, mean, 50, 90, 95, 99, max] 444.885µs, 834.074µs, 829.099µs, 964.511µs, 1.014ms, 1.199ms, 16.401ms +Bytes In [total, mean] 1872000, 156.00 +Bytes Out [total, mean] 0, 0.00 +Success [ratio] 100.00% +Status Codes [code:count] 200:12000 +Error Set: +``` + +![abrupt-scale-down-https-oss.png](abrupt-scale-down-https-oss.png) diff --git a/tests/results/zero-downtime-scale/1.5.0/1.5.0-plus.md b/tests/results/zero-downtime-scale/1.5.0/1.5.0-plus.md new file mode 100644 index 0000000000..d627e44b57 --- /dev/null +++ b/tests/results/zero-downtime-scale/1.5.0/1.5.0-plus.md @@ -0,0 +1,286 @@ +# Results + +## Test environment + +NGINX Plus: true + +NGINX Gateway Fabric: + +- Commit: a0126a6435dd4bd69c1a7f48ee15eecb76c68400 +- Date: 2024-11-12T20:33:03Z +- Dirty: false + +GKE Cluster: + +- Node count: 12 +- k8s version: v1.30.5-gke.1443001 +- vCPUs per node: 16 +- RAM per node: 65853972Ki +- Max pods per node: 110 +- Zone: us-west2-a +- Instance Type: n2d-standard-16 + +## Summary: + +- Performance seems consistent with the previous run. +- No errors seen. 
+ +## One NGF Pod runs per node Test Results + +### Scale Up Gradually + +#### Test: Send http /coffee traffic + +```text +Requests [total, rate, throughput] 30000, 100.00, 100.00 +Duration [total, attack, wait] 5m0s, 5m0s, 927.439µs +Latencies [min, mean, 50, 90, 95, 99, max] 420.181µs, 864.343µs, 861.694µs, 992.418µs, 1.041ms, 1.346ms, 16.498ms +Bytes In [total, mean] 4802953, 160.10 +Bytes Out [total, mean] 0, 0.00 +Success [ratio] 100.00% +Status Codes [code:count] 200:30000 +Error Set: +``` + +![gradual-scale-up-affinity-http-plus.png](gradual-scale-up-affinity-http-plus.png) + +#### Test: Send https /tea traffic + +```text +Requests [total, rate, throughput] 30000, 100.00, 100.00 +Duration [total, attack, wait] 5m0s, 5m0s, 942.062µs +Latencies [min, mean, 50, 90, 95, 99, max] 451.26µs, 895.9µs, 888.141µs, 1.027ms, 1.081ms, 1.382ms, 15.963ms +Bytes In [total, mean] 4622983, 154.10 +Bytes Out [total, mean] 0, 0.00 +Success [ratio] 100.00% +Status Codes [code:count] 200:30000 +Error Set: +``` + +![gradual-scale-up-affinity-https-plus.png](gradual-scale-up-affinity-https-plus.png) + +### Scale Down Gradually + +#### Test: Send https /tea traffic + +```text +Requests [total, rate, throughput] 48000, 100.00, 100.00 +Duration [total, attack, wait] 8m0s, 8m0s, 888.599µs +Latencies [min, mean, 50, 90, 95, 99, max] 434.571µs, 884.752µs, 880.769µs, 1.012ms, 1.059ms, 1.286ms, 36.001ms +Bytes In [total, mean] 7396950, 154.10 +Bytes Out [total, mean] 0, 0.00 +Success [ratio] 100.00% +Status Codes [code:count] 200:48000 +Error Set: +``` + +![gradual-scale-down-affinity-https-plus.png](gradual-scale-down-affinity-https-plus.png) + +#### Test: Send http /coffee traffic + +```text +Requests [total, rate, throughput] 48000, 100.00, 100.00 +Duration [total, attack, wait] 8m0s, 8m0s, 904.607µs +Latencies [min, mean, 50, 90, 95, 99, max] 422.524µs, 858.888µs, 859.399µs, 983.835µs, 1.028ms, 1.232ms, 15.636ms +Bytes In [total, mean] 7684939, 160.10 +Bytes Out [total, mean] 0, 0.00 
+Success [ratio] 100.00% +Status Codes [code:count] 200:48000 +Error Set: +``` + +![gradual-scale-down-affinity-http-plus.png](gradual-scale-down-affinity-http-plus.png) + +### Scale Up Abruptly + +#### Test: Send http /coffee traffic + +```text +Requests [total, rate, throughput] 12000, 100.01, 100.01 +Duration [total, attack, wait] 2m0s, 2m0s, 961.958µs +Latencies [min, mean, 50, 90, 95, 99, max] 428.25µs, 844.074µs, 844.033µs, 968.817µs, 1.015ms, 1.158ms, 13.262ms +Bytes In [total, mean] 1921172, 160.10 +Bytes Out [total, mean] 0, 0.00 +Success [ratio] 100.00% +Status Codes [code:count] 200:12000 +Error Set: +``` + +![abrupt-scale-up-affinity-http-plus.png](abrupt-scale-up-affinity-http-plus.png) + +#### Test: Send https /tea traffic + +```text +Requests [total, rate, throughput] 12000, 100.01, 100.01 +Duration [total, attack, wait] 2m0s, 2m0s, 932.432µs +Latencies [min, mean, 50, 90, 95, 99, max] 459.621µs, 880.442µs, 875.374µs, 1.007ms, 1.052ms, 1.245ms, 14.154ms +Bytes In [total, mean] 1849183, 154.10 +Bytes Out [total, mean] 0, 0.00 +Success [ratio] 100.00% +Status Codes [code:count] 200:12000 +Error Set: +``` + +![abrupt-scale-up-affinity-https-plus.png](abrupt-scale-up-affinity-https-plus.png) + +### Scale Down Abruptly + +#### Test: Send https /tea traffic + +```text +Requests [total, rate, throughput] 12000, 100.01, 100.01 +Duration [total, attack, wait] 2m0s, 2m0s, 1.119ms +Latencies [min, mean, 50, 90, 95, 99, max] 456.468µs, 889.426µs, 886.269µs, 1.02ms, 1.066ms, 1.196ms, 12.822ms +Bytes In [total, mean] 1849242, 154.10 +Bytes Out [total, mean] 0, 0.00 +Success [ratio] 100.00% +Status Codes [code:count] 200:12000 +Error Set: +``` + +![abrupt-scale-down-affinity-https-plus.png](abrupt-scale-down-affinity-https-plus.png) + +#### Test: Send http /coffee traffic + +```text +Requests [total, rate, throughput] 12000, 100.01, 100.01 +Duration [total, attack, wait] 2m0s, 2m0s, 1.166ms +Latencies [min, mean, 50, 90, 95, 99, max] 430.594µs, 860.363µs, 
861.391µs, 987.362µs, 1.031ms, 1.172ms, 36.835ms +Bytes In [total, mean] 1921215, 160.10 +Bytes Out [total, mean] 0, 0.00 +Success [ratio] 100.00% +Status Codes [code:count] 200:12000 +Error Set: +``` + +![abrupt-scale-down-affinity-http-plus.png](abrupt-scale-down-affinity-http-plus.png) + +## Multiple NGF Pods run per node Test Results + +### Scale Up Gradually + +#### Test: Send http /coffee traffic + +```text +Requests [total, rate, throughput] 30000, 100.00, 100.00 +Duration [total, attack, wait] 5m0s, 5m0s, 887.923µs +Latencies [min, mean, 50, 90, 95, 99, max] 422.524µs, 863.032µs, 854.987µs, 981.216µs, 1.035ms, 1.42ms, 14.641ms +Bytes In [total, mean] 4803052, 160.10 +Bytes Out [total, mean] 0, 0.00 +Success [ratio] 100.00% +Status Codes [code:count] 200:30000 +Error Set: +``` + +![gradual-scale-up-http-plus.png](gradual-scale-up-http-plus.png) + +#### Test: Send https /tea traffic + +```text +Requests [total, rate, throughput] 30000, 100.00, 100.00 +Duration [total, attack, wait] 5m0s, 5m0s, 884.104µs +Latencies [min, mean, 50, 90, 95, 99, max] 434.937µs, 893.783µs, 880.034µs, 1.017ms, 1.075ms, 1.463ms, 17.21ms +Bytes In [total, mean] 4623014, 154.10 +Bytes Out [total, mean] 0, 0.00 +Success [ratio] 100.00% +Status Codes [code:count] 200:30000 +Error Set: +``` + +![gradual-scale-up-https-plus.png](gradual-scale-up-https-plus.png) + +### Scale Down Gradually + +#### Test: Send http /coffee traffic + +```text +Requests [total, rate, throughput] 96000, 100.00, 100.00 +Duration [total, attack, wait] 16m0s, 16m0s, 553.229µs +Latencies [min, mean, 50, 90, 95, 99, max] 399.301µs, 864.16µs, 863.475µs, 986.889µs, 1.031ms, 1.295ms, 14.378ms +Bytes In [total, mean] 15369570, 160.10 +Bytes Out [total, mean] 0, 0.00 +Success [ratio] 100.00% +Status Codes [code:count] 200:96000 +Error Set: +``` + +![gradual-scale-down-http-plus.png](gradual-scale-down-http-plus.png) + +#### Test: Send https /tea traffic + +```text +Requests [total, rate, throughput] 96000, 100.00, 100.00 
+Duration [total, attack, wait] 16m0s, 16m0s, 862.416µs +Latencies [min, mean, 50, 90, 95, 99, max] 445.97µs, 891.672µs, 885.493µs, 1.016ms, 1.066ms, 1.334ms, 16.925ms +Bytes In [total, mean] 14793548, 154.10 +Bytes Out [total, mean] 0, 0.00 +Success [ratio] 100.00% +Status Codes [code:count] 200:96000 +Error Set: +``` + +![gradual-scale-down-https-plus.png](gradual-scale-down-https-plus.png) + +### Scale Up Abruptly + +#### Test: Send http /coffee traffic + +```text +Requests [total, rate, throughput] 12000, 100.01, 100.01 +Duration [total, attack, wait] 2m0s, 2m0s, 777.73µs +Latencies [min, mean, 50, 90, 95, 99, max] 450.354µs, 861.791µs, 864.699µs, 986.387µs, 1.028ms, 1.219ms, 5.972ms +Bytes In [total, mean] 1921249, 160.10 +Bytes Out [total, mean] 0, 0.00 +Success [ratio] 100.00% +Status Codes [code:count] 200:12000 +Error Set: +``` + +![abrupt-scale-up-http-plus.png](abrupt-scale-up-http-plus.png) + +#### Test: Send https /tea traffic + +```text +Requests [total, rate, throughput] 12000, 100.01, 100.01 +Duration [total, attack, wait] 2m0s, 2m0s, 993.42µs +Latencies [min, mean, 50, 90, 95, 99, max] 475.984µs, 900.595µs, 894.769µs, 1.027ms, 1.079ms, 1.274ms, 15.436ms +Bytes In [total, mean] 1849201, 154.10 +Bytes Out [total, mean] 0, 0.00 +Success [ratio] 100.00% +Status Codes [code:count] 200:12000 +Error Set: +``` + +![abrupt-scale-up-https-plus.png](abrupt-scale-up-https-plus.png) + +### Scale Down Abruptly + +#### Test: Send http /coffee traffic + +```text +Requests [total, rate, throughput] 12000, 100.01, 100.01 +Duration [total, attack, wait] 2m0s, 2m0s, 906.108µs +Latencies [min, mean, 50, 90, 95, 99, max] 450.262µs, 908.6µs, 904.442µs, 1.044ms, 1.095ms, 1.236ms, 36.213ms +Bytes In [total, mean] 1921185, 160.10 +Bytes Out [total, mean] 0, 0.00 +Success [ratio] 100.00% +Status Codes [code:count] 200:12000 +Error Set: +``` + +![abrupt-scale-down-http-plus.png](abrupt-scale-down-http-plus.png) + +#### Test: Send https /tea traffic + +```text +Requests 
[total, rate, throughput] 12000, 100.01, 100.01
+Duration [total, attack, wait] 2m0s, 2m0s, 1.093ms
+Latencies [min, mean, 50, 90, 95, 99, max] 486.039µs, 929.675µs, 925.652µs, 1.07ms, 1.119ms, 1.262ms, 8.774ms
+Bytes In [total, mean] 1849230, 154.10
+Bytes Out [total, mean] 0, 0.00
+Success [ratio] 100.00%
+Status Codes [code:count] 200:12000
+Error Set:
+```
+
+![abrupt-scale-down-https-plus.png](abrupt-scale-down-https-plus.png)
diff --git a/tests/results/zero-downtime-scale/1.5.0/abrupt-scale-down-affinity-http-oss.png b/tests/results/zero-downtime-scale/1.5.0/abrupt-scale-down-affinity-http-oss.png
new file mode 100644
index 0000000000..108a20c657
Binary files /dev/null and b/tests/results/zero-downtime-scale/1.5.0/abrupt-scale-down-affinity-http-oss.png differ
diff --git a/tests/results/zero-downtime-scale/1.5.0/abrupt-scale-down-affinity-http-plus.png b/tests/results/zero-downtime-scale/1.5.0/abrupt-scale-down-affinity-http-plus.png
new file mode 100644
index 0000000000..c8ab65f6d2
Binary files /dev/null and b/tests/results/zero-downtime-scale/1.5.0/abrupt-scale-down-affinity-http-plus.png differ
diff --git a/tests/results/zero-downtime-scale/1.5.0/abrupt-scale-down-affinity-https-oss.png b/tests/results/zero-downtime-scale/1.5.0/abrupt-scale-down-affinity-https-oss.png
new file mode 100644
index 0000000000..108a20c657
Binary files /dev/null and b/tests/results/zero-downtime-scale/1.5.0/abrupt-scale-down-affinity-https-oss.png differ
diff --git a/tests/results/zero-downtime-scale/1.5.0/abrupt-scale-down-affinity-https-plus.png b/tests/results/zero-downtime-scale/1.5.0/abrupt-scale-down-affinity-https-plus.png
new file mode 100644
index 0000000000..c8ab65f6d2
Binary files /dev/null and b/tests/results/zero-downtime-scale/1.5.0/abrupt-scale-down-affinity-https-plus.png differ
diff --git a/tests/results/zero-downtime-scale/1.5.0/abrupt-scale-down-http-oss.png b/tests/results/zero-downtime-scale/1.5.0/abrupt-scale-down-http-oss.png
new file mode 100644
index 0000000000..e0b864e24f
Binary files /dev/null and b/tests/results/zero-downtime-scale/1.5.0/abrupt-scale-down-http-oss.png differ
diff --git a/tests/results/zero-downtime-scale/1.5.0/abrupt-scale-down-http-plus.png b/tests/results/zero-downtime-scale/1.5.0/abrupt-scale-down-http-plus.png
new file mode 100644
index 0000000000..33fe46081e
Binary files /dev/null and b/tests/results/zero-downtime-scale/1.5.0/abrupt-scale-down-http-plus.png differ
diff --git a/tests/results/zero-downtime-scale/1.5.0/abrupt-scale-down-https-oss.png b/tests/results/zero-downtime-scale/1.5.0/abrupt-scale-down-https-oss.png
new file mode 100644
index 0000000000..e0b864e24f
Binary files /dev/null and b/tests/results/zero-downtime-scale/1.5.0/abrupt-scale-down-https-oss.png differ
diff --git a/tests/results/zero-downtime-scale/1.5.0/abrupt-scale-down-https-plus.png b/tests/results/zero-downtime-scale/1.5.0/abrupt-scale-down-https-plus.png
new file mode 100644
index 0000000000..33fe46081e
Binary files /dev/null and b/tests/results/zero-downtime-scale/1.5.0/abrupt-scale-down-https-plus.png differ
diff --git a/tests/results/zero-downtime-scale/1.5.0/abrupt-scale-up-affinity-http-oss.png b/tests/results/zero-downtime-scale/1.5.0/abrupt-scale-up-affinity-http-oss.png
new file mode 100644
index 0000000000..cc717c2cf2
Binary files /dev/null and b/tests/results/zero-downtime-scale/1.5.0/abrupt-scale-up-affinity-http-oss.png differ
diff --git a/tests/results/zero-downtime-scale/1.5.0/abrupt-scale-up-affinity-http-plus.png b/tests/results/zero-downtime-scale/1.5.0/abrupt-scale-up-affinity-http-plus.png
new file mode 100644
index 0000000000..5b51467285
Binary files /dev/null and b/tests/results/zero-downtime-scale/1.5.0/abrupt-scale-up-affinity-http-plus.png differ
diff --git a/tests/results/zero-downtime-scale/1.5.0/abrupt-scale-up-affinity-https-oss.png b/tests/results/zero-downtime-scale/1.5.0/abrupt-scale-up-affinity-https-oss.png
new file mode 100644
index 0000000000..cc717c2cf2
Binary files /dev/null and b/tests/results/zero-downtime-scale/1.5.0/abrupt-scale-up-affinity-https-oss.png differ
diff --git a/tests/results/zero-downtime-scale/1.5.0/abrupt-scale-up-affinity-https-plus.png b/tests/results/zero-downtime-scale/1.5.0/abrupt-scale-up-affinity-https-plus.png
new file mode 100644
index 0000000000..5b51467285
Binary files /dev/null and b/tests/results/zero-downtime-scale/1.5.0/abrupt-scale-up-affinity-https-plus.png differ
diff --git a/tests/results/zero-downtime-scale/1.5.0/abrupt-scale-up-http-oss.png b/tests/results/zero-downtime-scale/1.5.0/abrupt-scale-up-http-oss.png
new file mode 100644
index 0000000000..82475c288c
Binary files /dev/null and b/tests/results/zero-downtime-scale/1.5.0/abrupt-scale-up-http-oss.png differ
diff --git a/tests/results/zero-downtime-scale/1.5.0/abrupt-scale-up-http-plus.png b/tests/results/zero-downtime-scale/1.5.0/abrupt-scale-up-http-plus.png
new file mode 100644
index 0000000000..3306bb0d93
Binary files /dev/null and b/tests/results/zero-downtime-scale/1.5.0/abrupt-scale-up-http-plus.png differ
diff --git a/tests/results/zero-downtime-scale/1.5.0/abrupt-scale-up-https-oss.png b/tests/results/zero-downtime-scale/1.5.0/abrupt-scale-up-https-oss.png
new file mode 100644
index 0000000000..82475c288c
Binary files /dev/null and b/tests/results/zero-downtime-scale/1.5.0/abrupt-scale-up-https-oss.png differ
diff --git a/tests/results/zero-downtime-scale/1.5.0/abrupt-scale-up-https-plus.png b/tests/results/zero-downtime-scale/1.5.0/abrupt-scale-up-https-plus.png
new file mode 100644
index 0000000000..3306bb0d93
Binary files /dev/null and b/tests/results/zero-downtime-scale/1.5.0/abrupt-scale-up-https-plus.png differ
diff --git a/tests/results/zero-downtime-scale/1.5.0/gradual-scale-down-affinity-http-oss.png b/tests/results/zero-downtime-scale/1.5.0/gradual-scale-down-affinity-http-oss.png
new file mode 100644
index 0000000000..9ab0b494df
Binary files /dev/null and b/tests/results/zero-downtime-scale/1.5.0/gradual-scale-down-affinity-http-oss.png differ
diff --git a/tests/results/zero-downtime-scale/1.5.0/gradual-scale-down-affinity-http-plus.png b/tests/results/zero-downtime-scale/1.5.0/gradual-scale-down-affinity-http-plus.png
new file mode 100644
index 0000000000..ef75b9a6c7
Binary files /dev/null and b/tests/results/zero-downtime-scale/1.5.0/gradual-scale-down-affinity-http-plus.png differ
diff --git a/tests/results/zero-downtime-scale/1.5.0/gradual-scale-down-affinity-https-oss.png b/tests/results/zero-downtime-scale/1.5.0/gradual-scale-down-affinity-https-oss.png
new file mode 100644
index 0000000000..9ab0b494df
Binary files /dev/null and b/tests/results/zero-downtime-scale/1.5.0/gradual-scale-down-affinity-https-oss.png differ
diff --git a/tests/results/zero-downtime-scale/1.5.0/gradual-scale-down-affinity-https-plus.png b/tests/results/zero-downtime-scale/1.5.0/gradual-scale-down-affinity-https-plus.png
new file mode 100644
index 0000000000..ef75b9a6c7
Binary files /dev/null and b/tests/results/zero-downtime-scale/1.5.0/gradual-scale-down-affinity-https-plus.png differ
diff --git a/tests/results/zero-downtime-scale/1.5.0/gradual-scale-down-http-oss.png b/tests/results/zero-downtime-scale/1.5.0/gradual-scale-down-http-oss.png
new file mode 100644
index 0000000000..9814e8e272
Binary files /dev/null and b/tests/results/zero-downtime-scale/1.5.0/gradual-scale-down-http-oss.png differ
diff --git a/tests/results/zero-downtime-scale/1.5.0/gradual-scale-down-http-plus.png b/tests/results/zero-downtime-scale/1.5.0/gradual-scale-down-http-plus.png
new file mode 100644
index 0000000000..29fde9b899
Binary files /dev/null and b/tests/results/zero-downtime-scale/1.5.0/gradual-scale-down-http-plus.png differ
diff --git a/tests/results/zero-downtime-scale/1.5.0/gradual-scale-down-https-oss.png b/tests/results/zero-downtime-scale/1.5.0/gradual-scale-down-https-oss.png
new file mode 100644
index 0000000000..9814e8e272
Binary files /dev/null and b/tests/results/zero-downtime-scale/1.5.0/gradual-scale-down-https-oss.png differ
diff --git a/tests/results/zero-downtime-scale/1.5.0/gradual-scale-down-https-plus.png b/tests/results/zero-downtime-scale/1.5.0/gradual-scale-down-https-plus.png
new file mode 100644
index 0000000000..29fde9b899
Binary files /dev/null and b/tests/results/zero-downtime-scale/1.5.0/gradual-scale-down-https-plus.png differ
diff --git a/tests/results/zero-downtime-scale/1.5.0/gradual-scale-up-affinity-http-oss.png b/tests/results/zero-downtime-scale/1.5.0/gradual-scale-up-affinity-http-oss.png
new file mode 100644
index 0000000000..0a9fb192ca
Binary files /dev/null and b/tests/results/zero-downtime-scale/1.5.0/gradual-scale-up-affinity-http-oss.png differ
diff --git a/tests/results/zero-downtime-scale/1.5.0/gradual-scale-up-affinity-http-plus.png b/tests/results/zero-downtime-scale/1.5.0/gradual-scale-up-affinity-http-plus.png
new file mode 100644
index 0000000000..dfb5c96b85
Binary files /dev/null and b/tests/results/zero-downtime-scale/1.5.0/gradual-scale-up-affinity-http-plus.png differ
diff --git a/tests/results/zero-downtime-scale/1.5.0/gradual-scale-up-affinity-https-oss.png b/tests/results/zero-downtime-scale/1.5.0/gradual-scale-up-affinity-https-oss.png
new file mode 100644
index 0000000000..0a9fb192ca
Binary files /dev/null and b/tests/results/zero-downtime-scale/1.5.0/gradual-scale-up-affinity-https-oss.png differ
diff --git a/tests/results/zero-downtime-scale/1.5.0/gradual-scale-up-affinity-https-plus.png b/tests/results/zero-downtime-scale/1.5.0/gradual-scale-up-affinity-https-plus.png
new file mode 100644
index 0000000000..dfb5c96b85
Binary files /dev/null and b/tests/results/zero-downtime-scale/1.5.0/gradual-scale-up-affinity-https-plus.png differ
diff --git a/tests/results/zero-downtime-scale/1.5.0/gradual-scale-up-http-oss.png b/tests/results/zero-downtime-scale/1.5.0/gradual-scale-up-http-oss.png
new file mode 100644
index 0000000000..84a44a1f4a
Binary files /dev/null and b/tests/results/zero-downtime-scale/1.5.0/gradual-scale-up-http-oss.png differ
diff --git a/tests/results/zero-downtime-scale/1.5.0/gradual-scale-up-http-plus.png b/tests/results/zero-downtime-scale/1.5.0/gradual-scale-up-http-plus.png
new file mode 100644
index 0000000000..1867a2230f
Binary files /dev/null and b/tests/results/zero-downtime-scale/1.5.0/gradual-scale-up-http-plus.png differ
diff --git a/tests/results/zero-downtime-scale/1.5.0/gradual-scale-up-https-oss.png b/tests/results/zero-downtime-scale/1.5.0/gradual-scale-up-https-oss.png
new file mode 100644
index 0000000000..84a44a1f4a
Binary files /dev/null and b/tests/results/zero-downtime-scale/1.5.0/gradual-scale-up-https-oss.png differ
diff --git a/tests/results/zero-downtime-scale/1.5.0/gradual-scale-up-https-plus.png b/tests/results/zero-downtime-scale/1.5.0/gradual-scale-up-https-plus.png
new file mode 100644
index 0000000000..1867a2230f
Binary files /dev/null and b/tests/results/zero-downtime-scale/1.5.0/gradual-scale-up-https-plus.png differ