From 310cb59f8bad554c7a67fc8e7d6bacc7134e689d Mon Sep 17 00:00:00 2001
From: Ruben van Staden
Date: Sun, 27 Oct 2024 22:16:28 -0400
Subject: [PATCH 1/7] update GCP throughput number for 8.16.0

---
 .../processing-performance.asciidoc | 30 +++++++++----------
 1 file changed, 15 insertions(+), 15 deletions(-)

diff --git a/docs/en/observability/apm/troubleshooting/processing-performance.asciidoc b/docs/en/observability/apm/troubleshooting/processing-performance.asciidoc
index 9db4040411..ecd6db2225 100644
--- a/docs/en/observability/apm/troubleshooting/processing-performance.asciidoc
+++ b/docs/en/observability/apm/troubleshooting/processing-performance.asciidoc
@@ -29,47 +29,47 @@ specific setup, the size of APM event data, and the exact number of agents.

 | *1 GB* (10 agents)
-| 9,000
+| -
 events/second
-| 6,000
+| -
 events/second
-| 9,000
+| 17 100
 events/second

 | *4 GB* (30 agents)
-| 25,000
+| -
 events/second
-| 18,000
+| -
 events/second
-| 17,000
+| 34 840
 events/second

 | *8 GB* (60 agents)
-| 40,000
+| -
 events/second
-| 26,000
+| -
 events/second
-| 25,000
+| 47 630
 events/second

 | *16 GB* (120 agents)
-| 72,000
+| -
 events/second
-| 51,000
+| -
 events/second
-| 45,000
+| 90 020
 events/second

 | *32 GB* (240 agents)
-| 135,000
+| -
 events/second
-| 95,000
+| -
 events/second
-| 95,000
+| 143 100
 events/second

 |====

From 97f6d538372bbffaf500fa8f79b3a4daf9eaf67e Mon Sep 17 00:00:00 2001
From: Ruben van Staden
Date: Thu, 12 Dec 2024 16:59:24 -0500
Subject: [PATCH 2/7] update performance numbers for Azure and AWS for 8.16.0

---
 .../processing-performance.asciidoc | 20 +++++++++----------
 1 file changed, 10 insertions(+), 10 deletions(-)

diff --git a/docs/en/observability/apm/troubleshooting/processing-performance.asciidoc b/docs/en/observability/apm/troubleshooting/processing-performance.asciidoc
index ecd6db2225..30e173befc 100644
--- a/docs/en/observability/apm/troubleshooting/processing-performance.asciidoc
+++ b/docs/en/observability/apm/troubleshooting/processing-performance.asciidoc
@@ -29,45 +29,45 @@ specific setup, the size of APM event data, and the exact number of agents.

 | *1 GB* (10 agents)
-| -
+| 15 180
 events/second
-| -
+| 13 700
 events/second
 | 17 100
 events/second

 | *4 GB* (30 agents)
-| -
+| 28 890
 events/second
-| -
+| 26 020
 events/second
 | 34 840
 events/second

 | *8 GB* (60 agents)
-| -
+| 49 660
 events/second
-| -
+| 33 610
 events/second
 | 47 630
 events/second

 | *16 GB* (120 agents)
-| -
+| 96 120
 events/second
-| -
+| 57 480
 events/second
 | 90 020
 events/second

 | *32 GB* (240 agents)
-| -
+| 132 800
 events/second
-| -
+| 88 830
 events/second
 | 143 100
 events/second

From b734629ce8e3359519cc10f521c5b5c5de7c07a6 Mon Sep 17 00:00:00 2001
From: Ruben van Staden
Date: Fri, 13 Dec 2024 09:13:07 -0500
Subject: [PATCH 3/7] add exact hardware profile used in benchmarking

---
 .../apm/troubleshooting/processing-performance.asciidoc | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/docs/en/observability/apm/troubleshooting/processing-performance.asciidoc b/docs/en/observability/apm/troubleshooting/processing-performance.asciidoc
index 30e173befc..9fce721e69 100644
--- a/docs/en/observability/apm/troubleshooting/processing-performance.asciidoc
+++ b/docs/en/observability/apm/troubleshooting/processing-performance.asciidoc
@@ -7,7 +7,7 @@ agent and server settings, versions, and protocol.

 We tested several scenarios to help you understand how to size the APM Server
 so that it can keep up with the load that your Elastic APM agents are sending:
-* Using the default hardware template on AWS, GCP and Azure on {ecloud}.
+* Using the _CPU Optimized_ hardware template on AWS, GCP and Azure on {ecloud}, see link:https://www.elastic.co/guide/en/cloud/current/ec-configure-deployment-settings.html#ec-hardware-profiles[Hardware Profiles].
 * For each hardware template, testing with several sizes: 1 GB, 4 GB, 8 GB, and 32 GB.
 * For each size, using a fixed number of APM agents: 10 agents for 1 GB, 30 agents for 4 GB, 60 agents for 8 GB, and 240 agents for 32 GB.
 * In all scenarios, using medium sized events. Events include

From 3c9378b21f640f5df50999d1f7faf9b97e54e476 Mon Sep 17 00:00:00 2001
From: Ruben van Staden
Date: Fri, 13 Dec 2024 11:48:18 -0500
Subject: [PATCH 4/7] add hardware profile instances used

---
 .../apm/troubleshooting/processing-performance.asciidoc | 5 ++++-
 1 file changed, 4 insertions(+), 1 deletion(-)

diff --git a/docs/en/observability/apm/troubleshooting/processing-performance.asciidoc b/docs/en/observability/apm/troubleshooting/processing-performance.asciidoc
index 9fce721e69..1002e3d139 100644
--- a/docs/en/observability/apm/troubleshooting/processing-performance.asciidoc
+++ b/docs/en/observability/apm/troubleshooting/processing-performance.asciidoc
@@ -7,7 +7,10 @@ agent and server settings, versions, and protocol.

 We tested several scenarios to help you understand how to size the APM Server
 so that it can keep up with the load that your Elastic APM agents are sending:
-* Using the _CPU Optimized_ hardware template on AWS, GCP and Azure on {ecloud}, see link:https://www.elastic.co/guide/en/cloud/current/ec-configure-deployment-settings.html#ec-hardware-profiles[Hardware Profiles].
+* Using the _CPU Optimized_ hardware template on AWS, GCP and Azure on {ecloud} with the following instances, for more details see link:https://www.elastic.co/guide/en/cloud/current/ec-configure-deployment-settings.html#ec-hardware-profiles[Hardware Profiles]:
+ ** AWS: c6gd
+ ** Azure: fsv2
+ ** GCP: n2.68x32x45
 * For each hardware template, testing with several sizes: 1 GB, 4 GB, 8 GB, and 32 GB.
 * For each size, using a fixed number of APM agents: 10 agents for 1 GB, 30 agents for 4 GB, 60 agents for 8 GB, and 240 agents for 32 GB.
 * In all scenarios, using medium sized events. Events include

From ec57fdbcbf734ba2b2ca43cc2355b5012f0635b2 Mon Sep 17 00:00:00 2001
From: Ruben van Staden
Date: Mon, 6 Jan 2025 20:16:54 -0500
Subject: [PATCH 5/7] resolve reviewer comments

---
 .../processing-performance.asciidoc | 32 +++++++++----------
 1 file changed, 16 insertions(+), 16 deletions(-)

diff --git a/docs/en/observability/apm/troubleshooting/processing-performance.asciidoc b/docs/en/observability/apm/troubleshooting/processing-performance.asciidoc
index 1002e3d139..be0280b1d8 100644
--- a/docs/en/observability/apm/troubleshooting/processing-performance.asciidoc
+++ b/docs/en/observability/apm/troubleshooting/processing-performance.asciidoc
@@ -7,7 +7,7 @@ agent and server settings, versions, and protocol.

 We tested several scenarios to help you understand how to size the APM Server
 so that it can keep up with the load that your Elastic APM agents are sending:
-* Using the _CPU Optimized_ hardware template on AWS, GCP and Azure on {ecloud} with the following instances, for more details see link:https://www.elastic.co/guide/en/cloud/current/ec-configure-deployment-settings.html#ec-hardware-profiles[Hardware Profiles]:
+* Using the _CPU Optimized_ hardware template on AWS, GCP and Azure on {ecloud} with the following instances (for more details see link:https://www.elastic.co/guide/en/cloud/current/ec-configure-deployment-settings.html#ec-hardware-profiles[Hardware Profiles]):
 ** AWS: c6gd
 ** Azure: fsv2
 ** GCP: n2.68x32x45
@@ -32,47 +32,47 @@ specific setup, the size of APM event data, and the exact number of agents.

 | *1 GB* (10 agents)
-| 15 180
+| 15,180
 events/second
-| 13 700
+| 13,700
 events/second
-| 17 100
+| 17,100
 events/second

 | *4 GB* (30 agents)
-| 28 890
+| 28,890
 events/second
-| 26 020
+| 26,020
 events/second
-| 34 840
+| 34,840
 events/second

 | *8 GB* (60 agents)
-| 49 660
+| 49,660
 events/second
-| 33 610
+| 33,610
 events/second
-| 47 630
+| 47,630
 events/second

 | *16 GB* (120 agents)
-| 96 120
+| 96,120
 events/second
-| 57 480
+| 57,480
 events/second
-| 90 020
+| 90,020
 events/second

 | *32 GB* (240 agents)
-| 132 800
+| 132,800
 events/second
-| 88 830
+| 88,830
 events/second
-| 143 100
+| 143,100
 events/second

 |====

From 84568e3acbd2a2653815a4861e0f58c40318214d Mon Sep 17 00:00:00 2001
From: Ruben van Staden
Date: Tue, 7 Jan 2025 09:08:59 -0500
Subject: [PATCH 6/7] round numbers to the nearest 1000

---
 .../processing-performance.asciidoc | 30 +++++++++----------
 1 file changed, 15 insertions(+), 15 deletions(-)

diff --git a/docs/en/observability/apm/troubleshooting/processing-performance.asciidoc b/docs/en/observability/apm/troubleshooting/processing-performance.asciidoc
index be0280b1d8..6afa646131 100644
--- a/docs/en/observability/apm/troubleshooting/processing-performance.asciidoc
+++ b/docs/en/observability/apm/troubleshooting/processing-performance.asciidoc
@@ -32,47 +32,47 @@ specific setup, the size of APM event data, and the exact number of agents.

 | *1 GB* (10 agents)
-| 15,180
+| 15,000
 events/second
-| 13,700
+| 14,000
 events/second
-| 17,100
+| 17,000
 events/second

 | *4 GB* (30 agents)
-| 28,890
+| 29,000
 events/second
-| 26,020
+| 26,000
 events/second
-| 34,840
+| 35,000
 events/second

 | *8 GB* (60 agents)
-| 49,660
+| 50,000
 events/second
-| 33,610
+| 34,000
 events/second
-| 47,630
+| 48,000
 events/second

 | *16 GB* (120 agents)
-| 96,120
+| 96,000
 events/second
-| 57,480
+| 57,000
 events/second
-| 90,020
+| 90,000
 events/second

 | *32 GB* (240 agents)
-| 132,800
+| 133,000
 events/second
-| 88,830
+| 89,000
 events/second
-| 143,100
+| 143,000
 events/second

 |====

From 74391cd5b0ba1f9859f891f8d59516d722d18734 Mon Sep 17 00:00:00 2001
From: Ruben van Staden
Date: Tue, 7 Jan 2025 19:27:01 -0500
Subject: [PATCH 7/7] use representative link

---
 .../apm/troubleshooting/processing-performance.asciidoc | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/docs/en/observability/apm/troubleshooting/processing-performance.asciidoc b/docs/en/observability/apm/troubleshooting/processing-performance.asciidoc
index 6afa646131..136644d208 100644
--- a/docs/en/observability/apm/troubleshooting/processing-performance.asciidoc
+++ b/docs/en/observability/apm/troubleshooting/processing-performance.asciidoc
@@ -7,7 +7,7 @@ agent and server settings, versions, and protocol.

 We tested several scenarios to help you understand how to size the APM Server
 so that it can keep up with the load that your Elastic APM agents are sending:
-* Using the _CPU Optimized_ hardware template on AWS, GCP and Azure on {ecloud} with the following instances (for more details see link:https://www.elastic.co/guide/en/cloud/current/ec-configure-deployment-settings.html#ec-hardware-profiles[Hardware Profiles]):
+* Using the _CPU Optimized_ hardware template on AWS, GCP and Azure on {ecloud} with the following instances (for more details see {cloud}/ec-configure-deployment-settings.html#ec-hardware-profiles[Hardware Profiles]):
 ** AWS: c6gd
 ** Azure: fsv2
 ** GCP: n2.68x32x45
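
PATCH 6/7 rounds each measured throughput figure to the nearest 1,000 events/second. As a quick sanity check of that transformation (an editorial sketch, not part of the patch series; the dictionary key names are illustrative, not from the docs), Python's built-in `round` with a negative digit count reproduces the published values:

```python
# Measured throughputs (events/second) from PATCH 5/7, keyed as
# "<provider>_<instance size>". Key names are illustrative only.
measured = {
    "aws_1gb": 15_180,   "azure_1gb": 13_700,  "gcp_1gb": 17_100,
    "aws_4gb": 28_890,   "azure_4gb": 26_020,  "gcp_4gb": 34_840,
    "aws_8gb": 49_660,   "azure_8gb": 33_610,  "gcp_8gb": 47_630,
    "aws_16gb": 96_120,  "azure_16gb": 57_480, "gcp_16gb": 90_020,
    "aws_32gb": 132_800, "azure_32gb": 88_830, "gcp_32gb": 143_100,
}

# round() with ndigits=-3 rounds left of the decimal point,
# i.e. to the nearest 1,000.
rounded = {name: round(eps, -3) for name, eps in measured.items()}

print(rounded["azure_1gb"])  # 14000, as published in PATCH 6/7
print(rounded["aws_32gb"])   # 133000
```

None of the measured values sits exactly on a 500 boundary, so Python's round-half-to-even behavior never comes into play here.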