new reference architecture:
georgewallace committed Dec 4, 2024
1 parent e14d748 commit 06aa69e
Showing 9 changed files with 201 additions and 0 deletions.
160 changes: 160 additions & 0 deletions docs/en/reference-architectures/hot-frozen.asciidoc
@@ -0,0 +1,160 @@
[[hot-frozen-architecture]]
== Hot/Frozen - High Availability

The Hot/Frozen High Availability architecture is cost optimized for large time-series datasets.
In this architecture, the hot tier is primarily used for indexing, searching, and continuity for automated processes.
https://www.elastic.co/guide/en/elasticsearch/reference/current/searchable-snapshots.html[Searchable snapshots] are taken from the hot tier into a repository, such as a cloud object store or an on-premises shared filesystem, and then cached to any desired volume on the local disks of the frozen tier.
Data in the repository is indexed for fast retrieval and accessed on-demand from the frozen nodes.
Index and snapshot lifecycle management are used to automate this process.
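
For illustration, a minimal index lifecycle management (ILM) policy sketch is shown below: it rolls data over in the hot tier and then moves it to the frozen tier as a searchable snapshot. The policy name, repository name, and thresholds are placeholders, not recommendations, and should be adapted to your retention requirements.

[source,console]
----
PUT _ilm/policy/hot-frozen-logs
{
  "policy": {
    "phases": {
      "hot": {
        "actions": {
          "rollover": { "max_primary_shard_size": "50gb", "max_age": "30d" }
        }
      },
      "frozen": {
        "min_age": "30d",
        "actions": {
          "searchable_snapshot": { "snapshot_repository": "my-snapshot-repo" }
        }
      },
      "delete": {
        "min_age": "365d",
        "actions": { "delete": {} }
      }
    }
  }
}
----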

This architecture is ideal for time-series use cases, such as Observability or Security, that do not require updating data after it is ingested.
All the necessary components of the {stack} are included.
This architecture is not intended for sizing workloads, but rather serves as a basis to ensure that your cluster is ready to handle any desired workload with resiliency.
A very high-level representation of data flow is included. For more detail about ingest architecture, see our {ingest-guide}/use-case-arch.html[ingest architectures] documentation.

[discrete]
[[hot-frozen-use-case]]
=== Hot/Frozen use case

This Hot/Frozen – High Availability architecture is intended for organizations that:

* Have a requirement for cost-effective, long-term data storage (many months or years).
* Provide insights and alerts using logs, metrics, traces, or various event types to ensure optimal performance and quick issue resolution for applications.
* Apply https://www.elastic.co/guide/en/kibana/current/xpack-ml-anomalies.html[machine learning anomaly detection] to help detect patterns in time series data to find root cause and resolve problems faster.
* Use an AI assistant (https://www.elastic.co/guide/en/observability/current/obs-ai-assistant.html[Observability], https://www.elastic.co/guide/en/security/current/security-assistant.html[Security], or https://www.elastic.co/guide/en/kibana/current/playground.html[Playground]) for investigation, incident response, reporting, query generation, or query conversion from other languages using natural language.
* Deploy an architecture model that allows for maximum flexibility between storage cost and performance.

[IMPORTANT]
====
**Automated operations that frequently read large data volumes require both high availability (replicas) and predictable low latency (hot, warm, or cold tier).**

* Common examples of these tasks include look-back windows on security detection/alert rules, transforms, machine learning jobs, or watches, as well as long-running scroll queries or external extract processes.
* These operations should be completed before the data moves to the frozen tier.
====

[discrete]
[[hot-frozen-architecture-diagram]]
=== Architecture

image::images/hot-frozen.png["A Hot/Frozen Highly available architecture"]

TIP: We use an Availability Zone (AZ) concept in the architecture above.
When running in your own data center, you can equate AZs to failure zones within a datacenter, racks, or even separate physical machines, depending on your constraints.

The diagram illustrates an {es} cluster deployed across three availability zones (AZs). For production, we recommend a minimum of two availability zones, and three availability zones for mission-critical applications. See https://www.elastic.co/guide/en/cloud/current/ec-planning.html[Plan for production] for more details. A cluster running in {ecloud} with data nodes in only two AZs will create a third master-eligible node in a third AZ. True real-time high availability cannot be achieved without three zones for any distributed computing technology.

The number of data nodes shown for each tier (hot and frozen) is illustrative and should be scaled depending on ingest volume and retention period. Hot nodes contain both primary and replica shards. By default, primary and replica shards are always guaranteed to be in different availability zones in {ess}, but when self-deploying, https://www.elastic.co/guide/en/elasticsearch/reference/master/modules-cluster.html#shard-allocation-awareness[shard allocation awareness] must be configured (a minimal sketch follows below). Frozen nodes act as a large high-speed cache and retrieve data from the snapshot store as needed.
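
For self-managed clusters, a minimal sketch of enabling shard allocation awareness is shown below, assuming each node has been started with a `node.attr.zone` attribute set in its `elasticsearch.yml`; the attribute name `zone` is only an example:

[source,console]
----
PUT _cluster/settings
{
  "persistent": {
    "cluster.routing.allocation.awareness.attributes": "zone"
  }
}
----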

Machine learning nodes are optional but highly recommended for large-scale time series use cases, since the amount of data quickly becomes too large to analyze manually. Applying techniques such as machine learning based anomaly detection or Search AI with large language models helps to dramatically speed up problem identification and resolution.

[discrete]
[[hot-frozen-hardware]]
=== Recommended hardware specifications

With {ecloud} you can deploy clusters in AWS, Azure, and Google Cloud. Available hardware types and configurations vary across all three cloud providers but each provides instance types that meet our recommendations for the node types used in this architecture. For more details on these instance types, see our documentation on {ecloud} hardware for https://www.elastic.co/guide/en/cloud/current/ec-default-aws-configurations.html[AWS], https://www.elastic.co/guide/en/cloud/current/ec-default-azure-configurations.html[Azure], and https://www.elastic.co/guide/en/cloud/current/ec-default-gcp-configurations.html[GCP]. The **Physical** column below is guidance, based on the cloud node types, when self-deploying {es} in your own data center.

In the links provided above, Elastic has performance-tested hardware for each of the cloud providers to find the optimal hardware for each node type. We use ratios to represent the best mix of CPU, RAM, and disk for each type. In some cases the CPU-to-RAM ratio is key; in others, the disk-to-memory ratio and the type of disk are critical. Significantly deviating from these ratios may seem like a way to save on hardware costs, but it may result in an {es} cluster that does not scale and perform well.

This table shows our specific recommendations for nodes in a Hot/Frozen architecture.

|===
| **Type** | **AWS** | **Azure** | **GCP** | **Physical**

| image:images/hot.png["Hot data node"]
| c6gd
| f32sv2
| N2
| 16-32 vCPU +
64 GB RAM +
2-6 TB NVMe SSD

| image:images/frozen.png["Frozen data node"]
| i3en
| e8dsv4
| N2
| 8 vCPU +
64 GB RAM +
6-20+ TB NVMe SSD +
Depending on days cached

| image:images/machine-learning.png["Machine learning node"]
| m6gd
| f16sv2
| N2
| 16 vCPU +
64 GB RAM +
256 GB SSD

| image:images/master.png["Master node"]
| c5d
| f16sv2
| N2
| 8 vCPU +
16 GB RAM +
256 GB SSD

| image:images/kibana.png["Kibana node"]
| c6gd
| f16sv2
| N2
| 8-16 vCPU +
8 GB RAM +
256 GB SSD
|===

[discrete]
[[hot-frozen-considerations]]
=== Important considerations


**Updating data:**

* Typically, time series logging use cases are append-only and there is rarely a need to update documents. The frozen tier is read-only.

**Multi-AZ frozen tier:**

* Three availability zones is ideal, but at least two availability zones are recommended to ensure that there will be data nodes available in the event of an AZ failure.

**Shard management:**

* The most important foundational step to maintaining performance as you scale is proper shard management. This includes even shard distribution among nodes, shard size, and shard count. For a complete understanding of what shards are and how they should be used, refer to https://www.elastic.co/guide/en/elasticsearch/reference/current/size-your-shards.html[Size your shards]; a couple of quick checks are sketched below.
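
As an illustrative starting point, the following requests show per-node disk usage and the largest shards, which can help verify even shard distribution and reasonable shard sizes:

[source,console]
----
GET _cat/allocation?v=true

GET _cat/shards?v=true&h=index,shard,prirep,store,node&s=store:desc
----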

**Snapshots:**

* If auditable or business-critical events are being logged, a backup is necessary. The choice to back up data will depend on each individual business's needs and requirements. Refer to our https://www.elastic.co/guide/en/elasticsearch/reference/current/snapshots-register-repository.html[snapshot repository] documentation to learn more.
* To automate snapshots and attach them to index lifecycle management policies, refer to https://www.elastic.co/guide/en/elasticsearch/reference/current/snapshots-take-snapshot.html#automate-snapshots-slm[SLM (snapshot lifecycle management)]. A minimal example policy is sketched below.
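
A minimal snapshot lifecycle management policy sketch is shown below; the schedule, snapshot name, repository, and retention values are placeholders to adapt to your own requirements:

[source,console]
----
PUT _slm/policy/nightly-snapshots
{
  "schedule": "0 30 1 * * ?",
  "name": "<nightly-snap-{now/d}>",
  "repository": "my-snapshot-repo",
  "config": {
    "indices": ["*"],
    "include_global_state": true
  },
  "retention": {
    "expire_after": "30d",
    "min_count": 5,
    "max_count": 50
  }
}
----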

**Kibana:**

* If self-deploying outside of {ess}, ensure that {kib} is configured for https://www.elastic.co/guide/en/kibana/current/production.html#high-availability[high availability].

[discrete]
[[hot-frozen-estimate]]
=== How many nodes of each do you need?

It depends on:

* The type of data being ingested (such as logs, metrics, traces)
* The retention period of searchable data (such as 30 days, 90 days, 1 year)
* The amount of data you need to ingest each day
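
As a rough, purely illustrative example (the numbers are assumptions, not recommendations): ingesting 100 GB per day, keeping 30 days in the hot tier with one replica, and retaining data for one year works out to roughly 100 GB × 30 × 2 ≈ 6 TB of hot local storage, with the remaining ~335 days (≈ 33.5 TB) stored in the snapshot repository and served by the frozen tier, which caches only a fraction of that locally.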

You can https://www.elastic.co/contact[contact us] for an estimate and recommended configuration based on your specific scenario.

[discrete]
[[hot-frozen-resources]]
=== Resources and references

* https://www.elastic.co/guide/en/elasticsearch/reference/current/scalability.html[{es} - Get ready for production]

* https://www.elastic.co/guide/en/cloud/current/ec-prepare-production.html[{ecloud} - Preparing a deployment for production]

* https://www.elastic.co/guide/en/elasticsearch/reference/current/size-your-shards.html[Size your shards]
Binary file added docs/en/reference-architectures/images/frozen.png
Binary file added docs/en/reference-architectures/images/hot-frozen.png
Binary file added docs/en/reference-architectures/images/hot.png
Binary file added docs/en/reference-architectures/images/kibana.png
Binary file added docs/en/reference-architectures/images/machine-learning.png
Binary file added docs/en/reference-architectures/images/master.png
6 changes: 6 additions & 0 deletions docs/en/reference-architectures/index.asciidoc
@@ -0,0 +1,6 @@
[[reference-architectures]]
= Reference architectures

include::reference-architectures-overview.asciidoc[]

include::hot-frozen.asciidoc[]
35 changes: 35 additions & 0 deletions docs/en/reference-architectures/reference-architectures-overview.asciidoc
@@ -0,0 +1,35 @@
[[reference-architectures-overview]]
= Reference architectures

Elasticsearch reference architectures are blueprints for deploying Elasticsearch clusters tailored to different use cases. Whether you're handling logs or metrics, these reference architectures focus on scalability, reliability, and efficient resource utilization. Use these guidelines to deploy Elasticsearch for your use case.

These architectures are designed by architects and engineers to provide standardized, proven solutions that help you to follow best practices when deploying {es}. Some of the key areas of focus are listed below.

* High availability
* Scalability

TIP: These architectures apply to deployments that you run on-premises or in the cloud. If you are using Elastic serverless, your {es} clusters are autoscaled and fully managed by Elastic. For all the deployment options, refer to https://www.elastic.co/guide/en/elasticsearch/reference/current/elasticsearch-intro-deploy.html[Run Elasticsearch].

These reference architectures are recommendations and should be adapted to fit your specific environment and needs. Each solution can vary based on the unique requirements and conditions of your deployment. These architectures describe how to deploy cluster components; for information about designing ingest architectures to feed content into your cluster, refer to https://www.elastic.co/guide/en/ingest/current/use-case-arch.html[Ingest architectures].

[discrete]
[[reference-architectures-time-series-2]]
=== Architectures

[cols="50, 50"]
|===
| *Architecture* | *When to use*
| <<hot-frozen-architecture>>

A high availability architecture that is cost optimized for large time-series datasets.

a|
* Have a requirement for cost-effective, long-term data storage (many months or years).
* Provide insights and alerts using logs, metrics, traces, or various event types to ensure optimal performance and quick issue resolution for applications.
* Apply Machine Learning and Search AI to assist in dealing with the large amount of data.
* Deploy an architecture model that allows for maximum flexibility between storage cost and performance.
| Additional architectures are on the way.

Stay tuned for updates. |

|===
