Remove randomized startup delays #3075
Conversation
```erlang
        {undefined, undefined} ->
            ok;
        _ ->
            cuttlefish:warn("cluster_formation.randomized_startup_delay_range.min and "
```
FTR, Cuttlefish schema translation happens so early that this won't be visible in the regular or prelaunch log, at least that's what I observed. This is a general problem with logging at such really early node boot stages; we shouldn't try to address it in this PR (or even for 3.9).
The point is that existing config files with these options will pass validation 👍
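For reference, an existing rabbitmq.conf carrying these keys would look like the snippet below (the values are illustrative, not taken from the PR); after this change the keys still pass validation but only produce the deprecation warning, since the delays themselves are gone:

```
# Illustrative example of a pre-existing config file; these keys are now
# deprecated and have no effect, but still pass validation (a warning is logged).
cluster_formation.randomized_startup_delay_range.min = 5
cluster_formation.randomized_startup_delay_range.max = 60
```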
Merging into master (will backport after
This is the doc change for rabbitmq/rabbitmq-server#3075. In K8s, the cluster-operator nowadays uses Parallel (instead of OrderedReady) pod management policy. Therefore, we delete some sentences on recommending OrderedReady. There is no need to document the new config value `cluster_formation.internal_lock_retries` since it's too much implementation detail for this doc and we don't expect users to change this value.
Remove randomized startup delays (cherry picked from commit ed0ba6a)
Conflicts:
deps/rabbit/src/rabbit_mnesia.erl
deps/rabbit/src/rabbit_peer_discovery.erl
deps/rabbit/test/peer_discovery_classic_config_SUITE.erl
deps/rabbitmq_peer_discovery_k8s/src/rabbit_peer_discovery_k8s.erl
Backported to
Since we bumped the minimum supported RabbitMQ version to v3.9.0 in #1110, we can delete the deprecated `cluster_formation.randomized_startup_delay_range` configurations. See rabbitmq/rabbitmq-server#3075. Prior to this commit, the RabbitMQ logs contained the following warning: ``` 2022-08-15 08:18:03.870480+00:00 [warn] <0.130.0> cluster_formation.randomized_startup_delay_range.min and cluster_formation.randomized_startup_delay_range.max are deprecated ```
On initial cluster formation, only one node in a multi node cluster
should initialize the Mnesia database schema (i.e. form the cluster).
To ensure this when nodes start up in parallel,
RabbitMQ peer discovery backends have used
either locks or randomized startup delays.
Locks work great: when a node holds the lock, it either starts as a new
blank node (if there is no other node in the cluster yet) or joins
an existing node. This makes it impossible for two nodes to form
the cluster at the same time.
The Consul and etcd peer discovery backends use locks. The lock is acquired
in Consul and etcd, respectively.
For other peer discovery backends (classic, K8s, AWS), randomized
startup delays were used. They work well enough in most cases.
However, in rabbitmq/cluster-operator#662 we
observed that in 1% - 10% of the cases (the more nodes, or the
smaller the randomized startup delay range, the higher the chances), two
nodes decided to form the cluster. That's bad since it ends up as a
single Erlang cluster but two RabbitMQ clusters. Even worse, no
obvious alert was triggered and no error message was logged.
To solve this issue, one could increase the randomized startup delay
range from e.g. 0m - 1m to 0m - 3m. However, this makes initial cluster
formation very slow since it will take up to 3 minutes until
every node is ready. In rare cases, we still end up with two nodes
forming the cluster.
Another way to solve the problem is to name a dedicated node to be the
seed node (forming the cluster). This was explored in
rabbitmq/cluster-operator#689 and works well.
Two minor downsides to this approach are: 1. If the seed node never
becomes available, the whole cluster won't be formed (which is okay),
and 2. it doesn't integrate with existing dynamic peer discovery backends
(e.g. K8s, AWS) since nodes are not yet known at deploy time.
In this PR, we take a better approach: we remove randomized startup
delays altogether and replace them with locks. However, instead of
implementing our own locking in an external system (e.g. in K8s),
we re-use Erlang's built-in locking mechanism, global:set_lock/3.
global:set_lock/3 has some convenient properties:
1. It accepts a list of nodes to set the lock on.
2. The nodes in that list connect to each other (i.e. create an Erlang
cluster).
3. The method is synchronous with a timeout (number of retries). It
blocks until the lock becomes available.
4. If a process that holds a lock dies, or the node goes down, the lock
held by the process is deleted.
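As an illustration of how such a lock can guard cluster formation, here is a minimal sketch around global:set_lock/3. The module, function names and retry count are hypothetical, not the actual rabbit_peer_discovery code:

```erlang
%% Minimal sketch only: module/function names are hypothetical,
%% not the actual rabbit_peer_discovery implementation.
-module(formation_lock_sketch).
-export([with_lock/2]).

%% Run Fun() while holding a cluster-wide lock set on the discovered Nodes.
%% global:set_lock/3 retries until the lock becomes available (or the retry
%% budget is exhausted); if the holding process or its node dies, the lock
%% is released automatically.
with_lock(Nodes, Fun) ->
    LockId = {cluster_formation, node()},
    Retries = 10, %% illustrative; RabbitMQ exposes cluster_formation.internal_lock_retries
    case global:set_lock(LockId, Nodes, Retries) of
        true ->
            try
                Fun() %% e.g. join an existing node or start as a blank node
            after
                global:del_lock(LockId, Nodes)
            end;
        false ->
            {error, failed_to_acquire_lock}
    end.
```

The retry count in RabbitMQ is configurable via the cluster_formation.internal_lock_retries setting mentioned in the doc-change comment above.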
The list of nodes passed to global:set_lock/3 corresponds to the nodes
the peer discovery backend discovers (lists).
Two special cases worth mentioning:
1. That list can be all desired nodes in the cluster
(e.g. in classic peer discovery where nodes are known at
deploy time) while only a subset of nodes is available.
In that case, global:set_lock/3 still sets the lock; it does not
block until all nodes can be connected to. This is good since
nodes might start sequentially (non-parallel).
2. In dynamic peer discovery backends (e.g. K8s, AWS), this
list can be just a subset of the desired nodes since nodes might not start up
in parallel. That's also not a problem as long as the following
requirement is met: "The peer discovery backend does not list two disjoint
sets of nodes (on different nodes) at the same time."
For example, in a 2-node cluster, the peer discovery backend must not
list only node 1 on node 1 and only node 2 on node 2.
Existing peer discovery backends fulfil that requirement because the
resource the nodes are discovered from is global.
For example, in K8s, once node 1 is part of the Endpoints object, it
will be returned on both node 1 and node 2.
Likewise, in AWS, once node 1 has started, the list of instances described
with a specific tag will include node 1, whether the AWS peer discovery backend
runs on node 1 or on node 2.
Removing randomized startup delays also makes cluster formation
considerably faster (up to 1 minute faster if that was the
upper bound in the range).
How these changes were tested
K8s peer discovery backend:
Deployed cluster-operator v1.7.0 on a GKE cluster.
rabbitmq.yml
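The rabbitmq.yml manifest was collapsed in the original page; a minimal sketch of what such a manifest could look like, assuming a plain 9-replica RabbitmqCluster (the name is made up), is:

```yaml
# Hypothetical manifest for the test, not the author's exact file
apiVersion: rabbitmq.com/v1beta1
kind: RabbitmqCluster
metadata:
  name: test-cluster
spec:
  replicas: 9
```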
Deployed 100 9-node RabbitMQ clusters in sequence, checking that all 9 nodes got clustered and that there were no container restarts (thanks @mkuratczyk):
The following script also stores all the pod logs
Classic peer discovery backend:
Same as K8s peer discovery backend but using https://github.com/ansd/cluster-operator/tree/peer-discovery-classic
AWS peer discovery backend:
mykey: myvalue
On all 3 instances, do the following:
```
ssh ubuntu@<public IP> -i <pem file>
sudo ln -s /usr/bin/python3 /usr/bin/python
make run-broker TEST_TMPDIR="/home/ubuntu/rabbitmq-server/tmp/test" RABBITMQ_CONFIG_FILE="/home/ubuntu/rabbitmq-server/tmp/rabbitmq.conf" RABBITMQ_SERVER_ADDITIONAL_ERL_ARGS="-setcookie mycookie"
```
rabbitmq.conf:
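The rabbitmq.conf contents were also collapsed; a plausible minimal configuration for this AWS test, assuming peers are discovered via the mykey: myvalue instance tag mentioned above (the region and the use of tag-based discovery are assumptions), would be:

```
# Illustrative sketch, not the author's exact rabbitmq.conf
cluster_formation.peer_discovery_backend = aws
cluster_formation.aws.region = eu-west-1
cluster_formation.aws.instance_tags.mykey = myvalue
```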
TODOs