Describe the bug
Given a long quorum queue, if I apply a policy to limit the queue's length (in a real-world scenario, likely with the intention of preventing further queue growth and running out of memory), a significant memory spike occurs while the messages above the new threshold are dropped. This can easily have the opposite of the intended effect: I run out of memory because I was trying to prevent running out of memory...
In my particular case it was even "funnier": I had a cluster on Kubernetes and applied a policy; the leader was OOMkilled, a new leader was elected, tried to apply the policy, and was OOMkilled in turn. The remaining node survived only because a leader could not be elected, but as soon as one of the other nodes restarted, a leader was elected and OOMkilled again. A policy that was meant to limit memory usage caused an OOMkill loop. :)
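(On Kubernetes, the crash loop is easy to observe by watching pod restarts; the label selector below is illustrative and depends on how the cluster was deployed, e.g. via the Cluster Operator:
kubectl get pods -l app.kubernetes.io/name=rabbitmq -w)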
Reproduction steps
make run-broker (tested on main)
Publish a significant number of messages: perf-test -qq -u qq -x 4 -y 0 -c 100 -s 5000 -ms -C 1250000
Apply a policy that sets the limit to a low value: rabbitmqctl set_policy max qq '{"max-length": 1234}'
Observe memory usage
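For example, one simple way to watch it (a minimal sketch; any metrics or monitoring setup will show the same spike):
watch -n 1 'rabbitmq-diagnostics memory_breakdown'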
Expected behavior
Ideally there should be no significant spike when dropping messages.
Additional context
No response