Implement Event Processor load balancing strategy improvements #11768
Labels
blocking-release, Client, Event Hubs
Summary
The current implementation of the load balancer for the EventProcessor<TPartition> uses a single approach for claiming partition ownership, optimized for fair distribution and for avoiding "partition bouncing," where processors that are starting up battle for control and steal partitions from one another. This is a solid strategy for the majority of cases, but it does not scale well for more advanced scenarios. For example, when there are two thousand partitions, the time needed for the active processors to fully distribute them is unreasonably long and leads to an undesirable delay in processing.

To ensure that the processor can better meet the needs of our varied scenarios, it should support multiple strategies so that applications using the client can choose the one that best suits their unique needs.
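To make the proposal concrete, below is a rough sketch of what a strategy option could look like. The LoadBalancingStrategy name and its Balanced and Greedy members are illustrative assumptions for discussion; the actual shape would come from the accepted design.

```csharp
// Illustrative sketch only; the name and members are assumptions, pending
// the accepted design for the load balancer strategies.
public enum LoadBalancingStrategy
{
    // Claim a single partition per load balancing cycle, favoring an even,
    // low-churn distribution across the active processors.  (This mirrors
    // the current behavior.)
    Balanced,

    // Claim every eligible partition as quickly as possible, favoring fast
    // start-up over minimizing ownership churn while processors converge.
    Greedy
}
```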
Scope of Work
Enhance the EventProcessor<TPartition> load balancer to enable different strategies for claiming partitions, based on the accepted design.
Enhance the EventProcessor<TPartition> to accept the desired strategy for load balancing as part of its options.
Enhance the EventProcessorClient to accept the desired strategy for load balancing as part of its options (see the usage sketch following this list).
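Assuming a strategy enumeration along the lines sketched in the Summary is exposed by the library, usage through the client options might look like the following. The LoadBalancingStrategy property on EventProcessorClientOptions is part of this proposal and does not exist yet; it is shown only to illustrate the intent.

```csharp
// Hypothetical usage; the LoadBalancingStrategy option shown here is the
// proposed addition and is not part of the current options surface.
using Azure.Messaging.EventHubs;
using Azure.Messaging.EventHubs.Consumer;
using Azure.Storage.Blobs;

var storageClient = new BlobContainerClient(
    "<< STORAGE CONNECTION STRING >>",
    "<< BLOB CONTAINER NAME >>");

var options = new EventProcessorClientOptions
{
    // Proposed option: trade a temporary increase in ownership churn for a
    // faster initial claim of partitions.
    LoadBalancingStrategy = LoadBalancingStrategy.Greedy
};

var processor = new EventProcessorClient(
    storageClient,
    EventHubConsumerClient.DefaultConsumerGroupName,
    "<< EVENT HUBS CONNECTION STRING >>",
    "<< EVENT HUB NAME >>",
    options);
```

A matching option on the EventProcessor<TPartition> options would give custom processor implementations the same choice.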
Out of Scope
Success Criteria
The EventProcessor<TPartition> load balancer understands multiple strategies for claiming ownership of partitions to process, and can act on each strategy as appropriate.
The processor types accept a load balancing strategy as an option and correctly apply it.
The tests necessary for its validation have been created or adjusted and pass reliably; tests that do not focus on the extension points have been removed and no base functionality is included.
The existing live test suite continues to produce deterministic results and pass reliably.
Related Issues and References