Allow separate *_keys for execution/enqueue/rate limits #753
Comments
@pgvsalamander thank you for the proposal! I'll need to think about it a bit more. I want to throw out a thought seed: instead of moving towards more specific keys, is there a more generic data structure we could use? I'm wondering what it would look like if jobs had more generic tags instead.
I like the idea, although there might be some performance concerns depending on the implementation. How would the tags be used? The only way that comes to mind would be applying a type to each tag so they're a type-value pair, and then applying the different type(s) to the different concurrency controls. Each job would receive a tag representing the job name, though I think that'd require some additional machinery.

More thought seeds:
- How are multiple tags of the same type, or missing tags, handled? Are they allowed at all, or are you required to define a specific set of tags on all jobs / on all jobs of a given type?
- Are the tags only a thing on the GoodJob backend, and displayed as separate pre-defined field types on the user side?
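To make the tag idea concrete, here is a minimal plain-Ruby sketch of how type-value tag pairs could be reduced to per-control keys. Nothing in it is an existing GoodJob API; the tag names, value formats, and the `|` separator are assumptions for illustration only.

```ruby
# Sketch only: each job carries type => value tag pairs, and each concurrency
# control declares which tag types make up its key. All names are hypothetical.
tags = { job_class: "SyncAccountJob", entity: "Account#1" }

control_tag_types = {
  enqueue:   [:job_class, :entity], # duplicate-enqueue protection per job + entity
  execution: [:entity],             # only one job touching the entity at a time
}

keys = control_tag_types.transform_values do |types|
  types.map { |type| tags.fetch(type) }.join("|")
end
# => { enqueue: "SyncAccountJob|Account#1", execution: "Account#1" }
```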
What

I think the ability to set separate job-identification keys would be useful, especially if/when rate limiting is implemented. The separate keys would be optional, defaulting to the value of `key` if a value is not provided / a lambda evaluates to `nil`.
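As a rough illustration of that defaulting rule (this is not current GoodJob behavior, just a restatement of the proposal; `resolve_key` is a made-up helper name):

```ruby
# Sketch of the proposed fallback: an unset key, or a lambda that evaluates to nil,
# falls back to the value of `key`.
def resolve_key(specific, fallback)
  value = specific.respond_to?(:call) ? specific.call : specific
  value.nil? ? fallback : value
end

key = "ExampleJob-Account#1"
resolve_key(nil, key)                 # => "ExampleJob-Account#1" (no value provided)
resolve_key(-> { nil }, key)          # => "ExampleJob-Account#1" (lambda evaluated to nil)
resolve_key(-> { "Account#1" }, key)  # => "Account#1"
```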
Why
My use case here is handling multiple jobs that interact with the same database entities, guarding against both duplicate jobs and write collisions.

The `enqueue_key` would be set using the job identifier and the entity identifier. This prevents duplicate jobs from being queued (same job + entity), while still allowing multiple jobs that reference the same entity to be queued (job A + entity 1, job B + entity 1).

The `execution_key` would be set using just the entity identifier, which prevents multiple jobs from modifying the same entity at the same time (job A + entity 1 and job B + entity 1 can both exist, but cannot run simultaneously).

If/when rate limiting is implemented it would be another layer on top of this, preserving the behavior above while ensuring that jobs sharing the same `ratelimit_key` don't collectively exceed the set limit.
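A minimal sketch of what this could look like on a job class, assuming the proposed options. `good_job_control_concurrency_with` and its `enqueue_limit:`/`perform_limit:` options exist in GoodJob today, but the `enqueue_key:` and `execution_key:` options, and the `SyncAccountJob` example itself, are hypothetical:

```ruby
class SyncAccountJob < ApplicationJob
  include GoodJob::ActiveJobExtensions::Concurrency

  good_job_control_concurrency_with(
    enqueue_limit: 1,
    perform_limit: 1,
    # Proposed option: only one queued copy of this particular job per entity.
    enqueue_key: -> { "#{self.class.name}-Account-#{arguments.first}" },
    # Proposed option: no job of any class may run against the entity concurrently.
    execution_key: -> { "Account-#{arguments.first}" }
  )

  def perform(account_id)
    # ...mutate the Account record...
  end
end
```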
How
I think the biggest part of this work would be accomplished by switching
.where(concurrency_key: ...)
to.where(execution_key: ...)
in/lib/good_job_active_job_extensions/concurrency.rb:95
, and.where(concurrency_key: ...)
to.where(enqueue_key: ...)
on lines 53, 55 of the same file. The aforementioned columns and properties would also need to be created, but those are more straightforward.If I'm on the right track with these changes and if you think they're a good fit for the project I'd be happy to try them out and put a MR in.
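For the schema side, a hedged sketch of what the new columns might look like. The column names follow the proposal, but the index choices (mirroring the existing partial index on `concurrency_key` for unfinished jobs) and the migration version are assumptions:

```ruby
class AddSeparateConcurrencyKeysToGoodJobs < ActiveRecord::Migration[7.0]
  def change
    # Proposed columns; per the proposal, each would fall back to `key` when unset.
    add_column :good_jobs, :enqueue_key, :text
    add_column :good_jobs, :execution_key, :text

    # Assumed: partial indexes on unfinished jobs, mirroring the concurrency_key index.
    add_index :good_jobs, :enqueue_key, where: "finished_at IS NULL",
              name: :index_good_jobs_on_enqueue_key_unfinished
    add_index :good_jobs, :execution_key, where: "finished_at IS NULL",
              name: :index_good_jobs_on_execution_key_unfinished
  end
end
```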
Tangentially related issues: