
Use dedicated capsule for processing sidekiq-alive queue #96

Merged
andrcuns merged 2 commits into master on Sep 6, 2023

Conversation

andrcuns
Collaborator

This introduces a separate capsule for handling the sidekiq-alive queue and makes it sequential, because we don't really need to process multiple alive jobs in parallel.

  • I think it improves the design by not mutating the "default" capsule
  • Uses a capsule-local connection pool
  • It should solve an issue I encountered in my own application; more context here: https://gitlab.com/dependabot-gitlab/dependabot/-/issues/293#note_1408148243. TL;DR: if an app fills all of the threads configured for Sidekiq with long-running jobs, the alive job may not get picked up for long enough that the app starts being reported as unhealthy (a configuration sketch follows this list)
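
As referenced above, a rough sketch of the approach: in Sidekiq 7, a dedicated capsule can be configured along these lines (the queue name and concurrency value are illustrative, not necessarily the exact code in this PR):

```ruby
# Rough sketch assuming Sidekiq 7's capsule API; names and values are illustrative.
Sidekiq.configure_server do |config|
  # A separate capsule gets its own thread pool and its own Redis pool,
  # so alive jobs no longer compete with regular work for threads.
  config.capsule("sidekiq-alive") do |capsule|
    capsule.queues = ["sidekiq-alive"] # illustrative queue name
    capsule.concurrency = 1            # sequential: alive jobs need no parallelism
  end
end
```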

andrcuns added the enhancement (New feature or request) label on Aug 30, 2023
```diff
@@ -28,7 +28,7 @@ def delete(key)
   end

   def ttl(...)
-    Sidekiq.redis { |redis| redis.ttl(...) }
+    redis { |r| r.ttl(...) }
```
andrcuns (Collaborator, Author):
Updated both implementations to work the same way.


```ruby
def redis(&block)
  # Default to Sidekiq.redis if the capsule is not configured yet but the redis adapter is accessed
  (@capsule || Sidekiq).redis(&block)
end
```
andrcuns (Collaborator, Author):
The Sidekiq documentation states that the capsule-specific redis pool should be used. We need this fallback because the method returning the redis adapter is a class method and can be accessed before the actual configuration has been performed (tests currently use it that way).
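
For illustration, the fallback only matters before the capsule is wired in. A hypothetical sketch of that lifecycle (the `SidekiqAlive.redis` accessor and the `capsule=` setter are assumptions for the example, not necessarily the gem's actual API):

```ruby
# Hypothetical lifecycle; names are illustrative.
SidekiqAlive.redis.ttl("alive-key")      # before configuration: falls back to Sidekiq.redis

Sidekiq.configure_server do |config|
  config.capsule("sidekiq-alive") do |capsule|
    SidekiqAlive.redis.capsule = capsule # hypothetical setter assigning @capsule
  end
end

SidekiqAlive.redis.ttl("alive-key")      # after configuration: uses the capsule-local pool
```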


```ruby
before do
  allow(SidekiqAlive).to(receive(:fork) { 1 })
  allow(sq_config).to(receive(:on).with(:startup) { |&arg| arg.call })

  SidekiqAlive.instance_variable_set(:@redis, nil)
end
```
andrcuns (Collaborator, Author):

The redis adapter is memoized at the class level. Because it gets created in tests, we reset it here just to make sure everything still works correctly for the scenario where a full configuration cycle was performed and the adapter should return the capsule-specific redis pool.
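
A minimal sketch of the memoization being reset here, assuming a conventional `||=` cache (the factory method name is hypothetical):

```ruby
# Illustrative shape only: the adapter is cached in @redis at the class level,
# which is why the spec clears it via instance_variable_set(:@redis, nil).
def self.redis
  @redis ||= build_redis_adapter # build_redis_adapter is a hypothetical factory
end
```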

andrcuns (Collaborator, Author):

@arturictus Could you please review this one and let me know what you think?

andrcuns (Collaborator, Author):

Did some testing in my app, and it seems we do need more than one connection in the pool, because calls like unregister_current_instance can run concurrently.

With just one connection in the pool I ended up getting ConnectionPool::TimeoutError: Waited 1 sec, 0/1 available.
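
Assuming the capsule-local Redis pool is sized from the capsule's concurrency, the fix would look roughly like this (a hedged sketch, not the exact diff from this PR):

```ruby
# Sketch under the assumption that pool size follows capsule concurrency:
# two connections let the alive job and a concurrent call such as
# unregister_current_instance check out Redis connections at the same time.
Sidekiq.configure_server do |config|
  config.capsule("sidekiq-alive") do |capsule|
    capsule.concurrency = 2 # > 1 avoids ConnectionPool::TimeoutError under contention
  end
end
```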

arturictus (Owner):

This looks great! Thanks @andrcuns for this improvement.

andrcuns merged commit 46ceb55 into master on Sep 6, 2023