
[Proposal] Result level cache in Druid at Brokers #4843

Closed
himanshug opened this issue Sep 25, 2017 · 3 comments


himanshug commented Sep 25, 2017

Druid currently supports a per-segment cache at historicals and at brokers (the broker-side cache is often not recommended, since it prevents historicals from performing merges). Even with the cache enabled, the broker still needs to do all of the merging.
Many users deploy custom caches outside of Druid, in systems like memcached and Redis, to cache end query responses from brokers. This has proven beneficial in many different use cases, but it requires extra work from each cluster operator.

So, this proposal is to build a result-level cache in Druid, probably using the existing Cache interface, so that users could reuse existing cache implementations to store the cached data. The implementation would also use the "etag" facility already available in QueryResource to quickly determine whether a query response can be returned immediately from the cache or must be computed.

Since query results can also be very large, there would be an upper limit on the size of responses that can be cached.
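The described flow — look up by etag, serve from cache on a hit, and refuse to cache oversized results — could be sketched roughly as follows. This is a minimal illustration only; the class and method names are hypothetical and do not correspond to Druid's actual Cache interface or QueryResource code:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical sketch of a broker-side result-level cache keyed by the
// query's etag, with an upper bound on the size of any single cached
// response. Names are illustrative, not Druid's real API.
public class ResultLevelCache
{
  private final Map<String, byte[]> cache = new ConcurrentHashMap<>();
  private final int maxResultBytes;

  public ResultLevelCache(int maxResultBytes)
  {
    this.maxResultBytes = maxResultBytes;
  }

  // Store a serialized query result, but only if it fits under the cap;
  // oversized results are simply not cached.
  public boolean put(String etag, byte[] serializedResult)
  {
    if (serializedResult.length > maxResultBytes) {
      return false;
    }
    cache.put(etag, serializedResult);
    return true;
  }

  // Return the cached result for this etag, or null, in which case the
  // broker must compute the query as usual.
  public byte[] getIfMatch(String etag)
  {
    return cache.get(etag);
  }
}
```

In this sketch the etag acts as the cache key: if the etag computed for an incoming query matches a stored entry, the serialized response is returned without any per-segment fetching or merging.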

himanshug commented

@a2l007 can you please look into this? Thanks.


gianm commented Sep 25, 2017

> Considering query results could be very large also, so there would be an upper limit on the size of response that could be cached.

If implemented for the regular cache, then we could enable groupBy caching by default, as there would be no concern about blowing through the cache too early.

leventov commented

Should it be closed now?


3 participants