fix: Optimize fetching samples logic #26060
Conversation
LGTM
Codecov Report: all modified and coverable lines are covered by tests ✅

@@            Coverage Diff             @@
##           master   #26060      +/-   ##
==========================================
+ Coverage   69.08%   69.10%   +0.01%
==========================================
  Files        1941     1940       -1
  Lines       75892    75869      -23
  Branches     8443     8443
==========================================
- Hits        52431    52426       -5
+ Misses      21286    21268      -18
  Partials     2175     2175

Flags with carried forward coverage won't be shown.
LGTM
(cherry picked from commit bd8951e)
SUMMARY
Not a fix per se for #25995, but an alteration based on #25995 (comment). Per this code block, my hypothesis appears to be correct: although a cache key is known a priori, the cache entry is only materialized when the query succeeds. Thus, in our case, if the COUNT(*) query succeeds but the sample-data query fails, we should be clearing the cache for the former rather than the latter.

BEFORE/AFTER SCREENSHOTS OR ANIMATED GIF
TESTING INSTRUCTIONS
CI.
ADDITIONAL INFORMATION
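As additional context, the caching behavior described in the SUMMARY can be sketched as follows. This is a minimal illustration, not Superset's actual implementation: all function names, the dict-backed cache, and the query strings are hypothetical.

```python
# Hypothetical sketch: a cache key is derivable up front, but the entry is
# only materialized when the query succeeds. If COUNT(*) succeeds and the
# samples query then fails, only the count's entry exists in the cache, so
# that is the key that must be evicted.

cache = {}


def cache_key(query: str) -> str:
    # Illustrative key derivation; known a priori, before execution.
    return f"key:{query}"


def run_cached(query: str, executor):
    key = cache_key(query)
    result = executor(query)  # raises on failure; nothing is cached then
    cache[key] = result       # entry materialized only on success
    return result


def fetch_samples(count_query: str, samples_query: str, executor):
    run_cached(count_query, executor)
    try:
        return run_cached(samples_query, executor)
    except Exception:
        # Samples query failed after COUNT(*) succeeded: evict the
        # already-materialized count entry so a retry recomputes both.
        cache.pop(cache_key(count_query), None)
        raise
```

On the failure path, a retry then sees a cold cache for both queries instead of a stale count paired with missing samples, which is the inconsistency the PR is addressing.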