Try converting all inner joins to filters #13201
Conversation
Generic question: when we say "Try converting all inner joins to filters," does this include the case where the join would return a billion (or 100 billion) rows? That is, are we considering cardinality? Or are we doing this only in cases where we know the cardinality must be low (such as for inline tables)? In a traditional RDBMS, one looks at the (estimated) cardinality of the tables to determine whether this kind of conversion is safe. Set some threshold: 100 items? 1,000? 10K? If the estimated cardinality is above that, then the memory-for-time tradeoff doesn't work and we're better off retaining the join. If the code can't figure this out (Druid doesn't really do cost estimation), how can a user force the join choice? A new keyword? A query hint in SQL? A query context entry?
@paul-rogers: Thanks for the suggestion and explanation! :) Yes, we already have a configurable limit (default 10k elements) on the cardinality for this conversion. If the number of elements in the join table is above that limit, we don't convert the join to a filter and retain it as is. If the value is less than the threshold, then we convert the join to an IN filter.
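For concreteness, here is a minimal sketch of the threshold check being described. The class, method, and type names are hypothetical illustrations, not Druid's actual API; the 10k default is taken from the comment above.

```java
import java.util.Optional;
import java.util.Set;

// Hypothetical sketch: convert an inner join into an IN-style filter only when
// the build-side key set is small enough. Not Druid's actual implementation.
class JoinToFilterConverter
{
  static final int DEFAULT_MAX_FILTER_VALUES = 10_000; // default cutoff mentioned above

  /**
   * Returns an IN-style filter over the left-side join key if the right-side
   * key set fits under the threshold; otherwise empty, meaning "keep the join".
   */
  static Optional<InFilter> tryConvert(String leftKeyColumn, Set<String> rightKeyValues, int maxFilterValues)
  {
    if (rightKeyValues.size() > maxFilterValues) {
      // Above the cutoff, the memory-for-time tradeoff no longer pays off.
      return Optional.empty();
    }
    return Optional.of(new InFilter(leftKeyColumn, Set.copyOf(rightKeyValues)));
  }

  record InFilter(String column, Set<String> values) {}
}
```

In the scenario described in this thread, the right-side key values would come from the materialized build-side table, and the resulting filter would be applied to the left-side scan.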
This change is essentially reordering joins. (The old code, because it only picked off joins from the start of the clause list, did not reorder.)
I'm wondering if the new code correctly handles the verification of required columns. When we reorder clauses, we need to make sure that the reorder did not push a clause too far to the left, where it wouldn't be able to see its inputs anymore. Consider the case of two clauses, where the second clause uses output of the first in its condition, and where the second is otherwise convertible to filter but the first is not fully convertible. Is this handled properly? Do we have a test for this case?
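To make the required-column concern concrete, here is a hedged sketch of the kind of validation being asked about; all names are hypothetical, not the PR's actual code. The idea: in the proposed order, each clause must only require columns available from the base table or from clauses that still precede it.

```java
import java.util.HashSet;
import java.util.List;
import java.util.Set;

// Hypothetical sketch of a post-reorder validity check.
class ReorderValidator
{
  /** A join clause needs some input columns and produces some output columns. */
  interface JoinClause
  {
    Set<String> requiredColumns();
    Set<String> outputColumns();
  }

  /**
   * Returns true if every clause, in the proposed order, can see all of its
   * required columns: either base-table columns or outputs of earlier clauses.
   */
  static boolean isValidOrder(Set<String> baseColumns, List<JoinClause> clauses)
  {
    final Set<String> available = new HashSet<>(baseColumns);
    for (JoinClause clause : clauses) {
      if (!available.containsAll(clause.requiredColumns())) {
        return false; // clause was pushed left of an input it depends on
      }
      available.addAll(clause.outputColumns());
    }
    return true;
  }
}
```

In the two-clause case described above, a second clause whose condition uses the first clause's output would fail this check if conversion effectively reordered it ahead of the first clause.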
There's another potential issue I'm wondering about, related to reordering. If one of the earlier clauses is a right or full outer join, is it still safe to fully convert a later inner join to a filter? The earlier join could introduce NULLs that the converted filter wouldn't account for.
@rohangarg, thanks for the explanation about the cutoff quantity. That raises another question: how do we know the number? If a join is against an entire table, one can just look at the table cardinality (the sum of the row counts across all segments). If there is a filter on a table, estimating the post-filter row count is a difficult challenge, one that all query planners wrestle with (and for which Calcite provides a cost-based algorithm). Is this PR only considering the entire table cardinality?
Are we using Calcite to do the join reordering? If so, it would be surprising if Calcite allowed an invalid reordering. If we're doing it ourselves, then we do need to be careful: there are lots of tricky rules, such as the one Gian mentioned, for when join reordering is or is not allowed. Basically, if we're doing it ourselves, we should only reorder "plain" inner joins and we won't go wrong (though we may miss optimization opportunities). Later, perhaps we can feed Calcite some proper cost estimates (however crude) and Calcite can do the task for us.
@gianm:
In this case, by 'output of the join' do you mean a right-table column? If so, then the second join would not be fully converted: both joins would stay intact, and the filters derived/inferred from them would be pushed to the left side.
Thanks for catching that! You're right that fully converting inner joins after right/outer joins would lead to incorrect results, since those joins could introduce NULLs. Maybe we could keep such joins intact and still push the filter, but that can be taken up later after more thought. For now, I'll stop the clause conversion loop as soon as we see a 'righty' join, as sketched below. I'll also write a test for this case.
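A rough sketch of the amended loop, with hypothetical types rather than the actual patch:

```java
import java.util.List;

// Hypothetical sketch of the fix described above: stop converting clauses to
// filters as soon as a RIGHT or FULL ("righty") join is seen, since it can
// introduce NULLs that later filters would not account for.
class ClauseConversionLoop
{
  enum JoinType { INNER, LEFT, RIGHT, FULL }

  record Clause(JoinType type, String condition) {}

  static void partition(List<Clause> clauses, List<Clause> keptJoins, List<Clause> convertedToFilters)
  {
    boolean sawRightyJoin = false;
    for (Clause clause : clauses) {
      if (clause.type() == JoinType.RIGHT || clause.type() == JoinType.FULL) {
        sawRightyJoin = true; // everything from here on stays a join
      }
      if (!sawRightyJoin && clause.type() == JoinType.INNER) {
        convertedToFilters.add(clause); // eligible for join-to-filter conversion
      } else {
        keptJoins.add(clause); // retain the join as-is
      }
    }
  }
}
```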
Currently, we apply the join-to-filter optimization only when the right/build-side table has been fully materialized in memory, so we know the exact cardinality of that table.
No, we're not using Calcite's join reordering rule as of now. The current optimization can be thought of more as adaptive execution: after running all the necessary queries and building the right-side table, we decide whether to push some inferred filters to the left side before running the join. Yes, you're right: currently we've decided to do the filter inference only for inner joins among a group of inner and left outer joins (and only if the inferred filter is on left-table columns), and to leave out right/outer joins. Sure, in the future, as more join support and requirements come along, we could also use Calcite's rules to determine the join algorithm and ordering. Thanks a lot @gianm and @paul-rogers for your thorough review and suggestions!
Conflicts: processing/src/test/java/org/apache/druid/segment/join/JoinableFactoryWrapperTest.java
Facing #13289 in CI.
LGTM with the latest changes.
Thanks a lot for the review, @gianm and @paul-rogers.
Recently, support was added for partial pushdown of joins (those having duplicates or output-referencing columns) as filters. Still, in a list of joins, we currently stop converting joins to filters as soon as one of them doesn't convert fully to a filter. This change allows all joins to convert to filters, fully or partially, while maintaining that:
This PR has: