[mlir][tosa] Check for 0-ranked-tensors during fold #68512
Conversation
✅ With the latest revision this PR passed the C/C++ code formatter.
Force-pushed from 4a9fc20 to 958b387
Force-pushed from 8079711 to 4531a8c
Looking again at the spec, it says that the reduce and reverse operators need to take as input a tensor of rank 1 or higher. So the most appropriate solution here really is to change the TOSA operator specification to accept
Where can I find the specs?
Apologies for not adding it in my response. Here is the specification: https://www.mlplatform.org/tosa/tosa_spec.html
I think this change is good. @GeorgeARM is right that the spec doesn't support rank 0 right now, but we're reviewing support for rank 0 and we might change the restrictions on reductions to allow rank 0 to occur with result == input, as you do with this fold.
Just the one request for a better test name. Thanks!
// -----

// CHECK-LABEL: @rank_zero
Can you make a change to make the name more descriptive? Maybe something like fold_reduce_rank_zero or something similar.
Sure, I renamed it to fold_reduce_rank_zero.
// CHECK-LABEL: @rank_zero
func.func nested @rank_zero() {
  %0 = tensor.empty() : tensor<i32>
  %1 = tosa.reduce_min %0 {axis = 0 : i32} : (tensor<i32>) -> tensor<1x10xi32>
Also worth adding a CHECK-NOT label and ensuring that the reduce_min and reverse ops are not present after the canonicalization.
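The suggestion above could look roughly like the following in the test file (a sketch based on the snippet in this conversation; the final test name and the exact canonicalized output are assumptions, not the merged test):

```mlir
// CHECK-LABEL: @fold_reduce_rank_zero
// CHECK-NOT: tosa.reduce_min
// CHECK-NOT: tosa.reverse
func.func nested @fold_reduce_rank_zero() {
  %0 = tensor.empty() : tensor<i32>
  %1 = tosa.reduce_min %0 {axis = 0 : i32} : (tensor<i32>) -> tensor<1x10xi32>
  %2 = tosa.reverse %0 {axis = 0 : i32} : (tensor<i32>) -> tensor<i32>
  return
}
```

The CHECK-NOT lines make the test fail if either op survives canonicalization, rather than only checking that the label is present.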
Added the CHECK-NOT labels for both the ops.
Calling getDimSize() before checking for 0-ranked tensors triggers assertion failures. Fixes llvm#67761
Force-pushed from 4531a8c to 16321cf
LGTM
Thanks, looks good to me.
@gptsarthak was trying to merge the commit but I see the following
@GeorgeARM I don't understand what the issue is. Where do you see it?
That is what you get if you have told GitHub to 'keep private email'. When someone goes to merge the change, GitHub will remove the email address from the commit and substitute something like the above. There is no existing policy in LLVM on this as far as I'm aware.
I have made my email public. It should no longer be an issue.
Fixes #67761
Calling getDimSize() before checking for 0-ranked tensors throws assertion errors. This PR ensures that the rank is checked first. Or should we throw an error if we have a 0-ranked tensor in a TOSA operation?
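The shape of the fix can be sketched in plain C++ (a hypothetical stand-in, not the actual MLIR patch: `Shape`, `getDimSize`, and `foldReduce` here are illustrative names modeling the MLIR behavior, where `ShapedType::getDimSize()` asserts on an out-of-range axis and a rank-0 tensor has no valid axes at all):

```cpp
#include <cassert>
#include <cstdint>
#include <optional>
#include <vector>

// Stand-in for a ranked tensor type: its shape is a list of dimension
// sizes; a rank-0 tensor has an empty shape.
using Shape = std::vector<int64_t>;

// Mimics ShapedType::getDimSize(): asserts when the axis is out of range,
// which is every axis for a rank-0 tensor.
int64_t getDimSize(const Shape &shape, int64_t axis) {
  assert(axis >= 0 && axis < static_cast<int64_t>(shape.size()) &&
         "invalid dimension index");
  return shape[axis];
}

// Sketch of the fold: for a rank-0 input the reduction is the identity,
// so fold to the input *before* touching getDimSize(); reducing a
// unit-sized dimension is likewise a no-op. std::nullopt means "cannot
// fold, leave the op in place".
std::optional<Shape> foldReduce(const Shape &input, int64_t axis) {
  if (input.empty())                 // rank 0: fold, skip getDimSize()
    return input;
  if (getDimSize(input, axis) == 1)  // unit dimension: fold to input
    return input;
  return std::nullopt;
}
```

Checking `input.empty()` first is the whole fix: without it, `getDimSize(input, axis)` fires the assertion for any rank-0 input before the fold gets a chance to return.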