Make inference test for chlo.broadcast_select_reify less brittle #1874

Closed

Conversation

@mlevesquedion (Contributor) commented Dec 7, 2023

In llvm/llvm-project#74438, the folder for `shape.shape_of` is changed to a canonicalizer. This means that constant shapes no longer get folded automatically (`--canonicalize` must be used). This will cause a test failure when we do the next LLVM integrate, because the `broadcast_select_reify` test expects the `shape.shape_of` operation to be folded into `shape.const_shape`. The test also expects the constant shape value to be pushed to the rightmost arg of the `shape.broadcast` operation, which will not be the case if canonicalization is not applied.

Additional context:
- The old folder for `shape.shape_of` returned its input shape as a
  tensor attribute, so it would [automatically get materialized](https://mlir.llvm.org/docs/Canonicalization/#generating-constants-from-attributes) [to a `shape.const_shape` op](https://github.com/llvm/llvm-project/blob/98d8dce6e9e21a995f6a06fa4485fa529931be37/mlir/lib/Dialect/Shape/IR/Shape.cpp#L154-L156).
- The new canonicalizer does the materialization explicitly.
- [BroadcastOp is Commutative](https://github.com/llvm/llvm-project/blob/7ddd3d776402f9cc7d5f13b5940ba38a285223c2/mlir/include/mlir/Dialect/Shape/IR/ShapeOps.td#L57) and [ConstShape is ConstantLike](https://github.com/llvm/llvm-project/blob/7ddd3d776402f9cc7d5f13b5940ba38a285223c2/mlir/include/mlir/Dialect/Shape/IR/ShapeOps.td#L105), so
  if `shape.shape_of` is folded to `shape.const_shape`, the resulting
  value becomes the rightmost argument to `shape.broadcast`. Indeed,
  according to the docs [constant arguments of commutative ops are
  shifted to the
  right](https://mlir.llvm.org/docs/Canonicalization/#globally-applied-rules:~:text=Move%20constant%20operands%20to%20commutative%20operators%20to%20the%20right%20side), and this is implemented [here](https://github.com/llvm/llvm-project/blob/7ddd3d776402f9cc7d5f13b5940ba38a285223c2/mlir/lib/IR/Operation.cpp#L802).
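
Concretely, here is a minimal sketch of the behavior described above. The SSA names, tensor types, and the `%other` operand are hypothetical, for illustration only:

  // Before canonicalization: the shape of a statically shaped value is
  // still computed by an explicit shape.shape_of op.
  %s = shape.shape_of %arg1 : tensor<4xf32> -> tensor<1xindex>
  %r = shape.broadcast %s, %other : tensor<1xindex>, tensor<1xindex> -> tensor<1xindex>

  // After --canonicalize: the shape is materialized as a shape.const_shape,
  // and because shape.broadcast is Commutative and shape.const_shape is
  // ConstantLike, the constant operand is moved to the rightmost position.
  %c = shape.const_shape [4] : tensor<1xindex>
  %r2 = shape.broadcast %other, %c : tensor<1xindex>, tensor<1xindex> -> tensor<1xindex>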
The PR replaces the order-sensitive check

  // CHECK-NEXT: %3 = shape.broadcast %1, %2, %0 : tensor<1xindex>, tensor<1xindex>, tensor<1xindex> -> tensor<1xindex>

with order-insensitive CHECK-DAG lines:

  // CHECK-DAG: %[[ARG0_S:.+]] = shape
  // CHECK-DAG: %[[ARG1_S:.+]] = shape
  // CHECK-DAG: %[[ARG2_S:.+]] = shape
Member
Let's make sure this test captures the important pieces of `computeNaryElementwiseBroadcastingResultExtents`, namely: one `shape_of` op per operand, and that the rank of the `shape.broadcast` op is as expected.

With that said, I'd suggest we modify these tests to look for:

  // CHECK:      %[[ARG0_S:.+]] = shape.shape_of %arg0
  // CHECK-NEXT: %[[ARG1_S:.+]] = shape.shape_of %arg1
  // CHECK-NEXT: %[[ARG2_S:.+]] = shape.shape_of %arg2
  // CHECK-NEXT: %3 = shape.broadcast %[[ARG0_S]], %[[ARG1_S]], %[[ARG2_S]] : {{.*}} -> tensor<1xindex>
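
For reference, a sketch of the reified IR these checks are meant to match (one `shape_of` per operand of `chlo.broadcast_select`, followed by a single `shape.broadcast`; the argument and element types here are assumptions, not copied from the actual test):

  %0 = shape.shape_of %arg0 : tensor<1xi1> -> tensor<1xindex>
  %1 = shape.shape_of %arg1 : tensor<1xf32> -> tensor<1xindex>
  %2 = shape.shape_of %arg2 : tensor<1xf32> -> tensor<1xindex>
  %3 = shape.broadcast %0, %1, %2 : tensor<1xindex>, tensor<1xindex>, tensor<1xindex> -> tensor<1xindex>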

Contributor Author

That sounds fine to me, but I'm worried that if they change the way folding happens again, we will need to change this again. Currently, I can't apply this change because of the folding.

I was trying to test the rank through the return value, but I agree it's more direct to do it on the broadcast.

Member

Ah. I see what you're saying now. Hmm, this is because we use `return builder.createOrFold<shape::ShapeOfOp>(loc, v);` instead of just `create`. Meaning that, as written, the fold semantics of the shape op should unfortunately be a part of this test. I.e., if the fold changed in a way that broke this lowering, we'd want to know.
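
To make the difference concrete, here is a sketch of the two outcomes (operand names and types are hypothetical):

  // Old behavior: createOrFold applied ShapeOfOp's fold, so for a
  // statically shaped operand the builder emitted the constant directly.
  %folded = shape.const_shape [1] : tensor<1xindex>

  // New behavior (the fold moved to a canonicalizer): createOrFold has
  // nothing to fold, so the shape_of op itself is materialized.
  %unfolded = shape.shape_of %arg0 : tensor<1xf32> -> tensor<1xindex>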

Contributor Author

Gotcha. This will happen in the next LLVM integrate: they removed the folding, so we will get `shape_of` now. Should I just drop this change, then? (Or maybe we can keep it open until the integrate, in case we need to refer to it.)

mlevesquedion pushed a commit to mlevesquedion/stablehlo that referenced this pull request Dec 11, 2023
This includes a fix to the `infer_chlo.mlir` test, as this test is
affected by folding behavior of the `shape_of` op, and this behavior was
recently changed: openxla#1874
mlevesquedion pushed a commit that referenced this pull request Dec 11, 2023
This includes a fix to the `infer_chlo.mlir` test, as this test is
affected by folding behavior of the `shape_of` op, and this behavior was
recently changed (see #1874 for
details).
@mlevesquedion (Contributor Author)

This was superseded by #1880.

@mlevesquedion deleted the update-test-not-require-fold branch December 11, 2023 23:30