[Mlir] --linalg-specialize-generic-ops crashes in Casting.h:566 #122094
@llvm/issue-subscribers-mlir Author: None (Emilyaxe)
git version: e4372c4
system: Ubuntu 18.04.6 LTS
reproduce with: mlir-opt a.mlir --linalg-specialize-generic-ops
a.mlir:
stack trace:
@Emilyaxe: Please do not forget to add labels.
ok, I will add labels!
@llvm/issue-subscribers-mlir-linalg Author: None (Emilyaxe)
A smaller example is (sorry if this can be made smaller):

#map0 = affine_map<(i, j) -> (i, j)>
#map1 = affine_map<(i, j) -> ()>
module {
  func.func @test() -> tensor<2x2xi32> {
    // Two 2x2 tensor inputs.
    %lhs = arith.constant dense<2> : tensor<2x2xi32>
    %rhs = arith.constant dense<3> : tensor<2x2xi32>
    // A scalar i32 input (the problem input).
    %c0 = arith.constant 42 : i32
    %init = tensor.empty() : tensor<2x2xi32>
    %res = linalg.generic
        { indexing_maps = [#map0, #map0, #map1, #map0],
          iterator_types = ["parallel", "parallel"] }
        // pass in scalar here
        ins(%lhs, %rhs, %c0 : tensor<2x2xi32>, tensor<2x2xi32>, i32)
        outs(%init : tensor<2x2xi32>) {
    ^bb0(%x: i32, %y: i32, %scalar: i32, %out: i32):
      %add = arith.addi %x, %y : i32
      %sum = arith.addi %add, %scalar : i32
      linalg.yield %sum : i32
    } -> tensor<2x2xi32>
    // Return the computed tensor.
    return %res : tensor<2x2xi32>
  }
}

The issue here being that the scalar %c0 is passed directly as a plain i32 operand, which linalg.generic accepts. If the scalar is instead wrapped in a rank-0 tensor, the pass no longer crashes:

#map0 = affine_map<(i, j) -> (i, j)>
#map1 = affine_map<(i, j) -> ()>
module {
  func.func @test() -> tensor<2x2xi32> {
    // Two 2x2 tensor inputs.
    %lhs = arith.constant dense<2> : tensor<2x2xi32>
    %rhs = arith.constant dense<3> : tensor<2x2xi32>
    // A scalar i32 input.
    %c0 = arith.constant 42 : i32
    // Wrap the scalar in a rank-0 tensor.
    %ct0 = tensor.from_elements %c0 : tensor<i32>
    // ... linalg.generic as above, except the third input is now a tensor:
    ins(%lhs, %rhs, %ct0 : tensor<2x2xi32>, tensor<2x2xi32>, tensor<i32>)
    // ...
  }
}

I was having a look through the code, and this is what validates the operands:

// LinalgInterfaces.cpp
for (OpOperand &opOperand : linalgOp->getOpOperands()) {
  AffineMap indexingMap = linalgOp.getMatchingIndexingMap(&opOperand);
  // Symbols disallowed.
  if (indexingMap.getNumSymbols() != 0)
    return op->emitOpError("unexpected symbols in indexing_map #")
           << opOperand.getOperandNumber();
  // Domain must be consistent.
  unsigned numLoops = linalgOp.getNumLoops();
  if (indexingMap.getNumDims() != numLoops)
    return op->emitOpError("expected indexing_map #")
           << opOperand.getOperandNumber() << " to have " << numLoops
           << " dim(s) to match the number of loops";
  int64_t rank = linalgOp.getRank(&opOperand);
  if (indexingMap.getNumResults() != rank)
    return op->emitOpError("expected operand rank (")
           << rank << ") to match the result rank of indexing_map #"
           << opOperand.getOperandNumber() << " ("
           << indexingMap.getNumResults() << ")";
}

The issue is here:

// DecomposeGenericByUnfoldingPermutation.cpp
if (llvm::any_of(op->getOpOperands(), [](OpOperand &oper) {
      auto opType = cast<RankedTensorType>(oper.get().getType());
      return ShapedType::isDynamicShape(opType.getShape());
    }))
  return failure();

It attempts to cast every operand's type to RankedTensorType, and cast asserts when the operand is the plain i32 scalar; that is the assertion in Casting.h:566. I am new to MLIR so not sure what the best course of action would be, but adding the check where the other operand checks are done, in LinalgInterfaces.cpp, might be one option.
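For illustration, here is a minimal standalone sketch of the casting contract behind that assertion (this is not code from the LLVM sources; the main harness is just for demonstration):

```cpp
#include "mlir/IR/Builders.h"
#include "mlir/IR/BuiltinTypes.h"
#include "mlir/IR/MLIRContext.h"

using namespace mlir;

int main() {
  MLIRContext ctx;
  Builder b(&ctx);

  // The problem operand's type: a plain scalar, not a tensor.
  Type scalar = b.getI32Type();

  // dyn_cast returns a null RankedTensorType on mismatch, which callers
  // can test and turn into a graceful failure.
  if (auto ranked = dyn_cast<RankedTensorType>(scalar))
    (void)ranked; // never reached for an i32

  // cast, by contrast, asserts "Invalid cast!" on mismatch -- the
  // assertion the pass trips over when it meets the scalar operand:
  // auto ranked = cast<RankedTensorType>(scalar); // would abort here
  return 0;
}
```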
I think we should add an extra check for RankedTensorType operands. CC @javedabsar1
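As a sketch, such a check could look roughly like this, assuming it runs before the cast in the decomposition pattern (the helper name and placement are hypothetical, not the actual patch):

```cpp
#include "mlir/Dialect/Linalg/IR/Linalg.h"
#include "mlir/IR/BuiltinTypes.h"
#include "mlir/Support/LogicalResult.h"

using namespace mlir;

// Hypothetical guard: reject the op unless every operand is a statically
// shaped RankedTensorType, so cast<RankedTensorType> can no longer assert.
static LogicalResult checkOperandsAreRankedTensors(linalg::GenericOp op) {
  for (OpOperand &opOperand : op->getOpOperands()) {
    auto tensorType = dyn_cast<RankedTensorType>(opOperand.get().getType());
    if (!tensorType)
      return failure(); // e.g. the plain i32 scalar from the repro
    if (ShapedType::isDynamicShape(tensorType.getShape()))
      return failure(); // dynamic shapes are already rejected by the pass
  }
  return success();
}
```

With a guard like this running first, a scalar operand makes the pattern report a match failure instead of hitting the assertion.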
@CoTinker could you assign this to me, and I will add the check?