[Proton][Dialect] Add Initial Frontend and Target Backend Infrastructure For Proton Dialect #5506
Conversation
cc: @fywkevin
include/triton/Conversion/TritonGPUToLLVM/PatternTritonGPUOpToLLVM.h
Outdated
void populateProtonPatterns(TritonGPUTypeConverter &typeConverter,
                            RewritePatternSet &patterns) {
  MLIRContext *context = patterns.getContext();
  // Register a generic 1:1 conversion pattern for the proton record op.
  patterns.add<GenericOpPattern<triton::proton::RecordOp>>(typeConverter,
                                                           context);
}
This can be taken out if we don't consider the frontend changes for now
Maybe we don't need any conversion here anyway because proton's inputs are scalars but not tensors. Could you please double check? @CRobeck
They are scalars, but I could envision a case where we pass in tensors in the future - for example, passing in a Triton tensor object for object-specific tracing. I don't think the conversion hurts anything here, but I acknowledge it could be misleading.
You're right, though, that I can remove this entire function (populateProtonPatterns) and the test still passes.
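For context, a minimal sketch of the alternative being discussed, assuming RecordOp has no tensor operands or results: rather than registering a GenericOpPattern, the op can simply be declared legal on the conversion target so the TritonGPU type conversion passes it through unchanged. The helper name and the proton header path are illustrative assumptions; ConversionTarget::addLegalOp is standard MLIR.
#include "mlir/Transforms/DialectConversion.h"
// The proton dialect header path below is assumed for illustration.
#include "Dialect/Proton/IR/Dialect.h"

// RecordOp carries only scalar/attribute inputs, so no layout conversion is
// needed when moving from the Triton to the TritonGPU dialect.
static void markProtonOpsLegal(mlir::ConversionTarget &target) {
  target.addLegalOp<mlir::triton::proton::RecordOp>();
}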
python/src/ir.cc
Outdated
@@ -1603,6 +1606,10 @@ void init_triton_ir(py::module &&m) {
                 llvm::StringRef(prefix));
             self.create<PrintOp>(prefixAttr, hex, values, isSigned);
           })
.def("create_proton_record", |
Let's take the Python API changes out of this PR. The API isn't settled yet and probably isn't very important in the short term. Users like @pawelszczerbuk can modify the GPU IR to instrument record ops.
This is mostly just to have a template in place for future Proton frontend development to interact with the upper-level Triton Python kernel code. We can always modify the interface later, but how the Proton Python frontend fits together with the Triton IR builder wasn't super intuitive at first, so having this here is helpful for the moment, even if we ultimately change the API. It also gives us some more testing avenues. What do you think?
The concern I have is that we don't have any real functionality associated with these record ops yet. Usually we add a frontend op when at least some functionality is implemented - for example, the recent tl.gather op. Although its initial commit didn't address every problem and concern, frontend users could actually use the op in their kernels rather than just a placeholder. So IMHO it makes more sense to add the op, and mark it as experimental, after the lowering has been implemented.
What about putting things in a dev module, with warnings that they are under development and not intended for use outside the development team? That gives us an easy way to develop and test, where turning them "on" is just a matter of moving them from the dev module to the experimental namespace.
It's much better now!
Let's just move it under proton.language without using the dev class.
Also, I think create_proton_record can be moved to triton_proton.cc.
Updated to be under proton.language.
Also I think create_proton_record can be moved to triton_proton.cc
I think this would cause a lot of code disruption. TritonOpBuilder is a unique_ptr wrapper around the MLIR OpBuilder, so no matter what we'd have to find a way to pass that object to triton_proton.cc, either by moving the definition of TritonOpBuilder out of ir.cc into a header file or through a helper function defined in ir.cc and called from triton_proton.cc, which seems like an equal or greater amount of complexity than the existing code.
Maybe we could just invoke the generic MLIR OpBuilder from triton_proton.cc? But then I think we'd be mixing and matching builder objects in the code generator.
I get it now. Let's move it to the end of the existing op definitions and leave a comment:

// proton ops
.def("create_proton_record",
...

It's a problem for all third-party backends that they cannot define custom ops. We will have to think about a better solution.
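For illustration, a hedged sketch of that placement in ir.cc, with the proton binding as the last .def and flagged by a comment. The Python-side class name ("builder"), the lambda parameters (regionId, isStart), and the RecordOp builder signature are assumptions, not the exact interface from this PR.
// At the end of the TritonOpBuilder bindings in init_triton_ir:
py::class_<TritonOpBuilder>(m, "builder", py::module_local())
    // ... existing create_* bindings stay above ...
    // proton ops
    .def("create_proton_record",
         [](TritonOpBuilder &self, int32_t regionId, bool isStart) {
           self.create<mlir::triton::proton::RecordOp>(isStart, regionId);
         });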
Updated.
It's a problem for all third party backends that they cannot define custom ops. We will have to think about a better solution.
Right, there has to be an upper-level Triton op that the backends can specialize, but I could not find a way to define a wholly custom third-party backend op. Given that TritonOpBuilder is defined in ir.cc, the only solution I see is to pull that class out into a standalone module that can be passed to each backend, so that they all operate on the same MLIR builder instance.
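As a thought experiment, a minimal sketch of that idea, assuming a new shared header (name and path hypothetical) that exposes the builder to backend modules such as triton_proton.cc; the real TritonOpBuilder in ir.cc carries more state and methods than shown here.
// python/src/TritonOpBuilder.h (hypothetical location)
#pragma once
#include <memory>
#include <utility>
#include "mlir/IR/Builders.h"

class TritonOpBuilder {
public:
  explicit TritonOpBuilder(mlir::MLIRContext *context)
      : builder(std::make_unique<mlir::OpBuilder>(context)),
        lastLoc(mlir::UnknownLoc::get(context)) {}

  // The same create<OpTy>(...) entry point ir.cc uses today, so every backend
  // module would construct ops through one shared builder and location.
  template <typename OpTy, typename... Args> OpTy create(Args &&...args) {
    return builder->create<OpTy>(lastLoc, std::forward<Args>(args)...);
  }

private:
  std::unique_ptr<mlir::OpBuilder> builder;
  mlir::Location lastLoc;
};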
It mostly looks good to me now. Do you want to postpone merging into main until Yuanwei has taken a look?
Yes, that's fine. I don't think there should be anything too controversial.
Implements the initial basic infrastructure for the Proton Dialect added in #5119.
This PR extends the initial boilerplate MLIR dialect code to the Triton frontend and target backends - the ops are currently just lowered to a no-op.
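As a rough guide to what "lowered to a no-op" can look like on the backend side, here is a minimal sketch of a TritonGPU-to-LLVM conversion pattern that simply erases the record op; the struct name and header paths are assumptions, while ConvertOpToLLVMPattern and eraseOp are standard MLIR.
#include "mlir/Conversion/LLVMCommon/Pattern.h"
// The proton dialect header path below is assumed for illustration.
#include "Dialect/Proton/IR/Dialect.h"

namespace {
// Matches proton.record during LLVM lowering and erases it, so the
// instrumentation op emits no instructions until a real lowering exists.
struct RecordOpNoOpConversion
    : public mlir::ConvertOpToLLVMPattern<mlir::triton::proton::RecordOp> {
  using ConvertOpToLLVMPattern::ConvertOpToLLVMPattern;

  mlir::LogicalResult
  matchAndRewrite(mlir::triton::proton::RecordOp op, OpAdaptor adaptor,
                  mlir::ConversionPatternRewriter &rewriter) const override {
    rewriter.eraseOp(op);
    return mlir::success();
  }
};
} // namespace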