[BYOC-DNNL] Support DNNL optimal layout #10421
Conversation
// infer weight's shape for group convolution
wshape = {{param->groups, indexdiv(param->channels, param->groups),
           indexdiv(dshape_nchw[1], param->groups), param->kernel_size[0],
           param->kernel_size[1]}};
I'm pretty sure we already support group convolution, and this one looks unusual. If this is DNNL specific, can you come up with a better name than `is_group`?
DNNL prefers the `GOIHWxg` layout for group conv weights. It seems that the layout cannot be transformed from `OIHW` to `GOIHWxg` directly, because the dims of these two layouts do not match. Has this kind of transformation already been supported? If not, how about using `is_dnnl_group_conv` instead of `is_group`?
I don't think such a transform is possible right now. Yeah, `is_dnnl_group_conv` sounds better.
OK, I will change the name :)
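For context on the layout-mismatch point above, here is a minimal NumPy sketch (the shapes are made up for illustration, not taken from the PR) of why `OIHW` weights cannot be converted to a grouped layout like `GOIHWxg` by a plain layout transform: the group dimension has to be split out of the output-channel dimension, which changes the tensor's rank, so a reshape-style step is needed first.

```python
import numpy as np

# Hypothetical group conv: groups=2, out_channels=16, in_channels=8, 3x3 kernel.
groups, out_ch, in_ch, kh, kw = 2, 16, 8, 3, 3

# OIHW weights are 4-D: (O, I/groups, H, W) = (16, 4, 3, 3).
w_oihw = np.random.randn(out_ch, in_ch // groups, kh, kw).astype("float32")

# GOIHW is 5-D: (G, O/G, I/G, H, W) = (2, 8, 4, 3, 3). A pure layout transform
# cannot map 4-D to 5-D, so the group dim is split out with a reshape instead.
w_goihw = w_oihw.reshape(groups, out_ch // groups, in_ch // groups, kh, kw)

print(w_oihw.shape, "->", w_goihw.shape)  # (16, 4, 3, 3) -> (2, 8, 4, 3, 3)
```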
* enable dnnl optimal layout for supported ops
* verified cv models with onednn v1.7
* rebase to the latest main branch
* fix format related comments
* remove unnecessary layout transformation
* change deconv into conv_transpose
* rename some variables and functions
* simplify query_layout
* add checks for query_layout
* fix lint
* move partition_for_dnnl from dnnl.py to test_dnnl.py
* remove unnecessary model test
* add more dnnl layout
* rename flag in convolution.cc
* enhance dnnl layout
This PR aims to support BYOC-DNNL running in the optimal DNNL layout. Two changes are worth noting:

1. Group convolution weights need to be converted into `GOIHW` first, so that they can run in an optimal DNNL layout such as `HWOIG16g`. Changes in `convolution.cc` are needed to enable group conv to run in the `GOIHW` layout.
2. `get_optimal_layout_for_conv` and `get_optimal_layout_for_deconv` functions are registered in `tvm.relay.contrib` to query the optimal DNNL layout.

The related test cases have been added as well.