
[DNNL] Add bfloat16 type support for dnnl conv2d kernel #11902

Merged: 3 commits into apache:main on Jun 28, 2022

Conversation

@Qianshui-Jiang (Contributor):

Added bfloat16 support for the dnnl conv2d kernel in order to coordinate with the bf16/fp32 mixed-precision mode. This is implemented by changing the type of the dnnl primitive descriptor & memory descriptor according to the src/dst data format during the declaration.

The test case is also modified to cover both bf16 & fp32.

@Qianshui-Jiang (Contributor, Author):

Seems CI was blocked. cc @comaniac @masahi, could you please help review this PR?

@driazati (Member):

@tvm-bot rerun

@comaniac (Contributor) left a comment:

LGTM. Just a nit.

(Inline review comment on python/tvm/contrib/dnnl.py, resolved.)
Commit: refine the branches.
Co-authored-by: Cody Yu <[email protected]>
@Qianshui-Jiang (Contributor, Author):

> LGTM. Just a nit.

Thanks a lot!

@comaniac merged commit a063404 into apache:main on Jun 28, 2022.
@comaniac (Contributor):

Thanks @Qianshui-Jiang
