[AMP] refine AMP and the corresponding tests for bfloat16 #12787
Conversation
@tvm-bot rerun
Thanks for the patch, Youlei! I found a number of statements like `op->dtype.is_float() || op->dtype.is_bfloat16()` in the tvm folder. Shall we simply add a new float type definition in tvm/include/tvm/runtime/data_type.h to eliminate those statements?
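For illustration, a minimal Python sketch of the kind of unified predicate this suggests (the actual proposal targets the C++ `DataType` class in `data_type.h`; the helper name here is hypothetical, not existing TVM API):

```python
def is_floating_point(dtype: str) -> bool:
    """Hypothetical helper: treat IEEE floats and bfloat16 uniformly,
    so call sites no longer need is_float() || is_bfloat16() pairs."""
    return dtype.startswith("float") or dtype == "bfloat16"


# Usage:
# is_floating_point("float32")  -> True
# is_floating_point("bfloat16") -> True
# is_floating_point("int32")    -> False
```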
@billishyahao
@masahi Could you help to review this? Thanks.
* refine AMP for bfloat16
* refine AMP tests to cover bfloat16
* refine accuracy checking for dnnl bf16
This PR fixes issue #12763, where some ops are marked to keep their original dtype even though some of their inputs are `bfloat16`, so the required `Cast` is missing. The AMP tests have also been refined to cover `bfloat16` without accuracy checking.
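As a rough illustration of the missing-`Cast` problem (a sketch of the idea, not the actual pass code; the helper below is hypothetical):

```python
from tvm import relay


def cast_back_if_needed(arg: relay.Expr, arg_dtype: str, original_dtype: str) -> relay.Expr:
    """Hypothetical sketch: when AMP decides an op keeps its original dtype
    but an argument has already been converted to bfloat16, insert the
    missing Cast back to the original dtype."""
    if arg_dtype == "bfloat16" and original_dtype != "bfloat16":
        return relay.cast(arg, original_dtype)
    return arg
```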
Update: the accuracy checking of bf16 vs fp32 in `test_dnnl.py` is unstable and error-prone, so accuracy checking is skipped when only one bf16 result is present, i.e. we only compare bf16 vs bf16 and fp32 vs fp32.
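A minimal sketch of that relaxed comparison policy, assuming results arrive as `(dtype, array)` pairs (the helper name and tolerances are illustrative, not the exact `test_dnnl.py` code):

```python
import itertools

import numpy as np


def check_results(results):
    """Compare runs pairwise, but only within the same precision:
    bf16 vs bf16 and fp32 vs fp32. Cross-precision (bf16 vs fp32)
    comparisons are skipped as unstable and error-prone."""
    # results: list of (dtype, numpy_array) pairs from different runs.
    for (dt_a, a), (dt_b, b) in itertools.combinations(results, 2):
        if dt_a != dt_b:
            continue  # skip the unstable bf16-vs-fp32 comparison
        tol = 1e-2 if dt_a == "bfloat16" else 1e-5
        np.testing.assert_allclose(a, b, rtol=tol, atol=tol)
```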