[TEST] Skip test_dot fp16 out_dtype test (triton-lang#17)
Jokeren authored Jul 24, 2023
1 parent 0838995 commit ef54be0
Showing 2 changed files with 5 additions and 0 deletions.
1 change: 1 addition & 0 deletions TODO.md
@@ -39,3 +39,4 @@ https://github.com/openai/triton-hopper/blob/1ada046fdaef13f94dc7e2f6e6d0966e5d1
https://github.com/openai/triton-hopper/blob/b6a6b32b0ee79e93247d20c95f15fd75039a40b9/python/triton/compiler/utils.py#L3
* Pipeline doesn't handle block ptrs correctly
* Pipeline doesn't handle TMAs correctly
* `wgmma` doesn't support `out_dtype=f16`
4 changes: 4 additions & 0 deletions python/test/unit/language/test_core.py
@@ -2117,6 +2117,10 @@ def test_dot(M, N, K, num_warps, col_a, col_b, epilogue, allow_tf32, in_dtype, o
    if out_dtype == 'float16':
        # TODO: support out_dtype=float16 for tl.dot on V100
        pytest.skip("Only test out_dtype=float16 on devices with sm >=80")
    if capability[0] == 9 and out_dtype == 'float16':
        # TODO: support out_dtype=float16 for tl.dot on H100
        pytest.skip("Only test out_dtype=float16 on devices with sm<90")


    torch.backends.cuda.matmul.allow_tf32 = allow_tf32

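For context, the lines added in this commit follow the capability-gated skip pattern used throughout test_core.py. Below is a minimal, self-contained sketch of that pattern; it assumes `capability` comes from `torch.cuda.get_device_capability()` (a `(major, minor)` tuple), and the helper name `maybe_skip_fp16_out_dtype` is hypothetical, not part of the actual test file.

import pytest
import torch


def maybe_skip_fp16_out_dtype(out_dtype: str) -> None:
    # Hypothetical helper illustrating the capability-gated skips in the diff above.
    if not torch.cuda.is_available():
        pytest.skip("CUDA device required")
    capability = torch.cuda.get_device_capability()  # e.g. (7, 0) on V100, (9, 0) on H100
    if capability[0] < 8 and out_dtype == 'float16':
        # Assumed gating for the pre-existing V100 skip; in the real test this
        # sits under a broader sm < 80 branch.
        pytest.skip("Only test out_dtype=float16 on devices with sm >= 80")
    if capability[0] == 9 and out_dtype == 'float16':
        # New in this commit: wgmma does not yet support out_dtype=f16 on sm90
        # (see the TODO.md entry added above).
        pytest.skip("Only test out_dtype=float16 on devices with sm < 90")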
