[Unittests] Added a meta-test for tvm.testing.fixture behavior in case of a broken fixture. #8343

Merged · 1 commit · Jun 30, 2021
37 changes: 36 additions & 1 deletion tests/python/unittest/test_tvm_testing_features.py
@@ -25,7 +25,10 @@
# This file tests features in tvm.testing, such as verifying that
# cached fixtures are run an appropriate number of times. As a
# result, the order of the tests is important. Use of --last-failed
# or --failed-first while debugging this file is not advised.
# or --failed-first while debugging this file is not advised. If
# these tests are distributed/parallelized using pytest-xdist or
# similar, all tests in this file should run sequentially on the same
# node. (See https://stackoverflow.com/a/59504228)


class TestTargetAutoParametrization:
@@ -145,5 +148,37 @@ def test_cached_count(self):
assert self.cached_calls == len(self.param1_vals)


class TestBrokenFixture:
# Tests that use a fixture that throws an exception fail, and are
# marked as setup failures. The tests themselves are never run.
# This behavior should be the same whether or not the fixture
# results are cached.

num_uses_broken_uncached_fixture = 0
num_uses_broken_cached_fixture = 0

@tvm.testing.fixture
def broken_uncached_fixture(self):
raise RuntimeError("Intentionally broken fixture")

@pytest.mark.xfail(True, reason="Broken fixtures should result in a failing setup", strict=True)
def test_uses_broken_uncached_fixture(self, broken_uncached_fixture):
type(self).num_uses_broken_uncached_fixture += 1

def test_num_uses_uncached(self):
assert self.num_uses_broken_uncached_fixture == 0

@tvm.testing.fixture(cache_return_value=True)
def broken_cached_fixture(self):
raise RuntimeError("Intentionally broken fixture")

@pytest.mark.xfail(True, reason="Broken fixtures should result in a failing setup", strict=True)
def test_uses_broken_cached_fixture(self, broken_cached_fixture):
type(self).num_uses_broken_cached_fixture += 1

def test_num_uses_cached(self):
Contributor:
if we parallelize testing, will this inter-test dependency work?

Contributor (Author):
Currently, it will not. We'd need to also add a call to pytest_xdist_make_scheduler (see the example in https://stackoverflow.com/a/59504228) in order to force these tests to be run in order on a single node.
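
For reference, a minimal conftest.py sketch of that approach (not part of this PR), following the linked Stack Overflow answer. It assumes pytest-xdist is installed; the FileScopeScheduling name is illustrative:

# Hypothetical conftest.py: schedule all tests from the same file onto a
# single pytest-xdist worker, so that order-dependent tests such as those
# in test_tvm_testing_features.py run sequentially.
from xdist.scheduler import LoadScopeScheduling


class FileScopeScheduling(LoadScopeScheduling):
    def _split_scope(self, nodeid):
        # A nodeid looks like "tests/.../test_file.py::TestClass::test_name".
        # Grouping on everything before the first "::" makes the file itself
        # the scheduling unit, so a whole file lands on one worker.
        return nodeid.split("::", 1)[0]


def pytest_xdist_make_scheduler(config, log):
    # pytest-xdist hook point: return a custom scheduler instance.
    return FileScopeScheduling(config, log)

Recent pytest-xdist releases also expose this grouping directly via --dist loadfile, which avoids the custom scheduler entirely.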

Contributor (Author):
As mentioned in our conversation yesterday, I've added a comment regarding pytest-xdist and the link to the above stackoverflow post to this PR, so that it will be easier to find if/when we parallelize the testing.

Contributor:
this seems like a reasonably easy thing to do. thanks @Lunderberg !

assert self.num_uses_broken_cached_fixture == 0


if __name__ == "__main__":
sys.exit(pytest.main(sys.argv))