I frequently have to xfail one case out of a whole bunch of parametrized test cases, e.g.:
import numpy as np
import pandas as pd
import pytest

@pytest.mark.parametrize('box', [pd.Index, pd.Series, pd.DataFrame])
@pytest.mark.parametrize('other', [3.14, np.float64(3.14), np.array([3.14])])
@pytest.mark.parametrize('moon', ["Full", "New", "Blue"])
def test_thing(box, other, moon):
    if box is pd.DataFrame and isinstance(other, np.ndarray) and moon == "Blue":
        pytest.xfail(reason="...")
I would like to be able to make this xfail "strict". AFAIK the only way to do this would be to explicitly write out all the parameter combinations and xfail the appropriate one:
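For illustration, a rough sketch of what that explicit spelling-out might look like, using pytest.param with an xfail mark on the single failing combination (the other passing combinations are elided; parameter values are the ones from the example above):

@pytest.mark.parametrize('box,other,moon', [
    (pd.Index, 3.14, "Full"),
    (pd.Index, 3.14, "New"),
    # ... all the other passing combinations ...
    pytest.param(pd.DataFrame, np.array([3.14]), "Blue",
                 marks=pytest.mark.xfail(reason="...", strict=True)),
])
def test_thing(box, other, moon):
    ...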
But that would get really verbose and defeat the point of a really nice pytest feature.
Would it be feasible to make pytest.xfail(reason="...", strict=True, raises=ValueError) not immediately raise, but instead continue the test and behave as if the xfail had been applied from the outside (i.e. via the @pytest.mark.xfail decorator)? (Is the idea clear?)
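To make the request concrete, the hoped-for usage inside the test above would look roughly like this (hypothetical: strict and raises are not currently accepted by pytest.xfail, and today the call stops the test immediately):

def test_thing(box, other, moon):
    if box is pd.DataFrame and isinstance(other, np.ndarray) and moon == "Blue":
        # proposed: keep running the test, but report it as a strict xfail
        # that must fail with ValueError
        pytest.xfail(reason="...", strict=True, raises=ValueError)
    ...  # rest of the test body runs as normal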