
add pass rate option to pytest #654

Closed
pytestbot opened this issue Jan 7, 2015 · 4 comments
Labels
type: proposal proposal for a new feature, often to gather opinions or design the API around the new feature

Comments

@pytestbot
Contributor

Originally reported by: Sorin Sbarnea (BitBucket: sorin, GitHub: sorin)


When you are doing testing at a big scale, and more than just unit testing (integration testing, for example), you will definitely end up with tests that have been broken for a long time and tests that fail almost randomly (flaky), and you will probably never be able to obtain a 100% pass rate.

This means that pytest will always fail and you will not be able to pass this stage.

As a side effect, the development team will give less importance to the results, as the build is always red.

If we had a pass rate option, the run could be reported as a success even if there are some failures.

This would allow you to start the CI process with a low pass rate and slowly increase it over time.

It seems that currently pytest does not allow this, or I just don't know how to obtain this behaviour.


@pytestbot
Contributor Author

Original comment by Sorin Sbarnea (BitBucket: sorin, GitHub: sorin):


I forgot to mention that this would be a requirement for test-driven development, where you may write a test that replicates a bug, so it will fail until the bug is fixed. As you know, some bugs take a long time to fix, and others never get fixed.

@pytestbot
Contributor Author

Original comment by Floris Bruynooghe (BitBucket: flub, GitHub: flub):


Hi,

I think you are after the expected-failures mechanism; it allows you to mark tests you expect to fail. Whether they then fail or pass does not make the entire test suite fail. You can still get a test summary about them by using the -rxX options. See http://pytest.org/latest/skipping.html for how to mark tests as xfail.
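For illustration (an editorial example, not part of the original comment), marking a test as an expected failure looks like this; buggy_function is a stand-in for the real code under test:

```python
import pytest

def buggy_function():
    # Stand-in for the real code under test, which still has the bug.
    return 41

@pytest.mark.xfail(reason="known bug, not fixed yet")
def test_known_broken_behaviour():
    # Reported as "xfail" instead of failing the whole suite.
    assert buggy_function() == 42
```

Running py.test -rxX then lists the xfailed and xpassed tests in the summary without turning the run red.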

@pytestbot
Contributor Author

Original comment by holger krekel (BitBucket: hpk42, GitHub: hpk42):


I agree with @flub and add that it might be useful to do a PR to allow for "external" marking, i.e. not modifying source code to apply a marker but rather specifying a file listing the "xfailing" tests. I guess it's even doable as a plugin. If you want to head for that, drop a note and we'll see if we can help along with writing it.
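A rough sketch of that external-marking idea, added here for illustration: a local conftest.py plugin could read test node ids from a plain-text file (the file name xfail_tests.txt is an assumption for this sketch) and apply the xfail marker at collection time, so the test source stays untouched:

```python
# conftest.py -- sketch of "external" xfail marking: node ids of expected
# failures live in a plain-text file, one per line, instead of markers in
# the test source. The file name is an assumption for this example.
import os
import pytest

XFAIL_FILE = "xfail_tests.txt"

def pytest_collection_modifyitems(config, items):
    if not os.path.exists(XFAIL_FILE):
        return
    with open(XFAIL_FILE) as f:
        listed = {line.strip() for line in f if line.strip()}
    for item in items:
        if item.nodeid in listed:
            item.add_marker(pytest.mark.xfail(reason="listed in %s" % XFAIL_FILE))
```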

@pytestbot
Contributor Author

Original comment by Sorin Sbarnea (BitBucket: sorin, GitHub: sorin):


While this is not urgent and I would not mind writing a plugin (if possible), I would like to state that this is about having a general "pass rate" value. Let me explain how I see it working:

--pass-rate=0.95  # defaults to 1.00 (100%); if set lower, py.test returns success when the ratio of passed tests is at or above this value.

Example: you have 100 tests and 3 fail. By default (pass rate = 100%) pytest should return failure (exit code != 0). With a pass rate of 0.95 (95%) the run passes, because 100 tests with 3 failures means a 97% pass rate.

The most important thing is that, when used, this must modify the exit code returned by the py.test run. All the CI tools rely on getting a success exit code (0); that's the catch.

I know that this could be implemented using a wrapper that analyses the results, but that would be quite ugly and would break execution of py.test in many scenarios.

This has nothing to do with a specific test or a specific suite; it is only about the global run. It is very similar to coverage, where a minimum-threshold concept is used.

I know that for smaller projects, considering it acceptable to fail a percentage of the tests may sound like bad practice, but once you scale up you do have to accept something like this.
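For reference, here is a minimal sketch of how the proposed option could be prototyped today as a local plugin. It is an illustration of the proposal, not an existing pytest option, and it assumes a pytest version where the session-finish hook is allowed to overwrite session.exitstatus:

```python
# conftest.py -- minimal sketch of the proposed --pass-rate option as a
# local plugin. Assumes pytest_sessionfinish may overwrite
# session.exitstatus; this is not something pytest ships with.

def pytest_addoption(parser):
    parser.addoption(
        "--pass-rate",
        action="store",
        type=float,
        default=1.0,
        help="minimum ratio of passed tests (0.0-1.0) required to exit with 0",
    )

def pytest_sessionfinish(session, exitstatus):
    required = session.config.getoption("--pass-rate")
    collected = session.testscollected
    if collected == 0:
        return
    pass_ratio = (collected - session.testsfailed) / float(collected)
    if pass_ratio >= required:
        # Report success to the CI system even though some tests failed.
        session.exitstatus = 0
```

With 100 collected tests and 3 failures, py.test --pass-rate=0.95 would then exit with code 0 (97% >= 95%), while the default of 1.0 keeps the usual non-zero exit code.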

@pytestbot pytestbot added the type: proposal proposal for a new feature, often to gather opinions or design the API around the new feature label Jun 15, 2015