Mark the setup as flaky #135
Comments
Sorry, this isn't supported out of the box, but it could be possible. Can you give some more details about your test setup?
Hi, in these Selenium tests of mine I am testing non-public, password-protected webpages. In order to create meaningful test cases, I absorbed the login and the opening of the initial page into the startup of the tests. However, the startup can fail in many ways:
And when such an error happens in the setup phase of the test, the test is not repeated, even when the 'flaky' decorator is used. For these cases it would be good to have the tests repeated as well.
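One workaround while setup-level retries are unsupported is to retry the fragile step inside the setup code itself. A minimal, self-contained sketch of such a retry helper (all names here are hypothetical and not part of the flaky plugin):

```python
import functools

def retry(times=3, exceptions=(Exception,)):
    """Retry a callable up to `times` attempts before re-raising."""
    def decorator(func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            last_exc = None
            for _ in range(times):
                try:
                    return func(*args, **kwargs)
                except exceptions as exc:
                    last_exc = exc
            raise last_exc
        return wrapper
    return decorator

# Hypothetical flaky setup step: fails twice, then succeeds.
attempts = {"n": 0}

@retry(times=3, exceptions=(ConnectionError,))
def open_login_page():
    attempts["n"] += 1
    if attempts["n"] < 3:
        raise ConnectionError("page not ready")
    return "logged in"

print(open_login_page())  # "logged in" after two silent retries
```

A fixture in conftest.py could call such a wrapped function so that transient failures are absorbed before the test body ever runs.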
I have exactly the same use case as alex4200 and have modified the pytest plugin to re-run tests that fail in setup.
I too would very much like to see this feature, or perhaps something very similar. I am currently using pytest to test a piece of hardware, and a lot of my tests initialize the hardware through a particular pytest fixture. If there is a hardware bug, it can manifest in many test cases and produce an error that is completely unrelated to what the test case is actually checking.

What I am trying to do with pytest (and what initially brought me to this plugin) is a scheme where all of the tests in my suite that depend on this fixture are run, and if a test fails for a particular reason (determined by a provided function, much like the filter function you support), then the test is rerun a few more times. My case is a bit different because the hardware bug need not manifest only during test setup; it can manifest during the test as well. And because a single bug can affect a lot of tests, I would like to apply the retesting approach across the board, without having to decorate each test/class/module etc. To rephrase: I want to associate this retesting strategy with a fixture as opposed to a test.
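The "provided function" mentioned above corresponds to the plugin's `rerun_filter` hook, which receives the failure's `exc_info` triple and decides whether a rerun is warranted. A hedged sketch of such a predicate (plain Python, no plugin import; the exception type and message are invented for illustration):

```python
# Predicate in the style of flaky's rerun_filter(err, name, test, plugin):
# return True only when the failure looks like the known hardware glitch,
# so only those failures trigger a rerun.
def is_hardware_glitch(err, name, test, plugin):
    exc_type, exc_value, _traceback = err
    return issubclass(exc_type, IOError) and "bus timeout" in str(exc_value)

# Quick check without the plugin: simulate the (type, value, traceback)
# triple that the filter would be handed on failure.
fake_err = (IOError, IOError("bus timeout on channel 2"), None)
print(is_hardware_glitch(fake_err, "test_dac", None, None))  # True
```

With the plugin this would be wired up as `@flaky(max_runs=3, rerun_filter=is_hardware_glitch)`, though per-fixture association is exactly what the plugin does not offer.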
https://github.com/pytest-dev/pytest-rerunfailures works in a very similar way to this plugin, but will rerun a test when the flakiness is caused by one of the test's fixtures.
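For reference, pytest-rerunfailures enables reruns either globally via the `--reruns` command-line flag or per test via its own `flaky` marker; a small sketch:

```python
import pytest

# With pytest-rerunfailures installed, this marker reruns the test
# (including failures raised in its fixtures) up to 3 times,
# pausing 1 second between attempts.
@pytest.mark.flaky(reruns=3, reruns_delay=1)
def test_login():
    assert True
```

Equivalently, `pytest --reruns 3 --reruns-delay 1` applies the same policy to the whole run without touching the tests.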
Can this now be closed thanks to #148?
Fixed in #148
I am running complex Selenium tests in which the setup is sometimes flaky, e.g. when I obtain the webdriver itself.
I tried to mark the respective setup function as flaky, but if an error happens in the setup method (in conftest.py in the py.test framework), the test is not repeated.

Is there a way to mark the setup for a test as 'flaky'?