Mark the setup as flaky #135

Closed
alex4200 opened this issue Oct 4, 2018 · 7 comments

Comments

@alex4200 commented Oct 4, 2018

I am running complex Selenium tests in which the setup is sometimes flaky, i.e. when I acquire the webdriver itself.

I tried to mark the respective setup function as flaky, but if an error occurs in the setup method (in conftest.py, using the pytest framework), the test is not repeated.

Is there a way to mark the setup for a test as 'flaky'?
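
For illustration, the situation is roughly the following (a minimal sketch; the fixture name, page URL, and run count are made up, not taken from my real suite):

```python
# conftest.py -- minimal sketch of the setup in question
import pytest
from selenium import webdriver

@pytest.fixture
def driver():
    drv = webdriver.Firefox()  # this call is the intermittently flaky part
    yield drv
    drv.quit()
```

```python
# test_pages.py
from flaky import flaky

@flaky(max_runs=3)
def test_start_page(driver):
    # flaky reruns failures raised inside the test body, but an error raised
    # while the `driver` fixture is being created is not retried.
    driver.get("https://example.com/")
```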

@Jeff-Meadows (Contributor)

Sorry, this isn't supported out of the box, but it could be possible. Can you give some more details about your test setup?

@alex4200 (Author) commented Oct 8, 2018

Hi,

In these Selenium tests I am testing non-public, password-protected web pages. In order to create meaningful test cases, I folded the login and the opening of the initial page into the setup of the tests.

However, this startup can fail in many ways:

  • An intermittent geckodriver error when I try to get the selenium driver
  • An intermittent error when trying to log in
  • An intermittent error when opening the webpage

When such an error happens in the startup phase of the test, the test is not repeated, even though the 'flaky' decorator is used.

For these cases it would be good to have the tests repeated as well.
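
What I currently do is a workaround along these lines (a sketch only, not part of this plugin): retry the fragile steps inside the fixture itself, here with a hypothetical `logged_in_driver` fixture and an arbitrary retry count.

```python
# conftest.py -- retry the fragile setup steps inside the fixture itself
import pytest
from selenium import webdriver
from selenium.common.exceptions import WebDriverException

@pytest.fixture
def logged_in_driver():
    last_error = None
    for attempt in range(3):
        drv = None
        try:
            drv = webdriver.Firefox()              # intermittent geckodriver errors
            drv.get("https://example.com/login")   # intermittent load errors
            # ... perform the login here ...
            break
        except WebDriverException as exc:
            if drv is not None:
                drv.quit()
            last_error = exc
    else:
        raise last_error
    yield drv
    drv.quit()
```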

@JonathanRRogers (Contributor)

I have exactly the same use case as alex4200 and have modified the pytest plugin to re-run tests that fail in setup.

@mentaal commented Sep 16, 2019

I too would very much like to see this feature, or perhaps something very similar. I am currently using pytest to test a piece of hardware, and many of my tests initialize the hardware through a particular pytest fixture. If there is a hardware bug, it can manifest in many test cases and result in an error that is completely unrelated to what those test cases are trying to verify.

What I am trying to do with pytest (which is what initially brought me to this plugin) is to have a scheme where all of the tests in my suite that depend on this fixture are run, and if a test fails for a particular reason (determined by a provided function, much like the rerun filter function this plugin supports), the test is rerun a few more times.

In my case it's a bit different, because the hardware bug need not manifest only during test setup; it could manifest during the test as well. Because a single bug can affect a lot of tests, I would like to apply the retry approach across the board without having to decorate each test, class, or module. To rephrase: I would like to associate this retry strategy with a fixture rather than with a test.
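
One way to express that, sketched below under the assumption that pytest-rerunfailures is installed (its `flaky` marker is used here, and the `hardware` fixture name is made up; none of this is part of this plugin's API), is to attach the rerun marker to every collected test that uses the fixture:

```python
# conftest.py -- attach rerun behaviour to a fixture rather than to each test
import pytest

def pytest_collection_modifyitems(config, items):
    for item in items:
        # Rerun every test that depends on the (hypothetical) "hardware" fixture.
        if "hardware" in getattr(item, "fixturenames", []):
            item.add_marker(pytest.mark.flaky(reruns=3))
```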

@jacebrowning (Contributor)

https://github.com/pytest-dev/pytest-rerunfailures works in a very similar way to this plugin, but will rerun a test when the flakiness is caused by one of the test's fixtures.
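
For example, with pytest-rerunfailures the reruns can be requested per test or globally on the command line (a brief sketch; the test name and rerun counts are arbitrary):

```python
import pytest

# Rerun this test up to 3 times, waiting 2 seconds between attempts;
# with pytest-rerunfailures this also covers failures raised in fixtures.
@pytest.mark.flaky(reruns=3, reruns_delay=2)
def test_start_page(logged_in_driver):
    ...
```

Or for the whole suite: `pytest --reruns 3 --reruns-delay 2`.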

@adamtheturtle

Can this now be closed thanks to #148?

@Jeff-Meadows (Contributor)

Fixed in #148
