Losing coverage of Hypothesis-strategy-called code #317
Comments
Does the coverage problem have any relation to this ticket? As to whether Hypothesis is OK for coverage testing, I assume you mean that it's possible Hypothesis will not generate test data that exercises certain code paths. If that's the case, it's possible the unit tests aren't specific enough, but you can also ensure that some data of the sort you need is always included.
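The mechanism alluded to here is presumably Hypothesis's `@example` decorator (an assumption on my part; the comment is truncated), which guarantees a specific input is always tried in addition to whatever `@given` generates. A minimal sketch:

```python
# Sketch: pinning a known-important input with Hypothesis's @example
# decorator, so the code path it exercises is always hit regardless of
# what @given happens to generate.
from hypothesis import example, given, settings, strategies as st


@settings(max_examples=10)
@given(st.integers())
@example(0)  # 0 is always tried, guaranteeing the x == 0 branch runs
def test_sign(x):
    sign = (x > 0) - (x < 0)
    if x == 0:
        assert sign == 0
    else:
        assert sign in (-1, 1)
```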
@wsanchez, I don't think it's directly related. In this case I specifically observed that code called by a strategy (or while drawing from a strategy?) did not trigger coverage, but code called from inside a test did count towards coverage. The commit I linked (altendky/attrs@da4512b) had full coverage both before and after; the non-Hypothesis test accounted for it. If I remember correctly, it was @DRMacIver and @hynek who expressed some amount of uncertainty about using Hypothesis for coverage. My expectation was like yours, I think: pick strategies that are guaranteed to achieve coverage. If at some point you lose coverage on a test run triggered by an unrelated commit, you will get a surprising coverage error and will have to diagnose and correct it. It seems that will just be the way it goes unless you fully avoid recording any Hypothesis-test-called coverage info.
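For concreteness, here is a toy illustration (not from the attrs suite) of the distinction being described: helper code invoked while drawing from a strategy versus the same helper invoked inside the test body. The `build_record` helper is hypothetical.

```python
from hypothesis import given, settings, strategies as st


def build_record(value):
    # Under Hypothesis 3.29-3.70 this ran during strategy drawing, so
    # coverage tools did not record it; called from a test body, it was
    # recorded normally.
    return {"value": value}


# Helper exercised while drawing from the strategy:
records = st.integers().map(build_record)


@settings(max_examples=10)
@given(records)
def test_drawn_record(record):
    assert "value" in record


# Same helper exercised inside the test body instead:
@settings(max_examples=10)
@given(st.integers())
def test_built_in_test(value):
    record = build_record(value)
    assert record["value"] == value
```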
We ended up removing the coverage features in Hypothesis 3.71 (HypothesisWorks/hypothesis#1564) because they were fairly brittle and did not interact well with the rest of the ecosystem - and with only a hundred or so runs, targeted exploration is outperformed by faster execution.
Is this still an issue given the prior comment?
I'll have to read some more later; I'm not clear on what was removed. I can't say I know for sure, but my impression was that Hypothesis added a thing that explicitly ended up blocking coverage from working soon after 3.28.3... I guess a closure of this ticket (if/when it happens) should maybe include an explicit statement of what we can expect.
OK, swapped Bug for Thinking. |
Hypothesis 3.29 added some coverage-based searching, which turned out to be a net loss partly on the grounds that it interfered with test suite coverage measurement. It was therefore removed in 3.71 more than a year ago, and will not return. So unless you're pinning to a very old version (please don't!), this issue can just be closed. |
Ok, I read through https://hypothesis.readthedocs.io/en/latest/changes.html#v3-29-0 now. So I guess the loss of coverage reporting for code exercised by drawing from a strategy was a side effect of that effort?
Because of this it seems it might be useful to have an explicit decision about what is wanted. I (and, I think, @wsanchez) are both fine with Hypothesis-triggered code (outside the body of a test function) being considered part of the coverage results. We explicitly design and pick the strategies to achieve functionality coverage; I'm not sure why code coverage has a higher bar. But @hynek seemed to have a different opinion, and it seems that what we can expect to get from Hypothesis now is not in line with @hynek's preference (per my recollection of and notes about a discussion from two-ish years ago, so... yeah). The coverage loss was addressed by adding a test. I suppose it could be removed now, but both of those points are sort of separate from this issue. This issue is to decide what we actually want.
Hm I think we'll just roll with the punches. I'm trying to test nothing by accident but if the coverage ever drops under 100% we'll cope with it. |
Fun fact: I just had a coverage fail on our ReadOnlyDict which has zero tests as I've noticed, so there's my next todo item. 🤪 |
Worth noting that Hypothesis no longer messes with coverage information, so this is unrelated to the original loss of coverage on Hypothesis-based tests - you may well have been getting that coverage from usage in strategies.
Ha yes totally. I forgot there was more going on. I had just this dim memory that there was something about coverage from hypothesis. |
This is a followup from discussions in #280.
As of Hypothesis 3.29.0 in September, coverage data is no longer collected for code called from strategies. It seems to have changed in the first commit or two after 3.28.3 (HypothesisWorks/hypothesis@6e7a478 HypothesisWorks/hypothesis@d75ac34). This was done intentionally to reduce the run time of tests.
This was identified after adjusting the handling of unspecified metadata in `attr.ib()` (#278). The tests had no coverage reported for the case of metadata being specified, despite that being the case many times during a test run (hundreds?). A little test was added to satisfy the coverage report:

attrs/tests/test_make.py, lines 848 to 858 in b3861d1
This does the job, so to speak, but it seems a bit odd to have hundreds of instances where metadata is being specified and only one reporting coverage. In no particular order, here are a few options.

- Have the strategies provide parameters for `attr.ib()` calls instead of having them actually make the `attr.ib()` calls as they do now.

There is some concern about using Hypothesis-driven testing for coverage reports, since Hypothesis can be inconsistent (I think that is what has been said). I am not sure why it's OK to trust that Hypothesis will be consistent enough for functionality checks but not for coverage checks. With this issue being my first involvement with Hypothesis, perhaps I'm missing some of the implications and techniques of using it.