Be more robust in how regression data are discovered #821
Conversation
Avoid creating an empty array when test_open does not find the expected existing array.
Regardless of the test runner working directory, ensure that this test fixture returns the path to the relevant regression data directory, which lives under the project root.
Release notes follow shortly.
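A minimal sketch of the fixture-path change the description implies, assuming the fixture previously built a path relative to the current working directory; the names used here (`regression_data_path`, `data/dim_separator`) are illustrative, not the PR's actual code.

```python
import pathlib

import pytest

# Anchor the regression data to this test module's location so the path
# resolves identically no matter where the test runner is launched from.
DATA_ROOT = pathlib.Path(__file__).resolve().parent / "data" / "dim_separator"


@pytest.fixture
def regression_data_path():
    # Return an absolute path under the project root rather than a
    # cwd-relative one.
    return DATA_ROOT
```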
@joshmoore, does this pass muster, as far as you're concerned? I'm not sure if I have correctly formatted the release notes.
As I'm a first-time contributor, the CI workflows won't run without maintainer approval. Would a kind maintainer mind enabling them?
Done
Looks great!
Codecov Report
@@           Coverage Diff           @@
##           master     #821   +/-   ##
=======================================
  Coverage   99.94%   99.94%
=======================================
  Files          31       31
  Lines       10604    10606    +2
=======================================
+ Hits        10598    10600    +2
  Misses          6        6
I'm obviously unable to claim that all the CI actions have passed, nor that the project test coverage is 100%, but is this necessary here? The diff coverage is 100% and the missing project coverage is in an unrelated part of the codebase.
Nope. This has been blocking my PRs as well. Don't worry. I've been focused on trying to fix conda and haven't had time to fix codecov ;)
Good good. Thanks for the opportunity to tinker!
As an aside, off topic for this PR, I wonder if you could cheat Codecov by replacing the last three entries in the tuple of fixture parameters,

    "fs_nested",
    "fs_flat",
    "fs_default"

with

    needs_fsspec = pytest.mark.skipif(not have_fsspec, reason="needs fsspec")
    ...
    pytest.param("fs_nested", marks=needs_fsspec),
    pytest.param("fs_flat", marks=needs_fsspec),
    pytest.param("fs_default", marks=needs_fsspec)

and then remove the … Of course, it doesn't actually change anything (Codecov would still be right to complain that the CI doesn't test in the absence of fsspec).
;) Glad to have other tinkerers. A heads up that in an effort to not keep abusing releases in order to test conda-forge, I'm running locally with this diff to zarr-feedstock:
Do you have any idea why I'm not seeing the failures locally?
I imagine so. Happy to see that as well, which removes the need for my pragma in #822.
What is your working directory when running Pytest? You will only see the failures on …
Want me to add a PR for that too? |
If you'd like, sure! |
NB: Merging in #822 has gotten this green. 👍 |
Yes. |
I think if you run Pytest from another directory, with something like … instead of …, then you'll start to see the failures.
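To illustrate the mechanism (a hypothetical sketch, since the actual commands aren't shown above): a cwd-relative fixture path only resolves when the test runner is started from the project root.

```python
import os
import pathlib
import tempfile

# Hypothetical cwd-relative path of the kind a fixture might return.
relative = pathlib.Path("zarr/tests/data/dim_separator")

os.chdir(tempfile.gettempdir())   # simulate launching pytest from elsewhere
print(relative.exists())          # False: the regression data are not found
```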
Thanks, @benjaminhwilliams. Looking forward to future PRs. 😉
Be stricter about where and how regression data are discovered during testing. This fixes some tests that are failing during the Conda-forge release process.
As previously mentioned in #819 (comment):
In zarr.tests.test_dim_separator.test_open, open an array in read-only mode, to prevent accidental creation of an empty array when the expected array does not exist, which would leave droppings in the test runner working directory.

TODO:
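As an illustration of the read-only open described above (a sketch, not the PR's actual diff; it only assumes the standard zarr.open behaviour in which mode="r" refuses to create anything, whereas the default mode "a" would create an empty array at a mis-resolved path):

```python
import zarr


def open_existing_array(path):
    # mode="r" opens the array read-only and raises an error if nothing
    # exists at `path`, so a mis-resolved path can no longer leave an empty
    # array behind in the test runner's working directory.
    return zarr.open(path, mode="r")
```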