tests/periph_timer_short_relative_set results are misleading #18511

Closed
chrysn opened this issue Aug 24, 2022 · 1 comment · Fixed by #18513
Labels
Area: tests, Area: timers, Type: bug

Comments

@chrysn (Member) commented Aug 24, 2022

Description

The test tests/periph_timer_short_relative_set fails on many platforms (microbit-v2, iotlab-m3, samr21), but its README presents these failures as if they were expected, which makes the results misleading.

Steps to reproduce the issue

  • Manage a release, run compile-and-test. (OK, that's optional).
  • Run make -C tests/periph_timer_short_relative_set BOARD=microbit-v2.
  • See it fail.
  • Poke around in the history of the file and look into what it does, without reaching any conclusion (the pattern the test exercises is sketched below this list).
  • Look at the README -- it says that "this will probably underflow", and that "as of this writing", "samr32-xpro fails for values below 8" -- indicating that all this is perfectly normal
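
For reference, this is roughly the pattern the test exercises, sketched against RIOT's periph/timer API; the tick frequency, the timeout range and the busy-wait loop are illustrative assumptions, not the test's actual parameters:

```c
#include <stdio.h>
#include "periph/timer.h"

static volatile int fired;

static void cb(void *arg, int channel)
{
    (void)arg;
    (void)channel;
    fired = 1;
}

int main(void)
{
    /* 1 MHz is an assumed tick frequency, not what the test configures */
    timer_init(TIMER_DEV(0), 1000000, cb, NULL);

    for (unsigned timeout = 1; timeout <= 16; timeout++) {
        fired = 0;
        /* set a very short *relative* timeout -- this is where small values
         * can underflow on many timer peripherals */
        timer_set(TIMER_DEV(0), 0, timeout);
        for (volatile unsigned i = 0; i < 1000000 && !fired; i++) { }
        printf("timeout=%u: %s\n", timeout, fired ? "fired" : "missed");
    }
    return 0;
}
```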

Expected behavior

Tests should be reasonable and should check properties that actually need to be upheld. If a test covers something that would be desirable but is not currently attained, it should say so whenever it is run automatically (for example, report "skipped" or "expected failure" rather than being grouped with real failures). If there is no expectation of the test ever passing, there should be a boundary that is clearly communicated to users of the API, and the test should be adjusted to it. For example, the API could state that a timer must be set at least N ticks in advance, where N might be a platform define; the test would then check that this N is honored, and setting the timer for shorter times would return an error.
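
A minimal sketch of what such a contract could look like. TIMER_SET_MIN_TICKS and timer_set_checked() are invented names used here for illustration only, not existing RIOT symbols, and the fallback value 8 merely echoes the README's observation:

```c
#include <errno.h>
#include "periph/timer.h"

/* Hypothetical per-platform lower bound for relative timer_set() timeouts */
#ifndef TIMER_SET_MIN_TICKS
#define TIMER_SET_MIN_TICKS (8U)
#endif

/* Hypothetical wrapper: reject timeouts the platform cannot guarantee
 * instead of letting them silently underflow */
int timer_set_checked(tim_t dev, int channel, unsigned int timeout)
{
    if (timeout < TIMER_SET_MIN_TICKS) {
        return -EINVAL;
    }
    timer_set(dev, channel, timeout);
    return 0;
}
```

The test could then iterate only over timeouts of at least TIMER_SET_MIN_TICKS, and a separate check could assert that shorter values are rejected.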

Versions

Current master and 2022.07-RC4

@chrysn added the Type: bug, Area: tests, and Area: timers labels on Aug 24, 2022
@kaspar030 (Contributor) commented:
Sorry if the test caused frustration ...

> Tests should be reasonable and should check properties that actually need to be upheld.

The test was meant to be a tool that can detect the issue. Unfortunately, it does fail on most MCUs, and I think this is not trivial to fix.

I agree that the wording could be much clearer on this. Also, there's still a real issue if this test is failing, either in the implementations or in the API docs.

> If a test covers something that would be desirable but is not currently attained, it should say so whenever it is run automatically.

compile_and_run_for_board.py is not honoring the CI blacklist: TEST_ON_CI_BLACKLIST += all should mean "don't run this test automatically". I ran into this (what I'd call "false negatives") in the past. So for compile-and-run output, whenever a test fails, the first step is to figure out whether that test should have run in the first place.
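
For context, this is roughly how that opt-out looks in a test application's Makefile; the snippet is a sketch, so check the actual tests/periph_timer_short_relative_set/Makefile for the exact lines:

```makefile
# Sketch of a RIOT test application's Makefile; the include lines are
# illustrative, the blacklist line is what the comment above refers to.
include ../Makefile.tests_common

# Opt this test out of automated CI test runs on all boards
TEST_ON_CI_BLACKLIST += all

include $(RIOTBASE)/Makefile.include
```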
