Add "skip" outcome for testcaserunfinished #140
Thanks @dan-han-101 - just wondering what the use case is for being notified of a skipped test. Could it be to block some downstream action? I.e., if test X is skipped, don't deploy component Y?
The purpose of sending events can be divided into two main categories - observability and triggering. Sending an event to notify that a test is skipped would most certainly not trigger any downstream action, but could be valuable for observability reasons.
@e-backmark-ericsson's comment on observability is the main driving factor for adding a "skipped" concept. Answering some of the follow-up questions inline below.
In our company, we have test drivers that initiate Python test runs with pytest, or C++ test runs with GTest. These test suites can dynamically determine whether tests should run or not. For example, they may skip some tests if the host machine is not a given architecture.
For our use case, we can generally send a queued/started message first and then a "skip" message later.
This is an interesting idea. I think having alternative predicates for 'cancelled' and 'skipped' would meet our use case. However, I'm not sure if that is better or worse than using an enum and putting them all under testcaserun.finished.
I think omitting them is possible, but it reduces visibility. If a test engine wants to "report" skipped tests by simply not sending a message, that is still possible. I think our users would like to have the option of an explicit message for skips. I could imagine scenarios where users are not even aware that some tests are skipped.
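As a rough sketch of the driver behavior described above (the event payloads are simplified illustrations, not the exact CDEvents schema, and `emit()` is a hypothetical helper that just collects events):

```python
# Illustrative sketch only: event payloads are simplified and not the exact
# CDEvents schema; emit() here just collects events for inspection.
events = []

def emit(event):
    events.append(event)

def report_testcase(test_id, run_test):
    # The driver always announces the start, then reports the framework's
    # verdict ("pass", "fail", or the proposed "skip") in the finished event.
    emit({"type": "dev.cdevents.testcaserun.started",
          "subject": {"id": test_id}})
    outcome = run_test()
    emit({"type": "dev.cdevents.testcaserun.finished",
          "subject": {"id": test_id, "content": {"outcome": outcome}}})

# A test that the framework decides to skip at run time, e.g. on a
# mismatched host architecture:
report_testcase("test_arch_specific", lambda: "skip")
```

With an explicit "skip" outcome, consumers see the skipped test in the event stream instead of inferring it from a missing message.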
Thanks @dan-han-101 for this proposal. Having the "skip" data allows us to distinguish between tests that have been removed and tests that are not executed for some reason. I think we should give some guidance in the spec about the expected sequences of events.
I think that using an "enum" within the event, as opposed to multiple events, makes it easier to define the expected flow, and it makes it easier for consumers to know what to expect. For instance, in Python it's possible to exclude a test from discovery under certain conditions, but it's also possible to invoke "skip" for a test once the test starts executing, based on conditions that might not have been available at discovery time. My proposal would be to start a PR and include the description of the event sequences along with the subject descriptions. @e-backmark-ericsson wdyt?
Yes, I'd be happy to start a Pull Request!
Adds "skip" to the set of possible outcomes for a testcaserun.finished message. A final outcome of "skip" is common for many test frameworks, so adding it as a possible outcome will improve interoperability. See further discussion in cdevents#140.
The documentation update sets expectations on the sequence of expected test events. This update was triggered by discussions in cdevents#140.
I've thought some about this, based on the earlier comments on this PR. These are my findings/suggestions:
As an event consumer I'd like to be able to monitor the test case execution durations by diffing between the testCaseRun.started and testCaseRun.finished events. If we would use the testCaseRun.finished event to also signal that a testCase never even started, I believe the event consumers could be confused and test case duration counters could be unreliable. Does this make sense? If so, we should add testCaseRun.skipped and testCaseRun.canceled events to the spec.
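The duration-monitoring concern can be sketched as follows (event shapes, field names, and timestamps are illustrative, not the CDEvents envelope). A consumer pairing started/finished events per subject would simply never enter a dedicated skipped event into the duration math:

```python
from datetime import datetime, timezone

# Sketch of a consumer that pairs started/finished events per subject id.
# Event shape is illustrative, not the exact CDEvents envelope.
def durations(events):
    started = {}
    out = {}
    for e in events:
        sid = e["subject_id"]
        if e["type"].endswith("testcaserun.started"):
            started[sid] = e["time"]
        elif e["type"].endswith("testcaserun.finished") and sid in started:
            out[sid] = (e["time"] - started[sid]).total_seconds()
    return out

t0 = datetime(2024, 1, 1, 12, 0, 0, tzinfo=timezone.utc)
events = [
    {"type": "dev.cdevents.testcaserun.started", "subject_id": "t1", "time": t0},
    {"type": "dev.cdevents.testcaserun.finished", "subject_id": "t1",
     "time": t0.replace(second=30)},
    # A dedicated skipped event is ignored by the duration counter,
    # so it cannot pollute the metrics with near-zero "runs":
    {"type": "dev.cdevents.testcaserun.skipped", "subject_id": "t2", "time": t0},
]
```

Here `durations(events)` reports only the test that actually ran, which is the reliability property argued for above.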
@e-backmark-ericsson - thanks for the careful thoughts on this issue. @afrittoli - do you generally agree with the above? In terms of mechanical changes, I'm thinking it might be better to close #146 (updates to docs), and update both the code and documentation all in #140. I think it will be easy enough and relevant to the same stuff, so one PR is good enough. This will ensure that the code is in sync with the spec. Sound OK?
It sounds reasonable. I like the idea of having enough information in the message types to know what other events one may expect about that specific subject.
Sounds good!
Adds "skipped" as a new predicate for testcaserun events. A final outcome of "skip" is common for many test frameworks, so adding it as a possible outcome will improve interoperability. See further discussion in cdevents#140. Signed-off-by: Daniel Han <[email protected]>
The current set of values for testcaserun.finished outcomes is "pass", "fail", "cancel", and "error".
Can another enum value be added for "skip"?
The concept of a skipped test is common in many testing frameworks, like pytest.
If this sounds like a good update, I'd be happy to send in a Pull Request.
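As a minimal sketch of the proposed change (the existing outcome values are taken from the issue text above; the validation helper itself is a hypothetical illustration, not spec code):

```python
# Current testcaserun.finished outcomes per the issue text, plus the
# proposed "skip". The validation helper is a hypothetical illustration.
OUTCOMES = {"pass", "fail", "cancel", "error", "skip"}

def validate_outcome(outcome):
    if outcome not in OUTCOMES:
        raise ValueError(f"unknown testcaserun.finished outcome: {outcome!r}")
    return outcome
```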