
Add "skip" outcome for testcaserunfinished #140

Open
dan-han-101 opened this issue Jun 2, 2023 · 8 comments
Labels
roadmap Items on the roadmap

Comments

@dan-han-101
Contributor

dan-han-101 commented Jun 2, 2023

The current set of values for testcaserunfinished outcomes is "pass", "fail", "cancel", and "error".

Could another enum value be added for "skip"?
The concept of a skipped test is common in other testing frameworks, such as pytest.

If this sounds like a good update, I'd be happy to send in a Pull Request.
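For illustration, the outcome set with the proposed addition could be modeled as an enum. This is a minimal sketch; the class and helper names are hypothetical and not part of any CDEvents SDK:

```python
from enum import Enum

# Hypothetical model of the testcaserun.finished outcome set,
# with the proposed "skip" value added (names are illustrative,
# not the actual CDEvents SDK API).
class TestCaseRunOutcome(Enum):
    PASS = "pass"
    FAIL = "fail"
    CANCEL = "cancel"
    ERROR = "error"
    SKIP = "skip"  # proposed addition

def is_terminal_outcome(value: str) -> bool:
    """Return True if the string is a recognized finished outcome."""
    return value in {o.value for o in TestCaseRunOutcome}

print(is_terminal_outcome("skip"))   # True once "skip" is added
print(is_terminal_outcome("retry"))  # False
```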

@olensmar
Contributor

olensmar commented Jun 3, 2023

thanks @dan-han-101 - just wondering what the use case is for being notified of a skipped test? Could it be to block some downstream action, i.e. if test X is skipped, don't deploy component Y?

@e-backmark-ericsson
Contributor

The purpose of sending events can be divided into two main categories: observability and triggering. Sending an event to notify that a test is skipped would most certainly not trigger any downstream action, but it could be valuable for observability reasons.
With this said, I'm not sure it would fit in a testcaserunfinished event. If a testcase is skipped, I would guess it was never started, nor even queued.
@dan-han-101 , what component in a test engine would you see determine whether to skip a testcaserun or not? And in what scenario? Do you see that a testcaserun.queued/started is sent before such a finished event with outcome "skip"?
I can see a scenario when a testcaserun is queued but never started and instead canceled, and we currently lack a testcaserun.canceled event to notify about that.
I can also see that testsuites can be dynamically created, so they don't contain the same set of testcases for every run, meaning that some testcases could be considered "skipped" when they are not part of the testsuite. Would that solve your use case?

@dan-han-101
Contributor Author

@e-backmark-ericsson's comment on observability is the main driving factor for adding a "skipped" concept. Answering some of the follow-up questions inline below.

what component in a test engine would you see determine whether to skip a testcaserun or not? And in what scenario?

In our company, we have some test drivers that will initiate test runs that run in python, with pytest, or run c++ tests with gtest. These testing suites can dynamically determine whether tests should run or not. For example, they may skip some tests if the host machine is not a given architecture.
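As an illustration of that kind of driver logic, a test engine might decide up front whether to run or skip a case based on the host architecture. A minimal dependency-free sketch (the helper name and outcome strings here are illustrative, mirroring pytest's skipif behavior rather than using its API):

```python
import platform

# Illustrative helper: a test driver might decide to skip a test case
# when the host architecture doesn't match what the test requires.
# (Hypothetical name; mirrors pytest's skipif condition evaluation.)
def should_skip(required_arch: str) -> bool:
    return platform.machine() != required_arch

# The driver could then emit a "skip" outcome instead of running the test.
# No real machine reports this architecture, so the test is skipped:
outcome = "skip" if should_skip("nonexistent-arch") else "pass"
print(outcome)
```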

Do you see that a testcaserun.queued/started is sent before such a finished event with outcome "skip"?

For our use case, we can generally send a queued/started message first and then a "skip" message later.

I can see a scenario when a testcaserun is queued but never started and instead canceled, and we currently lack a testcaserun.canceled event to notify about that.

This is an interesting idea. I think having alternative predicates for 'cancelled' and 'skipped' would meet our use case. However, I'm not sure if that is better or worse than using an enum and putting them all under testcaserun.finished.

I can also see that testsuites can be dynamically created, so they don't contain the same set of testcases for every run, meaning that some testcases could be considered "skipped" when they are not part of the testsuite. Would that solve your usecase?

I think omitting them is possible, but it reduces visibility. If a test engine wants to "report" skipped tests by simply not sending a message, that is still possible.

I think our users would like to have the option of an explicit message for skips. I could imagine scenarios where users are not even aware that some tests are skipped.

@afrittoli
Contributor

Thanks @dan-han-101 for this proposal.
Personally, I think "skip" makes sense to track a decision not to run a test.
I've seen situations where a change inadvertently caused a number of tests to start skipping in CI, so that's a bit of information I would definitely be interested in keeping an eye on.

Having the "skip" data allows us to distinguish between tests that have been removed and tests that are not executed for some reason.

I think we should give some guidance in the spec about the expected sequences of events.
This is not something we've done so far, but it would be beneficial for interoperability.
Something along the lines of:

"If a producer sends a testrun started event it MUST send a testrun finished event too."

I think that using an "enum" within the event, as opposed to multiple events, makes it easier to define the expected flow, and it makes it easier for consumers to know what to expect. skip is a bit of a special case, though, since the skip decision may happen at different times, depending on the reason for the skip and on the framework, so it could be that a skipped test is never really started.

For instance, in Python it's possible to exclude a test from discovery under certain conditions, but it's also possible to invoke "skip" for a test once it starts executing, based on conditions that might not have been available at discovery time.
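For reference, both mechanisms exist in pytest — `pytest.mark.skipif` is evaluated at collection time, while `pytest.skip()` is called from inside a running test. A minimal sketch (the test names and conditions are illustrative):

```python
import pytest

# 1. Collection-time: the test is marked before it ever runs.
@pytest.mark.skipif(1 + 1 == 2, reason="condition known at discovery time")
def test_skipped_at_collection():
    assert False  # never executed under pytest; the mark short-circuits it

# 2. Runtime: the test starts executing, then decides to skip itself.
def test_skipped_at_runtime():
    pytest.skip("condition only known once the test is running")
```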

My proposal would be to start a PR and include the description of the event sequences along with the subject descriptions. @e-backmark-ericsson wdyt?
@dan-han-101 would you be interested in starting a PR?

@dan-han-101
Contributor Author

Yes, I'd be happy to start a Pull Request!
I'll link it to this issue after it's created.

@afrittoli afrittoli added the roadmap Items on the roadmap label Jun 19, 2023
dan-han-101 pushed a commit to dan-han-101/spec that referenced this issue Jun 26, 2023
Adds "skip" to the set of possible outcomes for a testcaserun.finished
message. A final outcome of "skip" is common for many test frameworks,
so adding it as a possible outcome will improve interoperability.

See further discussion in cdevents#140.
dan-han-101 pushed a commit to dan-han-101/spec that referenced this issue Jun 26, 2023
The documentation update sets expectations on the sequence of test events.

This update was triggered by discussions in
cdevents#140
dan-han-101 pushed a commit to dan-han-101/spec that referenced this issue Aug 1, 2023
The documentation update sets expectations on the sequence of test events.

This update was triggered by discussions in
cdevents#140

Signed-off-by: Daniel Han <[email protected]>
dan-han-101 added a commit to dan-han-101/spec that referenced this issue Aug 1, 2023
The documentation update sets expectations on the sequence of test events.

This update was triggered by discussions in
cdevents#140

Signed-off-by: Daniel Han <[email protected]>
dan-han-101 added a commit to dan-han-101/spec that referenced this issue Aug 1, 2023
Adds "skip" to the set of possible outcomes for a testcaserun.finished
message. A final outcome of "skip" is common for many test frameworks,
so adding it as a possible outcome will improve interoperability.

See further discussion in cdevents#140.

Signed-off-by: Daniel Han <[email protected]>
@e-backmark-ericsson
Contributor

I've thought some about this, based on the earlier comments on this issue. These are my findings/suggestions:

  • When a test activity starts there would supposedly be testSuiteRun queued and started events sent. A test suite could contain a hard-coded list of test cases to execute, or it could contain some wildcard syntax that is evaluated at runtime.
  • If a test suite with hard-coded test cases in it is run, and the test engine decides to skip some of those test cases, I believe it would be suitable to send testCaseRun.skipped events for those, without any previous testCaseRun.queued or testCaseRun.started events sent for them.
  • If a test case execution is queued, but never started (thrown out of the queue for any reason), I believe a testCaseRun.canceled event should be sent, stating the reason for it being canceled.
  • If a test case execution is started, and then, as @afrittoli mentioned in a previous comment, is "skipped" in its initialization phase, I believe a testCaseRun.finished event should be sent with a relevant outcome (currently "cancel", but we could maybe add "skip" as well).
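The rules above could be sketched as a small decision function. The event names follow this proposal and are not (yet) part of the spec:

```python
# Sketch of the proposed event-selection rules (event names follow the
# proposal in this thread; they are not part of the current spec).
def event_for(queued: bool, started: bool, final_state: str) -> str:
    if not started and final_state == "skipped":
        # skipped by the engine before it ever ran
        return "testCaseRun.skipped"
    if queued and not started:
        # thrown out of the queue for any reason
        return "testCaseRun.canceled"
    # started, then finished with an outcome ("pass", "fail", "cancel", ...)
    return f"testCaseRun.finished (outcome={final_state})"

print(event_for(queued=False, started=False, final_state="skipped"))
print(event_for(queued=True, started=False, final_state="canceled"))
print(event_for(queued=True, started=True, final_state="pass"))
```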

As an event consumer I'd like to be able to monitor the test case execution durations by diffing between the testCaseRun.started and testCaseRun.finished events. If we would use the testCaseRun.finished event to also signal that a testCase never even started, I believe the event consumers could be confused and test case duration counters could be unreliable.
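A minimal sketch of such a consumer, assuming a simplified event shape (subject id plus ISO timestamp; not the full CDEvents schema):

```python
from datetime import datetime

# Illustrative consumer: compute test case durations by pairing
# testCaseRun.started with testCaseRun.finished events on the same
# subject id (event shape simplified; not the full CDEvents schema).
events = [
    {"type": "dev.cdevents.testcaserun.started", "subject": "tc-1",
     "time": "2023-06-02T10:00:00+00:00"},
    {"type": "dev.cdevents.testcaserun.finished", "subject": "tc-1",
     "time": "2023-06-02T10:00:42+00:00"},
]

def durations(events):
    started = {}
    result = {}
    for e in events:
        t = datetime.fromisoformat(e["time"])
        if e["type"].endswith("started"):
            started[e["subject"]] = t
        elif e["type"].endswith("finished") and e["subject"] in started:
            result[e["subject"]] = (t - started.pop(e["subject"])).total_seconds()
    return result

print(durations(events))  # {'tc-1': 42.0}
```

If finished events were also used for cases that never started, entries like these would pair incorrectly or dangle, which is the confusion described above.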

Does this make sense? If so, we should add testCaseRun.skipped and testCaseRun.canceled events to the spec.

@dan-han-101
Contributor Author

@e-backmark-ericsson - thanks for the careful thoughts on this issue.

@afrittoli - do you generally agree with the above?

In terms of mechanical changes, I'm thinking it might be better to close #146 (updates to docs), and update both the code and documentation all in #140. I think it will be easy enough and relevant to the same stuff, so one PR is good enough. This will ensure that the code is in sync with the spec.

Sound OK?

@afrittoli
Contributor

@e-backmark-ericsson - thanks for the careful thoughts on this issue.

@afrittoli - do you generally agree with the above?

It sounds reasonable. I like the idea of having enough information in the message types to know what other events one may expect about that specific subject.

In terms of mechanical changes, I'm thinking it might be better to close #146 (updates to docs), and update both the code and documentation all in #140. I think it will be easy enough and relevant to the same stuff, so one PR is good enough. This will ensure that the code is in sync with the spec.

Sound OK?

Sounds good!

dan-han-101 added a commit to dan-han-101/spec that referenced this issue Mar 1, 2024
Adds "skipped" as a new predicate for testcaserun events.
A final outcome of "skip" is common for many test frameworks,
so adding it as a possible outcome will improve interoperability.

See further discussion in cdevents#140.

Signed-off-by: Daniel Han <[email protected]>
dan-han-101 added a commit to dan-han-101/spec that referenced this issue Mar 1, 2024
Adds "skipped" as a new predicate for testcaserun events.
A final outcome of "skip" is common for many test frameworks,
so adding it as a possible outcome will improve interoperability.

See further discussion in cdevents#140.

Signed-off-by: Daniel Han <[email protected]>
dan-han-101 added a commit to dan-han-101/spec that referenced this issue Mar 1, 2024
Adds "skipped" as a new predicate for testcaserun events.
A final outcome of "skip" is common for many test frameworks,
so adding it as a possible outcome will improve interoperability.

See further discussion in cdevents#140.

Signed-off-by: Daniel Han <[email protected]>
afrittoli pushed a commit that referenced this issue Mar 5, 2024
Adds "skipped" as a new predicate for testcaserun events.
A final outcome of "skip" is common for many test frameworks,
so adding it as a possible outcome will improve interoperability.

See further discussion in #140.

Signed-off-by: Daniel Han <[email protected]>
4 participants