Add test failure annotations on v1.8+ #58

Merged · 15 commits merged into julia-actions:master on Aug 4, 2022

Conversation

@ericphanson (Member) commented Aug 2, 2022

Example: https://github.com/ericphanson/LicenseCheck.jl/pull/9/files
[Screenshot (2022-08-02): an inline test-failure annotation shown on the PR diff]

Similar to julia-actions/julia-docdeploy#16, but with a hackier implementation. There, we just modified Documenter to log the failures via @error with correct line-number info. Here, it's not so easy to modify Test to do the same, especially because I think it would complicate testing things like the logging stdlib. Instead, I parse the output to look for printed test failures. This only works on Julia 1.8+ because Pkg.test doesn't respect the io argument prior to that. Perhaps there's a way with redirect_stdout, but it seemed difficult because we also want to print to stdout.
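Concretely, the flow is something like the following minimal sketch (a paraphrase, not the action's actual code: the function name and the exact failure regex are illustrative, and it assumes Julia 1.8+ so that Pkg.test honors io). GitHub turns printed ::error file=...,line=...:: workflow commands into inline annotations:

```julia
using Pkg

# Minimal sketch: capture the test output, echo it, and emit one
# GitHub annotation per printed test failure. Illustrative only.
function test_with_annotations()
    buf = IOBuffer()
    failed = false
    try
        # On Julia 1.8+, Pkg.test forwards its output to the given io.
        Pkg.test(; io = buf)
    catch
        failed = true  # Pkg.test throws when the test suite fails
    end
    output = String(take!(buf))
    print(stdout, output)  # still show the full log in the job output
    # Test prints failures like: "Test Failed at /path/to/runtests.jl:17"
    for m in eachmatch(r"Test Failed at (.+?):(\d+)", output)
        file, line = m.captures
        # GitHub workflow command that creates an inline annotation
        println("::error file=$(file),line=$(line)::Test Failed")
    end
    failed && exit(1)  # preserve the failing exit status for CI
end

test_with_annotations()
```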

The logic gets a bit complicated, so I also added some tests and a CI script for it.

This will create one annotation per failing test for each run, so if you test on 3 OSes and they all fail the same test, you'll get 3 annotations. However, one can set annotate: false to disable annotations, or do so conditionally (something like annotate: ${{ matrix.os == 'ubuntu-latest' }}, I think, to only generate annotations from one OS). It also only works on Julia 1.8+ (including RCs).
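For instance, a hypothetical workflow excerpt along those lines (standard GitHub Actions syntax; annotate is the input added in this PR, and the version pin is illustrative):

```yaml
jobs:
  test:
    strategy:
      matrix:
        os: [ubuntu-latest, macos-latest, windows-latest]
    runs-on: ${{ matrix.os }}
    steps:
      - uses: actions/checkout@v3
      - uses: julia-actions/setup-julia@v1
        with:
          version: '1.8'
      - uses: julia-actions/julia-runtest@v1
        with:
          # Only the ubuntu job produces annotations, avoiding duplicates
          # across the matrix.
          annotate: ${{ matrix.os == 'ubuntu-latest' }}
```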

@ericphanson ericphanson requested a review from a team as a code owner August 2, 2022 10:44
@codecov-commenter commented Aug 2, 2022

Codecov Report

Merging #58 (2fadb2b) into master (101f621) will not change coverage.
The diff coverage is n/a.

@@            Coverage Diff            @@
##            master       #58   +/-   ##
=========================================
  Coverage   100.00%   100.00%           
=========================================
  Files            1         1           
  Lines            2         2           
=========================================
  Hits             2         2           


@SaschaMann (Member) left a comment:

Once again I wish Julia had some machine-readable test output by default

(Resolved review threads on action.yml and test_wrapper.jl, marked outdated.)

In action.yml:
else
Pkg.test(; kwargs...)
end' )
julia_cmd=( julia --color=yes --depwarn=${{ inputs.depwarn }} --inline=${{ inputs.inline }} --project=${{ inputs.project }} -e 'include(joinpath(ENV["GITHUB_ACTION_PATH"], "test_harness.jl"))' )
@ericphanson (Member, Author) commented:

I still have -e because I don't know how to do the joinpath and env variables in bash

@SaschaMann (Member) replied:

I think "$GITHUB_ACTION_PATH/test_harness.jl" would do the trick but this is more approachable from a Julia PoV so it's probably better.

action.yml (outdated diff):
@@ -29,6 +29,9 @@ inputs:
   project:
     description: 'Value passed to the --project flag. The default value is the repository root: "@."'
     default: '@.'
+  annotate:
+    description: 'Whether or not to attempt to create GitHub annotations to show test failures inline. Only effective on Julia 1.8+.'
+    default: 'true'
A maintainer (Member) commented:

I would suggest that we make this feature opt-in. Otherwise, when Julia 1.8 comes out, people will suddenly start getting these annotations. A lot of people do not like in-line annotations because it makes reviewing PR diffs harder, so having it suddenly be enabled by default will likely give some backlash.

Suggested change:
-    default: 'true'
+    default: 'false'

@ericphanson (Member, Author) replied:

You just press a to hide annotations, but I guess most people don't know that

@SaschaMann (Member) replied:

TIL. But yeah, I've observed multiple users who dislike e.g. Codecov's inline annotations, so I'm very hesitant to turn this on by default.

@ericphanson (Member, Author) commented Aug 2, 2022:

Alright, makes sense; I've made the change. My concern is no one will know about it if it is opt-in, but I guess I can post on discourse or something.

> TIL

FYI you can press i to toggle comments when reviewing. Another useful one.

(Resolved review thread on kwargs.jl, marked outdated.)
ericphanson and others added 2 commits August 2, 2022 15:12
Co-authored-by: Dilum Aluthge <[email protected]>
@ericphanson ericphanson requested a review from SaschaMann August 2, 2022 13:24
@SaschaMann (Member) left a comment:

I'd be fine with it being opt-out, too, since that behaviour is quite common for comparable actions. But I don't have a strong opinion on it.


This seems useful; I'm mostly relying on the test case you added for reviewing the correctness.


@SaschaMann SaschaMann requested a review from DilumAluthge August 2, 2022 15:46
@ericphanson (Member, Author) commented:

Maybe we can start opt-in and folks can try it out, and then see about flipping the switch later. Or we could see if PkgTemplates will take the opt-in :). That's the roundabout route I used for getting doctests to fail by "default"…

@SaschaMann (Member) commented:

Could you add a brief example to the README? That way it's a bit more discoverable.

@ericphanson (Member, Author) commented:

Good idea; I added it to the example and noted that the default value may change.

@ericphanson (Member, Author) commented:

Bump :)

@SaschaMann SaschaMann merged commit 7ea3b3e into julia-actions:master Aug 4, 2022
@SaschaMann (Member) commented:

Thanks for adding this!

I've tagged a release: https://github.com/julia-actions/julia-runtest/releases/tag/v1.8.0

Given that it's opt-in, do you want to make some noise about it on Slack/Zulip/Discourse? :)

@ericphanson ericphanson deleted the eph/test-annotations branch August 4, 2022 10:50
@ericphanson (Member, Author) commented:

Sure! I will push it a bit at my company with some PRs and get some feedback, and then assuming all goes well, make some noise in a week or two.

@ericphanson (Member, Author) commented:

Ah, one thing I forgot about is that GitHub annotations only show up on a diff, so if a code change causes a test failure somewhere else, and that test line doesn't show up in the diff, then GitHub doesn't show an annotation. Not really sure how else it should work, but it does mean it's only helpful when you're also messing with the tests directly in the PR.

@SaschaMann (Member) commented Aug 4, 2022:

I don't see a clear way around that other than taking some guesses based on the test and commit content and the stacktrace, which may or may not be reliable. But I think it's good to have these anyway, even if they won't show up for every single failure.
