
Add debug info to integration test failures #145

Closed
bobcatfish opened this issue Oct 12, 2018 · 2 comments
Labels
- help wanted: Denotes an issue that needs help from a contributor. Must meet "help wanted" guidelines.
- meaty-juicy-coding-work: This task is mostly about implementation!!! And docs and tests of course but that's a given

Comments

@bobcatfish (Collaborator)

Expected Behavior

If an integration test fails, we should capture as much information as possible about the CRDs that existed at the time of the failure so that we can debug it, especially when a test fails in Prow but not locally.

Actual Behavior

We were trying to dump as much info as we could in our e2e script, but since each test runs in its own namespace and cleans up after itself, by the time the script runs there is nothing left to dump (see the sketch below).
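
To make the timing problem concrete, here is a rough sketch of the per-test lifecycle (the helper and namespace names are made up, not the actual ones in `test/`):

```go
package test

import "testing"

// setup and teardown stand in for the real e2e helpers, which create a
// uniquely named namespace per test and delete it when the test finishes.
func setup(t *testing.T) string {
	t.Helper()
	// (real helper: create a fresh namespace and build clients scoped to it)
	return "test-namespace-abc123"
}

func teardown(t *testing.T, namespace string) {
	t.Helper()
	// (real helper: delete the namespace and everything in it)
}

func TestTaskRunExample(t *testing.T) {
	namespace := setup(t)
	defer teardown(t, namespace)

	// ... create Tasks/TaskRuns in `namespace` and poll for success ...
	// Even when an assertion fails, the deferred teardown still deletes the
	// namespace, so a later "dump all CRDs" step in the e2e shell script
	// finds nothing left to dump.
}
```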

Additionally, because of #143 we can't even get any logs out of the Build, so a lot of the time we have to guess why things are failing.

Steps to Reproduce the Problem

  1. Make an integration test fail in a pull request (e.g. try to reference a volume that doesn't exist)
  2. You'll see little more than `timed out waiting for the condition` in the logs :( (see the sketch below for where that message comes from)
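
For context on where that message comes from: the e2e helpers poll with the `wait` package from `k8s.io/apimachinery`, and when the condition never becomes true within the timeout the poll returns `wait.ErrWaitTimeout`, whose text is exactly that line. A standalone illustration (not the actual test code):

```go
package main

import (
	"fmt"
	"time"

	"k8s.io/apimachinery/pkg/util/wait"
)

func main() {
	// A condition that never becomes true stands in for "the TaskRun never
	// succeeded". PollImmediate gives up after the timeout and returns
	// wait.ErrWaitTimeout, which is all the test failure ends up showing.
	err := wait.PollImmediate(100*time.Millisecond, 500*time.Millisecond, func() (bool, error) {
		return false, nil
	})
	fmt.Println(err) // prints: timed out waiting for the condition
}
```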
bobcatfish added the help wanted label Oct 12, 2018
bobcatfish added a commit to bobcatfish/pipeline that referenced this issue Oct 12, 2018
I think it's reasonable for only one of our eventually many integration
tests to verify the build output, especially when it involves adding a
volume mount to the pile of things that could go wrong in the test.

Refactored the test a bit, so we don't assert inside the test, and we
output some logs before polling.

Removed dumping of CRDs in test script b/c each test runs in its own
namespace and cleans up after itself, so there is never anything to dump
(see tektoncd#145).

Updated condition checking so that if the Run fails, we bail immediately
instead of continuing to hope it will succeed.
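
The "bail immediately" part roughly means turning the polling condition into something like this (type and helper names are made up; the real helpers live in `test/` and read the condition off the CRD status):

```go
package test

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

// runCondition stands in for the "Succeeded" condition the controller sets on
// a Run's status; the real code reads it from the CRD's Status.Conditions.
type runCondition struct {
	Status  corev1.ConditionStatus
	Message string
}

// checkRunDone sketches the fail-fast behaviour: instead of returning
// (false, nil) forever once the Run has already failed (and letting the poll
// time out), it surfaces the failure as soon as the condition goes False.
func checkRunDone(c *runCondition) (bool, error) {
	if c == nil {
		return false, nil // no condition reported yet, keep polling
	}
	switch c.Status {
	case corev1.ConditionTrue:
		return true, nil // the Run succeeded, stop polling
	case corev1.ConditionFalse:
		return true, fmt.Errorf("run failed: %s", c.Message) // fail fast with the reason
	default:
		return false, nil // still running, keep polling
	}
}
```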
bobcatfish added the meaty-juicy-coding-work label Oct 12, 2018
knative-prow-robot pushed a commit that referenced this issue Oct 13, 2018
@bobcatfish (Collaborator, Author)

@tanner-bruce is looking into this 😄

tanner-bruce pushed a commit to tanner-bruce/build-pipeline that referenced this issue Oct 24, 2018
This code will dump out all build-pipeline CRDs when a test fails to
make debugging easier.

Fixes tektoncd#145.
tanner-bruce pushed a commit to tanner-bruce/build-pipeline that referenced this issue Oct 29, 2018
This will cause the `teardown` function to dump all Knative CRD objects
for the currently configured namespace to YAML. This will make debugging
failed builds easier.

Fixes tektoncd#145.
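
A sketch of the approach (assuming teardown checks `t.Failed()`; the actual helper in `test/` uses the typed clients and YAML marshalling rather than shelling out to kubectl):

```go
package test

import (
	"os/exec"
	"testing"
)

// dumpCRDs illustrates the idea behind the fix: before the namespace is
// deleted, dump every pipeline CRD in it as YAML so the output lands in the
// prow logs. The kinds listed here and the use of kubectl are assumptions.
func dumpCRDs(t *testing.T, namespace string) {
	t.Helper()
	for _, kind := range []string{"pipelines", "pipelineruns", "tasks", "taskruns", "pipelineresources"} {
		out, err := exec.Command("kubectl", "get", kind, "-n", namespace, "-o", "yaml").CombinedOutput()
		if err != nil {
			t.Logf("failed to dump %s: %v", kind, err)
			continue
		}
		t.Logf("--- %s in namespace %s ---\n%s", kind, namespace, out)
	}
}

func teardownWithDump(t *testing.T, namespace string) {
	if t.Failed() {
		dumpCRDs(t, namespace) // dump before the namespace (and everything in it) goes away
	}
	// ... then delete the namespace as before ...
}
```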
knative-prow-robot pushed a commit that referenced this issue Oct 30, 2018
@bobcatfish (Collaborator, Author)

Nice one, thanks again @tanner-bruce 😎

chmouel pushed a commit to chmouel/tektoncd-pipeline that referenced this issue Sep 30, 2019
🤖 Triggering CI on branch 'release-next' after synching to upstream/master