TestPodConnectivity failed in dual stack e2e test #2365

Closed
zyiou opened this issue Jul 8, 2021 · 3 comments
Labels
kind/bug Categorizes issue or PR as related to a bug.

Comments

@zyiou
Contributor

zyiou commented Jul 8, 2021

Describe the bug
Two TestPodConnectivity tests, TestPodConnectivityDifferentNodes and TestPodConnectivityAfterAntreaRestart, failed when running the Jenkins dual-stack e2e tests.

To Reproduce
I was using #2361 when I detected this failure. As far as I know, that PR should not affect this failure. Just trigger the dual-stack e2e tests.

Expected
Tests pass.

Actual behavior
Both tests time out while waiting for the Antrea DaemonSet Pods to get IPs; see the test logs under Additional context below.

Versions:
Please provide the following information:

  • Antrea version (Docker image tag).
  • Kubernetes version (use kubectl version). If your Kubernetes components have different versions, please provide the version for all of them.
  • Container runtime: which runtime are you using (e.g. containerd, cri-o, docker) and which version are you using?
  • Linux kernel version on the Kubernetes Nodes (uname -r).
  • If you chose to compile the Open vSwitch kernel module manually instead of using the kernel module built into the Linux kernel, which version of the OVS kernel module are you using? Include the output of modinfo openvswitch for the Kubernetes Nodes.

Additional context
Test logs from the Jenkins dual-stack e2e run:

=== RUN   TestPodConnectivityDifferentNodes
    fixtures.go:165: Creating 'antrea-test' K8s Namespace
    fixtures.go:128: Applying Antrea YAML
    fixtures.go:132: Waiting for all Antrea DaemonSet Pods
    fixtures.go:136: Checking CoreDNS deployment
    connectivity_test.go:160: Error when waiting for DaemonSet Pods to get IPs: timed out waiting for the condition
    fixtures.go:239: Exporting test logs to '/var/lib/jenkins/workspace/antrea-ipv6-ds-e2e-for-pull-request/antrea-test-logs/TestPodConnectivityDifferentNodes/beforeTeardown.Jul08-05-13-14'
    fixtures.go:343: Error when exporting kubelet logs: error when running journalctl on Node 'antrea-ipv6-9-0', is it available? Error: <nil>
    fixtures.go:364: Deleting 'antrea-test' K8s Namespace
--- FAIL: TestPodConnectivityDifferentNodes (154.68s)
=== RUN   TestPodConnectivityAfterAntreaRestart
    fixtures.go:165: Creating 'antrea-test' K8s Namespace
    fixtures.go:128: Applying Antrea YAML
    fixtures.go:132: Waiting for all Antrea DaemonSet Pods
    fixtures.go:136: Checking CoreDNS deployment
    connectivity_test.go:160: Error when waiting for DaemonSet Pods to get IPs: timed out waiting for the condition
    fixtures.go:239: Exporting test logs to '/var/lib/jenkins/workspace/antrea-ipv6-ds-e2e-for-pull-request/antrea-test-logs/TestPodConnectivityAfterAntreaRestart/beforeTeardown.Jul08-05-15-49'
    fixtures.go:343: Error when exporting kubelet logs: error when running journalctl on Node 'antrea-ipv6-9-0', is it available? Error: <nil>
    fixtures.go:364: Deleting 'antrea-test' K8s Namespace
--- FAIL: TestPodConnectivityAfterAntreaRestart (148.12s)
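
For context, "timed out waiting for the condition" is the generic error returned by the Kubernetes wait helpers (k8s.io/apimachinery/pkg/util/wait) when a polled condition never becomes true before the timeout. The sketch below is not Antrea's actual test fixture code; it is a minimal client-go example, assuming a kube-system Namespace and an app=antrea label selector, of the kind of poll that fails this way when the Antrea DaemonSet Pods never report any IPs in status.podIPs.

```go
// Hypothetical sketch (not Antrea's test fixture): a client-go poll that returns
// "timed out waiting for the condition" when the selected Pods never report IPs.
// The namespace, label selector, and kubeconfig handling are assumptions.
package main

import (
	"context"
	"fmt"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitForDaemonSetPodIPs polls until every Pod matching the selector reports at
// least one IP in status.podIPs, or the timeout expires. On timeout,
// wait.PollImmediate returns the generic "timed out waiting for the condition" error
// seen in the test log above.
func waitForDaemonSetPodIPs(client kubernetes.Interface, namespace, selector string, timeout time.Duration) error {
	return wait.PollImmediate(2*time.Second, timeout, func() (bool, error) {
		pods, err := client.CoreV1().Pods(namespace).List(context.TODO(), metav1.ListOptions{LabelSelector: selector})
		if err != nil {
			return false, err
		}
		if len(pods.Items) == 0 {
			return false, nil
		}
		for _, pod := range pods.Items {
			// Only require at least one IP here; a Pod with an empty status.podIPs
			// keeps the poll going until the timeout fires.
			if len(pod.Status.PodIPs) == 0 {
				return false, nil
			}
		}
		return true, nil
	})
}

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(config)
	// "kube-system" and "app=antrea" are assumptions for illustration.
	if err := waitForDaemonSetPodIPs(client, "kube-system", "app=antrea", 2*time.Minute); err != nil {
		fmt.Println("error:", err)
	}
}
```

In a dual-stack cluster a stricter check would require both an IPv4 and an IPv6 entry in status.podIPs before reporting success.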
@zyiou added the kind/bug label on Jul 8, 2021
@antoninbas
Contributor

I see that the tests for #2361 consistently fail because of this. @lzhecheng do we know what could cause this?

@lzhecheng
Contributor

@antoninbas I'm looking into it.

@lzhecheng
Contributor

One VM of the first testbed in the queue had difficulty pulling the agnhost image. I manually triggered the test and the image was eventually pulled. The latest two ds-e2e builds were successful: one ran on that first testbed in the queue and one on another.
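
As a hedged illustration only (this is not part of the Antrea e2e suite): a minimal client-go sketch that lists Pods whose containers are stuck in ErrImagePull or ImagePullBackOff, which is the kind of check that would have pointed at the agnhost image-pull problem described above. The antrea-test Namespace is taken from the fixture logs earlier in this issue; the kubeconfig handling is an assumption.

```go
// Hypothetical sketch: report containers in the test Namespace that are stuck
// waiting on an image pull. "antrea-test" comes from the fixture logs above.
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(config)

	pods, err := client.CoreV1().Pods("antrea-test").List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, pod := range pods.Items {
		for _, cs := range pod.Status.ContainerStatuses {
			// A container that cannot pull its image stays in the Waiting state
			// with reason ErrImagePull or ImagePullBackOff.
			if w := cs.State.Waiting; w != nil && (w.Reason == "ErrImagePull" || w.Reason == "ImagePullBackOff") {
				fmt.Printf("Pod %s container %s: %s (%s)\n", pod.Name, cs.Name, w.Reason, w.Message)
			}
		}
	}
}
```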

Closing this issue. Please reopen it and ping me if the problem still exists.
