diff --git a/v1.24/kubeoperator/PRODUCT.yaml b/v1.24/kubeoperator/PRODUCT.yaml
new file mode 100644
index 0000000000..dee44f7f0d
--- /dev/null
+++ b/v1.24/kubeoperator/PRODUCT.yaml
@@ -0,0 +1,9 @@
+vendor: Hangzhou FIT2CLOUD Information Technology Co., Ltd.
+name: KubeOperator
+version: v3.16.2
+website_url: https://www.kubeoperator.io/
+repo_url: https://github.com/KubeOperator/KubeOperator
+documentation_url: https://kubeoperator.io/docs/
+product_logo_url: https://raw.githubusercontent.com/KubeOperator/KubeOperator/master/kubeoperator-logo.svg?sanitize=true
+type: Installer
+description: KubeOperator is an open-source, lightweight Kubernetes distribution that focuses on helping enterprises plan, deploy, and operate production-grade Kubernetes clusters in offline network environments. Its graphical web UI speeds up the software lifecycle in today's fast-moving cloud era.
diff --git a/v1.24/kubeoperator/README.md b/v1.24/kubeoperator/README.md
new file mode 100644
index 0000000000..641eaaa0d6
--- /dev/null
+++ b/v1.24/kubeoperator/README.md
@@ -0,0 +1,78 @@
+# Conformance tests for KubeOperator v3.16.2
+
+## Install KubeOperator v3.16.2
+
+Follow the [installation guide](https://kubeoperator.io/docs/installation/install/) to install KubeOperator.
+
+```bash
+$ curl -sSL https://github.com/KubeOperator/KubeOperator/releases/latest/download/quick_start.sh | sh
+```
+
+Wait until the service is running successfully.
+
+## Deploy Kubernetes
+
+Deploy Kubernetes according to the [documentation](https://kubeoperator.io/docs/quick_start/cluster_planning/manual/).
+
+1. System Settings
+- Before using KubeOperator, you must set the necessary system parameters. These parameters affect the installation of the Kubernetes cluster and access to related services.
+
+2. Prepare the Servers
+- Prepare three servers: one master and two workers.
+
+3. Host Authorization
+- Authorize the hosts to the project.
+
+4. 
Deploy Cluster
+- Enter the project menu and click the "Add" button on the "Cluster" page to create the cluster.
+![](cluster.png)
+
+## Run Conformance Test
+
+The standard tool for running these tests is
+[Sonobuoy](https://github.com/heptio/sonobuoy). Sonobuoy is
+regularly built and kept up to date to execute against all
+currently supported versions of Kubernetes.
+
+Download a [binary release](https://github.com/heptio/sonobuoy/releases) of the CLI.
+
+Deploy a Sonobuoy pod to your cluster with:
+
+```
+$ sonobuoy run --mode=certified-conformance
+```
+
+**NOTE:** You can run the command synchronously by adding the flag `--wait`, but be aware that running the Conformance tests can take an hour or more.
+
+View the status of the run:
+
+```
+$ sonobuoy status
+```
+
+To inspect the logs:
+
+```
+$ sonobuoy logs
+```
+
+Once `sonobuoy status` shows the run as `completed`, copy the output directory from the main Sonobuoy pod to a local directory:
+
+```
+$ outfile=$(sonobuoy retrieve)
+```
+
+This copies a single `.tar.gz` snapshot from the Sonobuoy pod into your local
+`.` directory. Extract the contents into `./results` with:
+
+```
+mkdir ./results; tar xzf $outfile -C ./results
+```
+
+**NOTE:** The two files required for submission are located in the tarball under **plugins/e2e/results/{e2e.log,junit.xml}**. 
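+
+Per the note above, you can sanity-check that both required files were extracted before submitting (a minimal check; the paths assume you extracted the snapshot into `./results` as shown):
+
+```
+$ ls ./results/plugins/e2e/results/e2e.log ./results/plugins/e2e/results/junit.xml
+```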
+ +To clean up Kubernetes objects created by Sonobuoy, run: + +``` +sonobuoy delete +``` diff --git a/v1.24/kubeoperator/cluster.png b/v1.24/kubeoperator/cluster.png new file mode 100644 index 0000000000..7f6642728b Binary files /dev/null and b/v1.24/kubeoperator/cluster.png differ diff --git a/v1.24/kubeoperator/e2e.log b/v1.24/kubeoperator/e2e.log new file mode 100644 index 0000000000..a5c4872910 --- /dev/null +++ b/v1.24/kubeoperator/e2e.log @@ -0,0 +1,15584 @@ +I0907 07:39:55.993721 19 e2e.go:129] Starting e2e run "7c49f96c-a108-4029-8b40-c680af004a9e" on Ginkgo node 1 +{"msg":"Test Suite starting","total":356,"completed":0,"skipped":0,"failed":0} +Running Suite: Kubernetes e2e suite +=================================== +Random Seed: 1662536395 - Will randomize all specs +Will run 356 of 6971 specs + +Sep 7 07:39:58.111: INFO: >>> kubeConfig: /tmp/kubeconfig-1156948534 +Sep 7 07:39:58.112: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable +Sep 7 07:39:58.137: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready +Sep 7 07:39:58.159: INFO: 9 / 9 pods in namespace 'kube-system' are running and ready (0 seconds elapsed) +Sep 7 07:39:58.159: INFO: expected 5 pod replicas in namespace 'kube-system', 5 are Running and Ready. 
+Sep 7 07:39:58.159: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start +Sep 7 07:39:58.165: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'calico-node' (0 seconds elapsed) +Sep 7 07:39:58.165: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'node-local-dns' (0 seconds elapsed) +Sep 7 07:39:58.165: INFO: e2e test version: v1.24.2 +Sep 7 07:39:58.166: INFO: kube-apiserver version: v1.24.2 +Sep 7 07:39:58.166: INFO: >>> kubeConfig: /tmp/kubeconfig-1156948534 +Sep 7 07:39:58.170: INFO: Cluster IP family: ipv4 +SSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-apps] Daemon set [Serial] + should retry creating failed daemon pods [Conformance] + test/e2e/framework/framework.go:652 +[BeforeEach] [sig-apps] Daemon set [Serial] + test/e2e/framework/framework.go:187 +STEP: Creating a kubernetes client +Sep 7 07:39:58.171: INFO: >>> kubeConfig: /tmp/kubeconfig-1156948534 +STEP: Building a namespace api object, basename daemonsets +Sep 7 07:39:58.220: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled. +STEP: Waiting for a default service account to be provisioned in namespace +W0907 07:39:58.220224 19 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+ +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[BeforeEach] [sig-apps] Daemon set [Serial] + test/e2e/apps/daemon_set.go:145 +[It] should retry creating failed daemon pods [Conformance] + test/e2e/framework/framework.go:652 +STEP: Creating a simple DaemonSet "daemon-set" +STEP: Check that daemon pods launch on every node of the cluster. 
+Sep 7 07:39:58.258: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 +Sep 7 07:39:58.258: INFO: Node 172.31.51.96 is running 0 daemon pod, expected 1 +Sep 7 07:39:59.337: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 +Sep 7 07:39:59.337: INFO: Node 172.31.51.96 is running 0 daemon pod, expected 1 +Sep 7 07:40:00.266: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 +Sep 7 07:40:00.266: INFO: Node 172.31.51.96 is running 0 daemon pod, expected 1 +Sep 7 07:40:01.264: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 +Sep 7 07:40:01.264: INFO: Node 172.31.51.96 is running 0 daemon pod, expected 1 +Sep 7 07:40:02.266: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 +Sep 7 07:40:02.266: INFO: Node 172.31.51.96 is running 0 daemon pod, expected 1 +Sep 7 07:40:03.282: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 +Sep 7 07:40:03.282: INFO: Node 172.31.51.96 is running 0 daemon pod, expected 1 +Sep 7 07:40:07.206: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 +Sep 7 07:40:07.206: INFO: Node 172.31.51.96 is running 0 daemon pod, expected 1 +Sep 7 07:40:07.267: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 1 +Sep 7 07:40:07.267: INFO: Node 172.31.51.96 is running 0 daemon pod, expected 1 +Sep 7 07:40:08.266: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 2 +Sep 7 07:40:08.266: INFO: Number of running nodes: 2, number of available pods: 2 in daemonset daemon-set +STEP: Set a daemon pod's phase to 'Failed', check that the daemon pod is revived. 
+Sep 7 07:40:08.288: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 1 +Sep 7 07:40:08.288: INFO: Node 172.31.51.96 is running 0 daemon pod, expected 1 +Sep 7 07:40:09.296: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 2 +Sep 7 07:40:09.296: INFO: Number of running nodes: 2, number of available pods: 2 in daemonset daemon-set +STEP: Wait for the failed daemon pod to be completely deleted. +[AfterEach] [sig-apps] Daemon set [Serial] + test/e2e/apps/daemon_set.go:110 +STEP: Deleting DaemonSet "daemon-set" +STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-4700, will wait for the garbage collector to delete the pods +Sep 7 07:40:09.355: INFO: Deleting DaemonSet.extensions daemon-set took: 3.343777ms +Sep 7 07:40:09.456: INFO: Terminating DaemonSet.extensions daemon-set pods took: 101.089379ms +Sep 7 07:40:13.963: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 +Sep 7 07:40:13.963: INFO: Number of running nodes: 0, number of available pods: 0 in daemonset daemon-set +Sep 7 07:40:13.967: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"resourceVersion":"1966"},"items":null} + +Sep 7 07:40:13.968: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1966"},"items":null} + +[AfterEach] [sig-apps] Daemon set [Serial] + test/e2e/framework/framework.go:188 +Sep 7 07:40:13.973: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "daemonsets-4700" for this suite. 
+ +• [SLOW TEST:15.807 seconds] +[sig-apps] Daemon set [Serial] +test/e2e/apps/framework.go:23 + should retry creating failed daemon pods [Conformance] + test/e2e/framework/framework.go:652 +------------------------------ +{"msg":"PASSED [sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]","total":356,"completed":1,"skipped":29,"failed":0} +[sig-apps] ReplicationController + should test the lifecycle of a ReplicationController [Conformance] + test/e2e/framework/framework.go:652 +[BeforeEach] [sig-apps] ReplicationController + test/e2e/framework/framework.go:187 +STEP: Creating a kubernetes client +Sep 7 07:40:13.978: INFO: >>> kubeConfig: /tmp/kubeconfig-1156948534 +STEP: Building a namespace api object, basename replication-controller +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[BeforeEach] [sig-apps] ReplicationController + test/e2e/apps/rc.go:56 +[It] should test the lifecycle of a ReplicationController [Conformance] + test/e2e/framework/framework.go:652 +STEP: creating a ReplicationController +STEP: waiting for RC to be added +STEP: waiting for available Replicas +STEP: patching ReplicationController +STEP: waiting for RC to be modified +STEP: patching ReplicationController status +STEP: waiting for RC to be modified +STEP: waiting for available Replicas +STEP: fetching ReplicationController status +STEP: patching ReplicationController scale +STEP: waiting for RC to be modified +STEP: waiting for ReplicationController's scale to be the max amount +STEP: fetching ReplicationController; ensuring that it's patched +STEP: updating ReplicationController status +STEP: waiting for RC to be modified +STEP: listing all ReplicationControllers +STEP: checking that ReplicationController has expected values +STEP: deleting ReplicationControllers by collection +STEP: waiting for ReplicationController to have a DELETED watchEvent +[AfterEach] 
[sig-apps] ReplicationController + test/e2e/framework/framework.go:188 +Sep 7 07:40:20.025: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "replication-controller-8983" for this suite. + +• [SLOW TEST:6.053 seconds] +[sig-apps] ReplicationController +test/e2e/apps/framework.go:23 + should test the lifecycle of a ReplicationController [Conformance] + test/e2e/framework/framework.go:652 +------------------------------ +{"msg":"PASSED [sig-apps] ReplicationController should test the lifecycle of a ReplicationController [Conformance]","total":356,"completed":2,"skipped":29,"failed":0} +SSSSSSSSSSSS +------------------------------ +[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + should include webhook resources in discovery documents [Conformance] + test/e2e/framework/framework.go:652 +[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/framework/framework.go:187 +STEP: Creating a kubernetes client +Sep 7 07:40:20.031: INFO: >>> kubeConfig: /tmp/kubeconfig-1156948534 +STEP: Building a namespace api object, basename webhook +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/apimachinery/webhook.go:89 +STEP: Setting up server cert +STEP: Create role binding to let webhook read extension-apiserver-authentication +STEP: Deploying the webhook pod +STEP: Wait for the deployment to be ready +Sep 7 07:40:20.521: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set +Sep 7 07:40:22.541: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2022, time.September, 7, 
7, 40, 20, 0, time.Local), LastTransitionTime:time.Date(2022, time.September, 7, 7, 40, 20, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2022, time.September, 7, 7, 40, 20, 0, time.Local), LastTransitionTime:time.Date(2022, time.September, 7, 7, 40, 20, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-68c7bd4684\" is progressing."}}, CollisionCount:(*int32)(nil)} +Sep 7 07:40:24.550: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2022, time.September, 7, 7, 40, 20, 0, time.Local), LastTransitionTime:time.Date(2022, time.September, 7, 7, 40, 20, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2022, time.September, 7, 7, 40, 20, 0, time.Local), LastTransitionTime:time.Date(2022, time.September, 7, 7, 40, 20, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-68c7bd4684\" is progressing."}}, CollisionCount:(*int32)(nil)} +Sep 7 07:40:26.545: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2022, time.September, 7, 7, 40, 20, 0, time.Local), LastTransitionTime:time.Date(2022, time.September, 7, 7, 40, 20, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, 
v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2022, time.September, 7, 7, 40, 20, 0, time.Local), LastTransitionTime:time.Date(2022, time.September, 7, 7, 40, 20, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-68c7bd4684\" is progressing."}}, CollisionCount:(*int32)(nil)} +STEP: Deploying the webhook service +STEP: Verifying the service has paired with the endpoint +Sep 7 07:40:29.558: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 +[It] should include webhook resources in discovery documents [Conformance] + test/e2e/framework/framework.go:652 +STEP: fetching the /apis discovery document +STEP: finding the admissionregistration.k8s.io API group in the /apis discovery document +STEP: finding the admissionregistration.k8s.io/v1 API group/version in the /apis discovery document +STEP: fetching the /apis/admissionregistration.k8s.io discovery document +STEP: finding the admissionregistration.k8s.io/v1 API group/version in the /apis/admissionregistration.k8s.io discovery document +STEP: fetching the /apis/admissionregistration.k8s.io/v1 discovery document +STEP: finding mutatingwebhookconfigurations and validatingwebhookconfigurations resources in the /apis/admissionregistration.k8s.io/v1 discovery document +[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/framework/framework.go:188 +Sep 7 07:40:29.565: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "webhook-6772" for this suite. +STEP: Destroying namespace "webhook-6772-markers" for this suite. 
+[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/apimachinery/webhook.go:104 + +• [SLOW TEST:9.594 seconds] +[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] +test/e2e/apimachinery/framework.go:23 + should include webhook resources in discovery documents [Conformance] + test/e2e/framework/framework.go:652 +------------------------------ +{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should include webhook resources in discovery documents [Conformance]","total":356,"completed":3,"skipped":41,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-api-machinery] ResourceQuota + should create a ResourceQuota and capture the life of a secret. [Conformance] + test/e2e/framework/framework.go:652 +[BeforeEach] [sig-api-machinery] ResourceQuota + test/e2e/framework/framework.go:187 +STEP: Creating a kubernetes client +Sep 7 07:40:29.626: INFO: >>> kubeConfig: /tmp/kubeconfig-1156948534 +STEP: Building a namespace api object, basename resourcequota +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[It] should create a ResourceQuota and capture the life of a secret. [Conformance] + test/e2e/framework/framework.go:652 +STEP: Discovering how many secrets are in namespace by default +STEP: Counting existing ResourceQuota +STEP: Creating a ResourceQuota +STEP: Ensuring resource quota status is calculated +STEP: Creating a Secret +STEP: Ensuring resource quota status captures secret creation +STEP: Deleting a secret +STEP: Ensuring resource quota status released usage +[AfterEach] [sig-api-machinery] ResourceQuota + test/e2e/framework/framework.go:188 +Sep 7 07:40:45.711: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "resourcequota-6424" for this suite. 
+ +• [SLOW TEST:16.092 seconds] +[sig-api-machinery] ResourceQuota +test/e2e/apimachinery/framework.go:23 + should create a ResourceQuota and capture the life of a secret. [Conformance] + test/e2e/framework/framework.go:652 +------------------------------ +{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a secret. [Conformance]","total":356,"completed":4,"skipped":72,"failed":0} +SSSSSS +------------------------------ +[sig-network] Service endpoints latency + should not be very high [Conformance] + test/e2e/framework/framework.go:652 +[BeforeEach] [sig-network] Service endpoints latency + test/e2e/framework/framework.go:187 +STEP: Creating a kubernetes client +Sep 7 07:40:45.718: INFO: >>> kubeConfig: /tmp/kubeconfig-1156948534 +STEP: Building a namespace api object, basename svc-latency +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[It] should not be very high [Conformance] + test/e2e/framework/framework.go:652 +Sep 7 07:40:45.736: INFO: >>> kubeConfig: /tmp/kubeconfig-1156948534 +STEP: creating replication controller svc-latency-rc in namespace svc-latency-2115 +I0907 07:40:45.740227 19 runners.go:193] Created replication controller with name: svc-latency-rc, namespace: svc-latency-2115, replica count: 1 +I0907 07:40:46.791542 19 runners.go:193] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady +I0907 07:40:47.792612 19 runners.go:193] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady +I0907 07:40:48.792820 19 runners.go:193] svc-latency-rc Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady +Sep 7 07:40:48.902: INFO: Created: latency-svc-crrs4 +Sep 7 07:40:48.912: INFO: Got endpoints: 
latency-svc-crrs4 [19.256263ms] +Sep 7 07:40:48.928: INFO: Created: latency-svc-9v2kv +Sep 7 07:40:48.932: INFO: Created: latency-svc-spfkf +Sep 7 07:40:48.942: INFO: Got endpoints: latency-svc-9v2kv [30.03696ms] +Sep 7 07:40:48.942: INFO: Got endpoints: latency-svc-spfkf [29.058313ms] +Sep 7 07:40:48.947: INFO: Created: latency-svc-pjbvz +Sep 7 07:40:48.957: INFO: Created: latency-svc-vqhj7 +Sep 7 07:40:48.959: INFO: Got endpoints: latency-svc-pjbvz [46.457944ms] +Sep 7 07:40:48.967: INFO: Got endpoints: latency-svc-vqhj7 [54.556755ms] +Sep 7 07:40:48.971: INFO: Created: latency-svc-dfhmp +Sep 7 07:40:48.980: INFO: Got endpoints: latency-svc-dfhmp [67.355053ms] +Sep 7 07:40:49.060: INFO: Created: latency-svc-d7pq6 +Sep 7 07:40:49.069: INFO: Got endpoints: latency-svc-d7pq6 [156.456054ms] +Sep 7 07:40:49.079: INFO: Created: latency-svc-f6xmd +Sep 7 07:40:49.080: INFO: Created: latency-svc-77mvh +Sep 7 07:40:49.080: INFO: Created: latency-svc-swzsd +Sep 7 07:40:49.080: INFO: Created: latency-svc-jpfld +Sep 7 07:40:49.080: INFO: Created: latency-svc-lb98p +Sep 7 07:40:49.080: INFO: Created: latency-svc-xs2nr +Sep 7 07:40:49.089: INFO: Created: latency-svc-lgng8 +Sep 7 07:40:49.089: INFO: Created: latency-svc-l9pnt +Sep 7 07:40:49.089: INFO: Created: latency-svc-dxcmc +Sep 7 07:40:49.090: INFO: Created: latency-svc-wzk7h +Sep 7 07:40:49.090: INFO: Created: latency-svc-5kvjk +Sep 7 07:40:49.090: INFO: Created: latency-svc-df767 +Sep 7 07:40:49.090: INFO: Created: latency-svc-wpvgd +Sep 7 07:40:49.090: INFO: Created: latency-svc-z7jxb +Sep 7 07:40:49.090: INFO: Got endpoints: latency-svc-xs2nr [177.39505ms] +Sep 7 07:40:49.097: INFO: Got endpoints: latency-svc-lgng8 [184.722983ms] +Sep 7 07:40:49.109: INFO: Created: latency-svc-nmp5k +Sep 7 07:40:49.114: INFO: Got endpoints: latency-svc-wzk7h [201.211097ms] +Sep 7 07:40:49.114: INFO: Got endpoints: latency-svc-l9pnt [201.595707ms] +Sep 7 07:40:49.120: INFO: Got endpoints: latency-svc-df767 [160.954779ms] +Sep 7 
07:40:49.120: INFO: Got endpoints: latency-svc-dxcmc [207.291099ms] +Sep 7 07:40:49.120: INFO: Got endpoints: latency-svc-5kvjk [207.619525ms] +Sep 7 07:40:49.120: INFO: Got endpoints: latency-svc-wpvgd [178.092784ms] +Sep 7 07:40:49.139: INFO: Got endpoints: latency-svc-z7jxb [196.869834ms] +Sep 7 07:40:49.140: INFO: Created: latency-svc-zkls9 +Sep 7 07:40:49.145: INFO: Got endpoints: latency-svc-77mvh [165.644201ms] +Sep 7 07:40:49.160: INFO: Created: latency-svc-mlfr6 +Sep 7 07:40:49.160: INFO: Got endpoints: latency-svc-lb98p [247.430412ms] +Sep 7 07:40:49.160: INFO: Got endpoints: latency-svc-f6xmd [192.708906ms] +Sep 7 07:40:49.160: INFO: Got endpoints: latency-svc-jpfld [247.785455ms] +Sep 7 07:40:49.160: INFO: Got endpoints: latency-svc-swzsd [247.714155ms] +Sep 7 07:40:49.162: INFO: Got endpoints: latency-svc-nmp5k [92.851321ms] +Sep 7 07:40:49.181: INFO: Created: latency-svc-nqlnr +Sep 7 07:40:49.184: INFO: Created: latency-svc-7pr59 +Sep 7 07:40:49.194: INFO: Got endpoints: latency-svc-7pr59 [79.530956ms] +Sep 7 07:40:49.194: INFO: Got endpoints: latency-svc-zkls9 [103.540382ms] +Sep 7 07:40:49.194: INFO: Got endpoints: latency-svc-mlfr6 [96.618987ms] +Sep 7 07:40:49.194: INFO: Got endpoints: latency-svc-nqlnr [73.922524ms] +Sep 7 07:40:49.203: INFO: Created: latency-svc-9gfcd +Sep 7 07:40:49.209: INFO: Got endpoints: latency-svc-9gfcd [94.567821ms] +Sep 7 07:40:49.218: INFO: Created: latency-svc-cqpsm +Sep 7 07:40:49.224: INFO: Got endpoints: latency-svc-cqpsm [104.776105ms] +Sep 7 07:40:49.279: INFO: Created: latency-svc-n64gv +Sep 7 07:40:49.284: INFO: Created: latency-svc-gtfzs +Sep 7 07:40:49.284: INFO: Created: latency-svc-qlrb8 +Sep 7 07:40:49.284: INFO: Created: latency-svc-sdctt +Sep 7 07:40:49.284: INFO: Created: latency-svc-mhlwk +Sep 7 07:40:49.294: INFO: Created: latency-svc-6tjkz +Sep 7 07:40:49.294: INFO: Created: latency-svc-5zf5n +Sep 7 07:40:49.295: INFO: Created: latency-svc-g6djw +Sep 7 07:40:49.295: INFO: Created: latency-svc-8vnzc 
+Sep 7 07:40:49.295: INFO: Created: latency-svc-j47kf +Sep 7 07:40:49.295: INFO: Created: latency-svc-tfwfl +Sep 7 07:40:49.295: INFO: Created: latency-svc-b2r8z +Sep 7 07:40:49.313: INFO: Got endpoints: latency-svc-qlrb8 [104.61881ms] +Sep 7 07:40:49.314: INFO: Created: latency-svc-7p8tf +Sep 7 07:40:49.314: INFO: Created: latency-svc-q54pc +Sep 7 07:40:49.314: INFO: Got endpoints: latency-svc-6tjkz [90.063599ms] +Sep 7 07:40:49.315: INFO: Created: latency-svc-dvx7t +Sep 7 07:40:49.325: INFO: Got endpoints: latency-svc-5zf5n [204.961338ms] +Sep 7 07:40:49.325: INFO: Got endpoints: latency-svc-n64gv [165.206872ms] +Sep 7 07:40:49.326: INFO: Got endpoints: latency-svc-g6djw [206.272007ms] +Sep 7 07:40:49.340: INFO: Got endpoints: latency-svc-j47kf [195.035455ms] +Sep 7 07:40:49.341: INFO: Got endpoints: latency-svc-8vnzc [201.262416ms] +Sep 7 07:40:49.348: INFO: Got endpoints: latency-svc-b2r8z [188.071891ms] +Sep 7 07:40:49.349: INFO: Got endpoints: latency-svc-tfwfl [188.416081ms] +Sep 7 07:40:49.349: INFO: Got endpoints: latency-svc-sdctt [155.285571ms] +Sep 7 07:40:49.354: INFO: Created: latency-svc-bdn7l +Sep 7 07:40:49.362: INFO: Got endpoints: latency-svc-mhlwk [168.092817ms] +Sep 7 07:40:49.367: INFO: Created: latency-svc-zl7c4 +Sep 7 07:40:49.376: INFO: Created: latency-svc-vkw28 +Sep 7 07:40:49.382: INFO: Created: latency-svc-kxhb8 +Sep 7 07:40:49.390: INFO: Created: latency-svc-jnktp +Sep 7 07:40:49.397: INFO: Created: latency-svc-dxgxl +Sep 7 07:40:49.402: INFO: Created: latency-svc-vmnh4 +Sep 7 07:40:49.412: INFO: Got endpoints: latency-svc-gtfzs [218.0951ms] +Sep 7 07:40:49.412: INFO: Created: latency-svc-jnlgv +Sep 7 07:40:49.422: INFO: Created: latency-svc-wkkgv +Sep 7 07:40:49.427: INFO: Created: latency-svc-mb9cw +Sep 7 07:40:49.433: INFO: Created: latency-svc-rc7dc +Sep 7 07:40:49.439: INFO: Created: latency-svc-9jprh +Sep 7 07:40:49.478: INFO: Got endpoints: latency-svc-dvx7t [316.09547ms] +Sep 7 07:40:49.497: INFO: Created: latency-svc-l2cwh 
+Sep 7 07:40:49.509: INFO: Got endpoints: latency-svc-7p8tf [348.744676ms] +Sep 7 07:40:49.521: INFO: Created: latency-svc-ltv2b +Sep 7 07:40:49.561: INFO: Got endpoints: latency-svc-q54pc [367.260499ms] +Sep 7 07:40:49.570: INFO: Created: latency-svc-4t54n +Sep 7 07:40:49.612: INFO: Got endpoints: latency-svc-bdn7l [298.255011ms] +Sep 7 07:40:49.622: INFO: Created: latency-svc-k55mb +Sep 7 07:40:49.661: INFO: Got endpoints: latency-svc-zl7c4 [346.855071ms] +Sep 7 07:40:49.669: INFO: Created: latency-svc-qr8cv +Sep 7 07:40:49.712: INFO: Got endpoints: latency-svc-vkw28 [387.530874ms] +Sep 7 07:40:49.725: INFO: Created: latency-svc-ppc6k +Sep 7 07:40:49.759: INFO: Got endpoints: latency-svc-kxhb8 [432.834787ms] +Sep 7 07:40:49.766: INFO: Created: latency-svc-nl99x +Sep 7 07:40:49.809: INFO: Got endpoints: latency-svc-jnktp [483.588502ms] +Sep 7 07:40:49.818: INFO: Created: latency-svc-6rnbx +Sep 7 07:40:49.862: INFO: Got endpoints: latency-svc-dxgxl [522.062183ms] +Sep 7 07:40:49.870: INFO: Created: latency-svc-m4thp +Sep 7 07:40:49.914: INFO: Got endpoints: latency-svc-vmnh4 [573.711417ms] +Sep 7 07:40:49.924: INFO: Created: latency-svc-fjfxm +Sep 7 07:40:49.962: INFO: Got endpoints: latency-svc-jnlgv [613.435494ms] +Sep 7 07:40:49.971: INFO: Created: latency-svc-z9tvx +Sep 7 07:40:50.014: INFO: Got endpoints: latency-svc-wkkgv [665.58329ms] +Sep 7 07:40:50.029: INFO: Created: latency-svc-2sfgj +Sep 7 07:40:50.067: INFO: Got endpoints: latency-svc-mb9cw [718.803086ms] +Sep 7 07:40:50.078: INFO: Created: latency-svc-45vzk +Sep 7 07:40:50.113: INFO: Got endpoints: latency-svc-rc7dc [751.096443ms] +Sep 7 07:40:50.122: INFO: Created: latency-svc-fk6cm +Sep 7 07:40:50.158: INFO: Got endpoints: latency-svc-9jprh [746.550303ms] +Sep 7 07:40:50.165: INFO: Created: latency-svc-vblkb +Sep 7 07:40:50.212: INFO: Got endpoints: latency-svc-l2cwh [733.613095ms] +Sep 7 07:40:50.223: INFO: Created: latency-svc-5vzhx +Sep 7 07:40:50.260: INFO: Got endpoints: latency-svc-ltv2b 
[750.98406ms] +Sep 7 07:40:50.273: INFO: Created: latency-svc-dlxwp +Sep 7 07:40:50.317: INFO: Got endpoints: latency-svc-4t54n [755.493137ms] +Sep 7 07:40:50.326: INFO: Created: latency-svc-h265h +Sep 7 07:40:50.359: INFO: Got endpoints: latency-svc-k55mb [747.392172ms] +Sep 7 07:40:50.367: INFO: Created: latency-svc-hnrwf +Sep 7 07:40:50.410: INFO: Got endpoints: latency-svc-qr8cv [748.629968ms] +Sep 7 07:40:50.423: INFO: Created: latency-svc-n798q +Sep 7 07:40:50.465: INFO: Got endpoints: latency-svc-ppc6k [752.326105ms] +Sep 7 07:40:50.484: INFO: Created: latency-svc-nr4gv +Sep 7 07:40:50.508: INFO: Got endpoints: latency-svc-nl99x [749.221127ms] +Sep 7 07:40:50.515: INFO: Created: latency-svc-kd8vz +Sep 7 07:40:50.559: INFO: Got endpoints: latency-svc-6rnbx [749.997777ms] +Sep 7 07:40:50.566: INFO: Created: latency-svc-6xcmx +Sep 7 07:40:50.611: INFO: Got endpoints: latency-svc-m4thp [748.828075ms] +Sep 7 07:40:50.621: INFO: Created: latency-svc-9v5bv +Sep 7 07:40:50.658: INFO: Got endpoints: latency-svc-fjfxm [743.929679ms] +Sep 7 07:40:50.670: INFO: Created: latency-svc-vn2dk +Sep 7 07:40:50.710: INFO: Got endpoints: latency-svc-z9tvx [748.288512ms] +Sep 7 07:40:50.718: INFO: Created: latency-svc-78zr7 +Sep 7 07:40:50.761: INFO: Got endpoints: latency-svc-2sfgj [746.399554ms] +Sep 7 07:40:50.771: INFO: Created: latency-svc-5p4ks +Sep 7 07:40:50.811: INFO: Got endpoints: latency-svc-45vzk [743.769813ms] +Sep 7 07:40:50.820: INFO: Created: latency-svc-9vbct +Sep 7 07:40:50.865: INFO: Got endpoints: latency-svc-fk6cm [751.599361ms] +Sep 7 07:40:50.892: INFO: Created: latency-svc-jxpws +Sep 7 07:40:50.914: INFO: Got endpoints: latency-svc-vblkb [755.807682ms] +Sep 7 07:40:50.924: INFO: Created: latency-svc-jwmvg +Sep 7 07:40:50.961: INFO: Got endpoints: latency-svc-5vzhx [749.268457ms] +Sep 7 07:40:50.986: INFO: Created: latency-svc-vztxx +Sep 7 07:40:51.014: INFO: Got endpoints: latency-svc-dlxwp [754.379695ms] +Sep 7 07:40:51.039: INFO: Created: 
latency-svc-2n2nx +Sep 7 07:40:51.068: INFO: Got endpoints: latency-svc-h265h [750.764795ms] +Sep 7 07:40:51.080: INFO: Created: latency-svc-zn4d4 +Sep 7 07:40:51.111: INFO: Got endpoints: latency-svc-hnrwf [751.752337ms] +Sep 7 07:40:51.120: INFO: Created: latency-svc-mzvl4 +Sep 7 07:40:51.162: INFO: Got endpoints: latency-svc-n798q [751.601949ms] +Sep 7 07:40:51.172: INFO: Created: latency-svc-84nx6 +Sep 7 07:40:51.216: INFO: Got endpoints: latency-svc-nr4gv [750.977992ms] +Sep 7 07:40:51.222: INFO: Created: latency-svc-vmqcw +Sep 7 07:40:51.259: INFO: Got endpoints: latency-svc-kd8vz [750.839882ms] +Sep 7 07:40:51.267: INFO: Created: latency-svc-ppzz2 +Sep 7 07:40:51.309: INFO: Got endpoints: latency-svc-6xcmx [750.393182ms] +Sep 7 07:40:51.319: INFO: Created: latency-svc-kf6p2 +Sep 7 07:40:51.359: INFO: Got endpoints: latency-svc-9v5bv [747.383227ms] +Sep 7 07:40:51.372: INFO: Created: latency-svc-bqc6r +Sep 7 07:40:51.408: INFO: Got endpoints: latency-svc-vn2dk [750.191875ms] +Sep 7 07:40:51.417: INFO: Created: latency-svc-98ptw +Sep 7 07:40:51.464: INFO: Got endpoints: latency-svc-78zr7 [753.340659ms] +Sep 7 07:40:51.472: INFO: Created: latency-svc-qm5p4 +Sep 7 07:40:51.515: INFO: Got endpoints: latency-svc-5p4ks [754.136496ms] +Sep 7 07:40:51.523: INFO: Created: latency-svc-l7xfs +Sep 7 07:40:51.559: INFO: Got endpoints: latency-svc-9vbct [747.712982ms] +Sep 7 07:40:51.575: INFO: Created: latency-svc-fcb4j +Sep 7 07:40:51.609: INFO: Got endpoints: latency-svc-jxpws [743.904515ms] +Sep 7 07:40:51.617: INFO: Created: latency-svc-n2rsn +Sep 7 07:40:51.662: INFO: Got endpoints: latency-svc-jwmvg [747.490564ms] +Sep 7 07:40:51.674: INFO: Created: latency-svc-2l6vb +Sep 7 07:40:51.709: INFO: Got endpoints: latency-svc-vztxx [748.042078ms] +Sep 7 07:40:51.719: INFO: Created: latency-svc-5j5t4 +Sep 7 07:40:51.758: INFO: Got endpoints: latency-svc-2n2nx [744.036519ms] +Sep 7 07:40:51.765: INFO: Created: latency-svc-2cslk +Sep 7 07:40:51.808: INFO: Got endpoints: 
latency-svc-zn4d4 [740.027997ms] +Sep 7 07:40:51.818: INFO: Created: latency-svc-6x99x +Sep 7 07:40:51.858: INFO: Got endpoints: latency-svc-mzvl4 [747.188367ms] +Sep 7 07:40:51.873: INFO: Created: latency-svc-c4c79 +Sep 7 07:40:51.909: INFO: Got endpoints: latency-svc-84nx6 [747.577063ms] +Sep 7 07:40:51.920: INFO: Created: latency-svc-g287t +Sep 7 07:40:51.961: INFO: Got endpoints: latency-svc-vmqcw [744.923225ms] +Sep 7 07:40:51.974: INFO: Created: latency-svc-pqwz2 +Sep 7 07:40:52.011: INFO: Got endpoints: latency-svc-ppzz2 [752.089788ms] +Sep 7 07:40:52.022: INFO: Created: latency-svc-nvlg7 +Sep 7 07:40:52.064: INFO: Got endpoints: latency-svc-kf6p2 [755.137239ms] +Sep 7 07:40:52.074: INFO: Created: latency-svc-dhjw7 +Sep 7 07:40:52.112: INFO: Got endpoints: latency-svc-bqc6r [753.535962ms] +Sep 7 07:40:52.125: INFO: Created: latency-svc-f7tjz +Sep 7 07:40:52.159: INFO: Got endpoints: latency-svc-98ptw [750.859905ms] +Sep 7 07:40:52.173: INFO: Created: latency-svc-kvpcn +Sep 7 07:40:52.208: INFO: Got endpoints: latency-svc-qm5p4 [744.609009ms] +Sep 7 07:40:52.219: INFO: Created: latency-svc-z8bhg +Sep 7 07:40:52.258: INFO: Got endpoints: latency-svc-l7xfs [743.154616ms] +Sep 7 07:40:52.268: INFO: Created: latency-svc-4nrd9 +Sep 7 07:40:52.312: INFO: Got endpoints: latency-svc-fcb4j [753.157608ms] +Sep 7 07:40:52.323: INFO: Created: latency-svc-j2k4j +Sep 7 07:40:52.359: INFO: Got endpoints: latency-svc-n2rsn [750.445788ms] +Sep 7 07:40:52.392: INFO: Created: latency-svc-bfcn8 +Sep 7 07:40:52.412: INFO: Got endpoints: latency-svc-2l6vb [750.440113ms] +Sep 7 07:40:52.425: INFO: Created: latency-svc-b9fnf +Sep 7 07:40:52.462: INFO: Got endpoints: latency-svc-5j5t4 [752.84845ms] +Sep 7 07:40:52.475: INFO: Created: latency-svc-5htxl +Sep 7 07:40:52.509: INFO: Got endpoints: latency-svc-2cslk [750.925793ms] +Sep 7 07:40:52.519: INFO: Created: latency-svc-wp9cv +Sep 7 07:40:52.558: INFO: Got endpoints: latency-svc-6x99x [750.499668ms] +Sep 7 07:40:52.574: INFO: 
Created: latency-svc-kmxhs +Sep 7 07:40:52.617: INFO: Got endpoints: latency-svc-c4c79 [759.301045ms] +Sep 7 07:40:52.627: INFO: Created: latency-svc-7j7j9 +Sep 7 07:40:52.659: INFO: Got endpoints: latency-svc-g287t [749.863917ms] +Sep 7 07:40:52.667: INFO: Created: latency-svc-wsfnn +Sep 7 07:40:52.707: INFO: Got endpoints: latency-svc-pqwz2 [746.150997ms] +Sep 7 07:40:52.721: INFO: Created: latency-svc-ff7pk +Sep 7 07:40:52.761: INFO: Got endpoints: latency-svc-nvlg7 [750.079395ms] +Sep 7 07:40:52.780: INFO: Created: latency-svc-mfrsf +Sep 7 07:40:52.810: INFO: Got endpoints: latency-svc-dhjw7 [746.084312ms] +Sep 7 07:40:52.822: INFO: Created: latency-svc-stnnb +Sep 7 07:40:52.859: INFO: Got endpoints: latency-svc-f7tjz [746.74005ms] +Sep 7 07:40:52.876: INFO: Created: latency-svc-lkfq8 +Sep 7 07:40:52.909: INFO: Got endpoints: latency-svc-kvpcn [749.918473ms] +Sep 7 07:40:52.922: INFO: Created: latency-svc-dqff6 +Sep 7 07:40:52.958: INFO: Got endpoints: latency-svc-z8bhg [750.036663ms] +Sep 7 07:40:52.965: INFO: Created: latency-svc-pkxww +Sep 7 07:40:53.013: INFO: Got endpoints: latency-svc-4nrd9 [755.1789ms] +Sep 7 07:40:53.021: INFO: Created: latency-svc-kbk8x +Sep 7 07:40:53.059: INFO: Got endpoints: latency-svc-j2k4j [747.059833ms] +Sep 7 07:40:53.071: INFO: Created: latency-svc-whzkw +Sep 7 07:40:53.108: INFO: Got endpoints: latency-svc-bfcn8 [748.376511ms] +Sep 7 07:40:53.119: INFO: Created: latency-svc-csbn2 +Sep 7 07:40:53.164: INFO: Got endpoints: latency-svc-b9fnf [752.101251ms] +Sep 7 07:40:53.175: INFO: Created: latency-svc-g777d +Sep 7 07:40:53.215: INFO: Got endpoints: latency-svc-5htxl [752.335667ms] +Sep 7 07:40:53.224: INFO: Created: latency-svc-b6pjw +Sep 7 07:40:53.260: INFO: Got endpoints: latency-svc-wp9cv [750.258157ms] +Sep 7 07:40:53.279: INFO: Created: latency-svc-lbdjs +Sep 7 07:40:53.309: INFO: Got endpoints: latency-svc-kmxhs [751.220379ms] +Sep 7 07:40:53.318: INFO: Created: latency-svc-cvfrj +Sep 7 07:40:53.360: INFO: Got 
endpoints: latency-svc-7j7j9 [742.571472ms] +Sep 7 07:40:53.368: INFO: Created: latency-svc-ww6cd +Sep 7 07:40:53.415: INFO: Got endpoints: latency-svc-wsfnn [756.127171ms] +Sep 7 07:40:53.433: INFO: Created: latency-svc-prz5c +Sep 7 07:40:53.461: INFO: Got endpoints: latency-svc-ff7pk [753.986641ms] +Sep 7 07:40:53.468: INFO: Created: latency-svc-4rhrd +Sep 7 07:40:53.507: INFO: Got endpoints: latency-svc-mfrsf [745.118186ms] +Sep 7 07:40:53.519: INFO: Created: latency-svc-wfr2f +Sep 7 07:40:53.559: INFO: Got endpoints: latency-svc-stnnb [748.924401ms] +Sep 7 07:40:53.570: INFO: Created: latency-svc-hwwbf +Sep 7 07:40:53.609: INFO: Got endpoints: latency-svc-lkfq8 [750.079389ms] +Sep 7 07:40:53.617: INFO: Created: latency-svc-rltrx +Sep 7 07:40:53.659: INFO: Got endpoints: latency-svc-dqff6 [749.22753ms] +Sep 7 07:40:53.668: INFO: Created: latency-svc-l4rff +Sep 7 07:40:53.713: INFO: Got endpoints: latency-svc-pkxww [754.901714ms] +Sep 7 07:40:53.720: INFO: Created: latency-svc-klqk2 +Sep 7 07:40:53.765: INFO: Got endpoints: latency-svc-kbk8x [751.050352ms] +Sep 7 07:40:53.772: INFO: Created: latency-svc-kgqjr +Sep 7 07:40:53.813: INFO: Got endpoints: latency-svc-whzkw [753.405715ms] +Sep 7 07:40:53.828: INFO: Created: latency-svc-qmldl +Sep 7 07:40:53.859: INFO: Got endpoints: latency-svc-csbn2 [750.882332ms] +Sep 7 07:40:53.894: INFO: Created: latency-svc-txcjh +Sep 7 07:40:53.909: INFO: Got endpoints: latency-svc-g777d [745.016617ms] +Sep 7 07:40:53.917: INFO: Created: latency-svc-c2lgm +Sep 7 07:40:53.961: INFO: Got endpoints: latency-svc-b6pjw [746.63654ms] +Sep 7 07:40:53.973: INFO: Created: latency-svc-pgpxd +Sep 7 07:40:54.010: INFO: Got endpoints: latency-svc-lbdjs [750.854015ms] +Sep 7 07:40:54.024: INFO: Created: latency-svc-pzg7h +Sep 7 07:40:54.060: INFO: Got endpoints: latency-svc-cvfrj [750.78838ms] +Sep 7 07:40:54.071: INFO: Created: latency-svc-l6nqc +Sep 7 07:40:54.108: INFO: Got endpoints: latency-svc-ww6cd [748.255243ms] +Sep 7 07:40:54.116: 
INFO: Created: latency-svc-vddkn +Sep 7 07:40:54.168: INFO: Got endpoints: latency-svc-prz5c [752.571453ms] +Sep 7 07:40:54.176: INFO: Created: latency-svc-wg4nw +Sep 7 07:40:54.211: INFO: Got endpoints: latency-svc-4rhrd [749.937001ms] +Sep 7 07:40:54.220: INFO: Created: latency-svc-kb844 +Sep 7 07:40:54.261: INFO: Got endpoints: latency-svc-wfr2f [754.233862ms] +Sep 7 07:40:54.271: INFO: Created: latency-svc-mf2p6 +Sep 7 07:40:54.310: INFO: Got endpoints: latency-svc-hwwbf [750.9601ms] +Sep 7 07:40:54.321: INFO: Created: latency-svc-s8t69 +Sep 7 07:40:54.360: INFO: Got endpoints: latency-svc-rltrx [750.386955ms] +Sep 7 07:40:54.369: INFO: Created: latency-svc-btsx2 +Sep 7 07:40:54.410: INFO: Got endpoints: latency-svc-l4rff [751.592957ms] +Sep 7 07:40:54.418: INFO: Created: latency-svc-9wrb7 +Sep 7 07:40:54.459: INFO: Got endpoints: latency-svc-klqk2 [745.998576ms] +Sep 7 07:40:54.470: INFO: Created: latency-svc-6psf6 +Sep 7 07:40:54.517: INFO: Got endpoints: latency-svc-kgqjr [752.67952ms] +Sep 7 07:40:54.527: INFO: Created: latency-svc-r2nrd +Sep 7 07:40:54.559: INFO: Got endpoints: latency-svc-qmldl [746.520894ms] +Sep 7 07:40:54.572: INFO: Created: latency-svc-2tfdg +Sep 7 07:40:54.611: INFO: Got endpoints: latency-svc-txcjh [752.060708ms] +Sep 7 07:40:54.619: INFO: Created: latency-svc-n5ptd +Sep 7 07:40:54.658: INFO: Got endpoints: latency-svc-c2lgm [748.30907ms] +Sep 7 07:40:54.667: INFO: Created: latency-svc-9j526 +Sep 7 07:40:54.711: INFO: Got endpoints: latency-svc-pgpxd [749.564888ms] +Sep 7 07:40:54.722: INFO: Created: latency-svc-jn7rf +Sep 7 07:40:54.761: INFO: Got endpoints: latency-svc-pzg7h [750.656579ms] +Sep 7 07:40:54.772: INFO: Created: latency-svc-h6sxn +Sep 7 07:40:54.810: INFO: Got endpoints: latency-svc-l6nqc [749.522281ms] +Sep 7 07:40:54.823: INFO: Created: latency-svc-scc2q +Sep 7 07:40:54.862: INFO: Got endpoints: latency-svc-vddkn [753.908413ms] +Sep 7 07:40:54.876: INFO: Created: latency-svc-f4nnr +Sep 7 07:40:54.913: INFO: Got 
endpoints: latency-svc-wg4nw [745.359054ms] +Sep 7 07:40:54.932: INFO: Created: latency-svc-fsgvn +Sep 7 07:40:54.961: INFO: Got endpoints: latency-svc-kb844 [749.709729ms] +Sep 7 07:40:54.975: INFO: Created: latency-svc-c9lbv +Sep 7 07:40:55.014: INFO: Got endpoints: latency-svc-mf2p6 [753.059277ms] +Sep 7 07:40:55.026: INFO: Created: latency-svc-8wsc2 +Sep 7 07:40:55.061: INFO: Got endpoints: latency-svc-s8t69 [750.959817ms] +Sep 7 07:40:55.081: INFO: Created: latency-svc-bp48s +Sep 7 07:40:55.108: INFO: Got endpoints: latency-svc-btsx2 [748.683859ms] +Sep 7 07:40:55.116: INFO: Created: latency-svc-wqr9b +Sep 7 07:40:55.162: INFO: Got endpoints: latency-svc-9wrb7 [751.821399ms] +Sep 7 07:40:55.175: INFO: Created: latency-svc-wkqbv +Sep 7 07:40:55.211: INFO: Got endpoints: latency-svc-6psf6 [751.995503ms] +Sep 7 07:40:55.221: INFO: Created: latency-svc-b6mxl +Sep 7 07:40:55.260: INFO: Got endpoints: latency-svc-r2nrd [743.083756ms] +Sep 7 07:40:55.268: INFO: Created: latency-svc-2m68f +Sep 7 07:40:55.319: INFO: Got endpoints: latency-svc-2tfdg [759.176211ms] +Sep 7 07:40:55.327: INFO: Created: latency-svc-bv8zx +Sep 7 07:40:55.360: INFO: Got endpoints: latency-svc-n5ptd [748.723436ms] +Sep 7 07:40:55.367: INFO: Created: latency-svc-2d768 +Sep 7 07:40:55.412: INFO: Got endpoints: latency-svc-9j526 [754.089529ms] +Sep 7 07:40:55.421: INFO: Created: latency-svc-fq4p9 +Sep 7 07:40:55.466: INFO: Got endpoints: latency-svc-jn7rf [755.226162ms] +Sep 7 07:40:55.474: INFO: Created: latency-svc-4tt7f +Sep 7 07:40:55.512: INFO: Got endpoints: latency-svc-h6sxn [750.545314ms] +Sep 7 07:40:55.530: INFO: Created: latency-svc-5glz6 +Sep 7 07:40:55.572: INFO: Got endpoints: latency-svc-scc2q [762.479278ms] +Sep 7 07:40:55.581: INFO: Created: latency-svc-v95vn +Sep 7 07:40:55.619: INFO: Got endpoints: latency-svc-f4nnr [756.520861ms] +Sep 7 07:40:55.632: INFO: Created: latency-svc-fczr9 +Sep 7 07:40:55.660: INFO: Got endpoints: latency-svc-fsgvn [746.60053ms] +Sep 7 07:40:55.668: 
INFO: Created: latency-svc-zjnhm +Sep 7 07:40:55.714: INFO: Got endpoints: latency-svc-c9lbv [753.748034ms] +Sep 7 07:40:55.728: INFO: Created: latency-svc-prwf5 +Sep 7 07:40:55.761: INFO: Got endpoints: latency-svc-8wsc2 [746.402336ms] +Sep 7 07:40:55.768: INFO: Created: latency-svc-8s7th +Sep 7 07:40:55.809: INFO: Got endpoints: latency-svc-bp48s [747.881682ms] +Sep 7 07:40:55.826: INFO: Created: latency-svc-bchql +Sep 7 07:40:55.858: INFO: Got endpoints: latency-svc-wqr9b [749.442903ms] +Sep 7 07:40:55.867: INFO: Created: latency-svc-8gclq +Sep 7 07:40:55.911: INFO: Got endpoints: latency-svc-wkqbv [749.290736ms] +Sep 7 07:40:55.921: INFO: Created: latency-svc-4nvht +Sep 7 07:40:55.960: INFO: Got endpoints: latency-svc-b6mxl [748.926115ms] +Sep 7 07:40:55.973: INFO: Created: latency-svc-w6kjf +Sep 7 07:40:56.013: INFO: Got endpoints: latency-svc-2m68f [752.650537ms] +Sep 7 07:40:56.024: INFO: Created: latency-svc-kgnxc +Sep 7 07:40:56.059: INFO: Got endpoints: latency-svc-bv8zx [740.698048ms] +Sep 7 07:40:56.068: INFO: Created: latency-svc-x9f25 +Sep 7 07:40:56.110: INFO: Got endpoints: latency-svc-2d768 [750.070145ms] +Sep 7 07:40:56.120: INFO: Created: latency-svc-5tclw +Sep 7 07:40:56.160: INFO: Got endpoints: latency-svc-fq4p9 [747.824713ms] +Sep 7 07:40:56.168: INFO: Created: latency-svc-hm4sc +Sep 7 07:40:56.210: INFO: Got endpoints: latency-svc-4tt7f [743.5416ms] +Sep 7 07:40:56.224: INFO: Created: latency-svc-n6flc +Sep 7 07:40:56.262: INFO: Got endpoints: latency-svc-5glz6 [749.83275ms] +Sep 7 07:40:56.271: INFO: Created: latency-svc-89m94 +Sep 7 07:40:56.310: INFO: Got endpoints: latency-svc-v95vn [738.13178ms] +Sep 7 07:40:56.322: INFO: Created: latency-svc-b97s4 +Sep 7 07:40:56.358: INFO: Got endpoints: latency-svc-fczr9 [739.098783ms] +Sep 7 07:40:56.368: INFO: Created: latency-svc-k55nm +Sep 7 07:40:56.409: INFO: Got endpoints: latency-svc-zjnhm [749.392974ms] +Sep 7 07:40:56.423: INFO: Created: latency-svc-fngkj +Sep 7 07:40:56.461: INFO: Got 
endpoints: latency-svc-prwf5 [746.962181ms] +Sep 7 07:40:56.472: INFO: Created: latency-svc-gjpxz +Sep 7 07:40:56.511: INFO: Got endpoints: latency-svc-8s7th [750.420883ms] +Sep 7 07:40:56.519: INFO: Created: latency-svc-jbdkg +Sep 7 07:40:56.560: INFO: Got endpoints: latency-svc-bchql [751.068359ms] +Sep 7 07:40:56.577: INFO: Created: latency-svc-pnchh +Sep 7 07:40:56.610: INFO: Got endpoints: latency-svc-8gclq [752.546017ms] +Sep 7 07:40:56.620: INFO: Created: latency-svc-nz9xj +Sep 7 07:40:56.662: INFO: Got endpoints: latency-svc-4nvht [750.980566ms] +Sep 7 07:40:56.675: INFO: Created: latency-svc-wf8tr +Sep 7 07:40:56.711: INFO: Got endpoints: latency-svc-w6kjf [750.969555ms] +Sep 7 07:40:56.719: INFO: Created: latency-svc-b4zxv +Sep 7 07:40:56.758: INFO: Got endpoints: latency-svc-kgnxc [744.965473ms] +Sep 7 07:40:56.808: INFO: Got endpoints: latency-svc-x9f25 [749.030697ms] +Sep 7 07:40:56.862: INFO: Got endpoints: latency-svc-5tclw [752.764801ms] +Sep 7 07:40:56.909: INFO: Got endpoints: latency-svc-hm4sc [749.531445ms] +Sep 7 07:40:56.967: INFO: Got endpoints: latency-svc-n6flc [757.150281ms] +Sep 7 07:40:57.020: INFO: Got endpoints: latency-svc-89m94 [758.850192ms] +Sep 7 07:40:57.066: INFO: Got endpoints: latency-svc-b97s4 [756.028726ms] +Sep 7 07:40:57.142: INFO: Got endpoints: latency-svc-k55nm [784.38087ms] +Sep 7 07:40:57.165: INFO: Got endpoints: latency-svc-fngkj [755.851798ms] +Sep 7 07:40:57.211: INFO: Got endpoints: latency-svc-gjpxz [749.987241ms] +Sep 7 07:40:57.265: INFO: Got endpoints: latency-svc-jbdkg [754.237614ms] +Sep 7 07:40:57.313: INFO: Got endpoints: latency-svc-pnchh [752.712061ms] +Sep 7 07:40:57.361: INFO: Got endpoints: latency-svc-nz9xj [750.913296ms] +Sep 7 07:40:57.412: INFO: Got endpoints: latency-svc-wf8tr [749.907175ms] +Sep 7 07:40:57.458: INFO: Got endpoints: latency-svc-b4zxv [746.463656ms] +Sep 7 07:40:57.458: INFO: Latencies: [29.058313ms 30.03696ms 46.457944ms 54.556755ms 67.355053ms 73.922524ms 79.530956ms 
90.063599ms 92.851321ms 94.567821ms 96.618987ms 103.540382ms 104.61881ms 104.776105ms 155.285571ms 156.456054ms 160.954779ms 165.206872ms 165.644201ms 168.092817ms 177.39505ms 178.092784ms 184.722983ms 188.071891ms 188.416081ms 192.708906ms 195.035455ms 196.869834ms 201.211097ms 201.262416ms 201.595707ms 204.961338ms 206.272007ms 207.291099ms 207.619525ms 218.0951ms 247.430412ms 247.714155ms 247.785455ms 298.255011ms 316.09547ms 346.855071ms 348.744676ms 367.260499ms 387.530874ms 432.834787ms 483.588502ms 522.062183ms 573.711417ms 613.435494ms 665.58329ms 718.803086ms 733.613095ms 738.13178ms 739.098783ms 740.027997ms 740.698048ms 742.571472ms 743.083756ms 743.154616ms 743.5416ms 743.769813ms 743.904515ms 743.929679ms 744.036519ms 744.609009ms 744.923225ms 744.965473ms 745.016617ms 745.118186ms 745.359054ms 745.998576ms 746.084312ms 746.150997ms 746.399554ms 746.402336ms 746.463656ms 746.520894ms 746.550303ms 746.60053ms 746.63654ms 746.74005ms 746.962181ms 747.059833ms 747.188367ms 747.383227ms 747.392172ms 747.490564ms 747.577063ms 747.712982ms 747.824713ms 747.881682ms 748.042078ms 748.255243ms 748.288512ms 748.30907ms 748.376511ms 748.629968ms 748.683859ms 748.723436ms 748.828075ms 748.924401ms 748.926115ms 749.030697ms 749.221127ms 749.22753ms 749.268457ms 749.290736ms 749.392974ms 749.442903ms 749.522281ms 749.531445ms 749.564888ms 749.709729ms 749.83275ms 749.863917ms 749.907175ms 749.918473ms 749.937001ms 749.987241ms 749.997777ms 750.036663ms 750.070145ms 750.079389ms 750.079395ms 750.191875ms 750.258157ms 750.386955ms 750.393182ms 750.420883ms 750.440113ms 750.445788ms 750.499668ms 750.545314ms 750.656579ms 750.764795ms 750.78838ms 750.839882ms 750.854015ms 750.859905ms 750.882332ms 750.913296ms 750.925793ms 750.959817ms 750.9601ms 750.969555ms 750.977992ms 750.980566ms 750.98406ms 751.050352ms 751.068359ms 751.096443ms 751.220379ms 751.592957ms 751.599361ms 751.601949ms 751.752337ms 751.821399ms 751.995503ms 752.060708ms 752.089788ms 752.101251ms 
752.326105ms 752.335667ms 752.546017ms 752.571453ms 752.650537ms 752.67952ms 752.712061ms 752.764801ms 752.84845ms 753.059277ms 753.157608ms 753.340659ms 753.405715ms 753.535962ms 753.748034ms 753.908413ms 753.986641ms 754.089529ms 754.136496ms 754.233862ms 754.237614ms 754.379695ms 754.901714ms 755.137239ms 755.1789ms 755.226162ms 755.493137ms 755.807682ms 755.851798ms 756.028726ms 756.127171ms 756.520861ms 757.150281ms 758.850192ms 759.176211ms 759.301045ms 762.479278ms 784.38087ms] +Sep 7 07:40:57.458: INFO: 50 %ile: 748.828075ms +Sep 7 07:40:57.458: INFO: 90 %ile: 754.136496ms +Sep 7 07:40:57.458: INFO: 99 %ile: 762.479278ms +Sep 7 07:40:57.458: INFO: Total sample count: 200 +[AfterEach] [sig-network] Service endpoints latency + test/e2e/framework/framework.go:188 +Sep 7 07:40:57.458: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "svc-latency-2115" for this suite. + +• [SLOW TEST:11.752 seconds] +[sig-network] Service endpoints latency +test/e2e/network/common/framework.go:23 + should not be very high [Conformance] + test/e2e/framework/framework.go:652 +------------------------------ +{"msg":"PASSED [sig-network] Service endpoints latency should not be very high [Conformance]","total":356,"completed":5,"skipped":78,"failed":0} +SSSSSSSSS +------------------------------ +[sig-storage] Downward API volume + should provide container's cpu request [NodeConformance] [Conformance] + test/e2e/framework/framework.go:652 +[BeforeEach] [sig-storage] Downward API volume + test/e2e/framework/framework.go:187 +STEP: Creating a kubernetes client +Sep 7 07:40:57.470: INFO: >>> kubeConfig: /tmp/kubeconfig-1156948534 +STEP: Building a namespace api object, basename downward-api +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[BeforeEach] [sig-storage] Downward API volume + test/e2e/common/storage/downwardapi_volume.go:43 +[It] should provide 
container's cpu request [NodeConformance] [Conformance] + test/e2e/framework/framework.go:652 +STEP: Creating a pod to test downward API volume plugin +Sep 7 07:40:57.506: INFO: Waiting up to 5m0s for pod "downwardapi-volume-0d5b3b25-51de-4f26-8ecc-c4afb8bd2d84" in namespace "downward-api-1669" to be "Succeeded or Failed" +Sep 7 07:40:57.513: INFO: Pod "downwardapi-volume-0d5b3b25-51de-4f26-8ecc-c4afb8bd2d84": Phase="Pending", Reason="", readiness=false. Elapsed: 6.658795ms +Sep 7 07:40:59.521: INFO: Pod "downwardapi-volume-0d5b3b25-51de-4f26-8ecc-c4afb8bd2d84": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014213724s +Sep 7 07:41:01.525: INFO: Pod "downwardapi-volume-0d5b3b25-51de-4f26-8ecc-c4afb8bd2d84": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.018844825s +STEP: Saw pod success +Sep 7 07:41:01.525: INFO: Pod "downwardapi-volume-0d5b3b25-51de-4f26-8ecc-c4afb8bd2d84" satisfied condition "Succeeded or Failed" +Sep 7 07:41:01.527: INFO: Trying to get logs from node 172.31.51.96 pod downwardapi-volume-0d5b3b25-51de-4f26-8ecc-c4afb8bd2d84 container client-container: +STEP: delete the pod +Sep 7 07:41:01.548: INFO: Waiting for pod downwardapi-volume-0d5b3b25-51de-4f26-8ecc-c4afb8bd2d84 to disappear +Sep 7 07:41:01.549: INFO: Pod downwardapi-volume-0d5b3b25-51de-4f26-8ecc-c4afb8bd2d84 no longer exists +[AfterEach] [sig-storage] Downward API volume + test/e2e/framework/framework.go:188 +Sep 7 07:41:01.549: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "downward-api-1669" for this suite. 
+•{"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu request [NodeConformance] [Conformance]","total":356,"completed":6,"skipped":87,"failed":0} +SSSSSSSSSS +------------------------------ +[sig-network] DNS + should provide DNS for services [Conformance] + test/e2e/framework/framework.go:652 +[BeforeEach] [sig-network] DNS + test/e2e/framework/framework.go:187 +STEP: Creating a kubernetes client +Sep 7 07:41:01.554: INFO: >>> kubeConfig: /tmp/kubeconfig-1156948534 +STEP: Building a namespace api object, basename dns +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[It] should provide DNS for services [Conformance] + test/e2e/framework/framework.go:652 +STEP: Creating a test headless service +STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-5541.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-5541.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-5541.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-5541.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-5541.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-5541.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-5541.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-5541.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-5541.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-5541.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-5541.svc.cluster.local 
SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-5541.svc.cluster.local;check="$$(dig +notcp +noall +answer +search 44.34.68.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.68.34.44_udp@PTR;check="$$(dig +tcp +noall +answer +search 44.34.68.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.68.34.44_tcp@PTR;sleep 1; done + +STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-5541.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-5541.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-5541.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-5541.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-5541.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-5541.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-5541.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-5541.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-5541.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-5541.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-5541.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-5541.svc.cluster.local;check="$$(dig +notcp +noall +answer +search 44.34.68.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.68.34.44_udp@PTR;check="$$(dig +tcp +noall +answer +search 44.34.68.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.68.34.44_tcp@PTR;sleep 1; done + +STEP: creating a pod to probe DNS +STEP: submitting the pod to kubernetes +STEP: retrieving the pod +STEP: looking for the results for each expected name from probers +Sep 7 07:41:23.643: INFO: Unable to read wheezy_udp@dns-test-service.dns-5541.svc.cluster.local from pod dns-5541/dns-test-974dc083-9918-430c-80f9-68452112fb39: the server could not find the requested resource (get pods dns-test-974dc083-9918-430c-80f9-68452112fb39) +Sep 7 07:41:23.655: INFO: Unable to read wheezy_tcp@dns-test-service.dns-5541.svc.cluster.local from pod dns-5541/dns-test-974dc083-9918-430c-80f9-68452112fb39: the server could not find the requested resource (get pods dns-test-974dc083-9918-430c-80f9-68452112fb39) +Sep 7 07:41:23.668: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-5541.svc.cluster.local from pod dns-5541/dns-test-974dc083-9918-430c-80f9-68452112fb39: the server could not find the requested resource (get pods dns-test-974dc083-9918-430c-80f9-68452112fb39) +Sep 7 07:41:23.674: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-5541.svc.cluster.local from pod dns-5541/dns-test-974dc083-9918-430c-80f9-68452112fb39: the server could not find the requested resource (get pods dns-test-974dc083-9918-430c-80f9-68452112fb39) +Sep 7 07:41:23.685: INFO: Unable to read jessie_udp@dns-test-service.dns-5541.svc.cluster.local from pod dns-5541/dns-test-974dc083-9918-430c-80f9-68452112fb39: the server could not find the requested resource (get pods dns-test-974dc083-9918-430c-80f9-68452112fb39) +Sep 7 07:41:23.687: INFO: Unable to read jessie_tcp@dns-test-service.dns-5541.svc.cluster.local from pod dns-5541/dns-test-974dc083-9918-430c-80f9-68452112fb39: the server could not find the requested resource (get pods dns-test-974dc083-9918-430c-80f9-68452112fb39) +Sep 7 07:41:23.689: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-5541.svc.cluster.local from pod 
dns-5541/dns-test-974dc083-9918-430c-80f9-68452112fb39: the server could not find the requested resource (get pods dns-test-974dc083-9918-430c-80f9-68452112fb39) +Sep 7 07:41:23.692: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-5541.svc.cluster.local from pod dns-5541/dns-test-974dc083-9918-430c-80f9-68452112fb39: the server could not find the requested resource (get pods dns-test-974dc083-9918-430c-80f9-68452112fb39) +Sep 7 07:41:23.701: INFO: Lookups using dns-5541/dns-test-974dc083-9918-430c-80f9-68452112fb39 failed for: [wheezy_udp@dns-test-service.dns-5541.svc.cluster.local wheezy_tcp@dns-test-service.dns-5541.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-5541.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-5541.svc.cluster.local jessie_udp@dns-test-service.dns-5541.svc.cluster.local jessie_tcp@dns-test-service.dns-5541.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-5541.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-5541.svc.cluster.local] + +Sep 7 07:41:28.707: INFO: Unable to read wheezy_udp@dns-test-service.dns-5541.svc.cluster.local from pod dns-5541/dns-test-974dc083-9918-430c-80f9-68452112fb39: the server could not find the requested resource (get pods dns-test-974dc083-9918-430c-80f9-68452112fb39) +Sep 7 07:41:28.710: INFO: Unable to read wheezy_tcp@dns-test-service.dns-5541.svc.cluster.local from pod dns-5541/dns-test-974dc083-9918-430c-80f9-68452112fb39: the server could not find the requested resource (get pods dns-test-974dc083-9918-430c-80f9-68452112fb39) +Sep 7 07:41:28.713: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-5541.svc.cluster.local from pod dns-5541/dns-test-974dc083-9918-430c-80f9-68452112fb39: the server could not find the requested resource (get pods dns-test-974dc083-9918-430c-80f9-68452112fb39) +Sep 7 07:41:28.715: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-5541.svc.cluster.local from pod 
dns-5541/dns-test-974dc083-9918-430c-80f9-68452112fb39: the server could not find the requested resource (get pods dns-test-974dc083-9918-430c-80f9-68452112fb39) +Sep 7 07:41:28.726: INFO: Unable to read jessie_udp@dns-test-service.dns-5541.svc.cluster.local from pod dns-5541/dns-test-974dc083-9918-430c-80f9-68452112fb39: the server could not find the requested resource (get pods dns-test-974dc083-9918-430c-80f9-68452112fb39) +Sep 7 07:41:28.728: INFO: Unable to read jessie_tcp@dns-test-service.dns-5541.svc.cluster.local from pod dns-5541/dns-test-974dc083-9918-430c-80f9-68452112fb39: the server could not find the requested resource (get pods dns-test-974dc083-9918-430c-80f9-68452112fb39) +Sep 7 07:41:28.731: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-5541.svc.cluster.local from pod dns-5541/dns-test-974dc083-9918-430c-80f9-68452112fb39: the server could not find the requested resource (get pods dns-test-974dc083-9918-430c-80f9-68452112fb39) +Sep 7 07:41:28.734: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-5541.svc.cluster.local from pod dns-5541/dns-test-974dc083-9918-430c-80f9-68452112fb39: the server could not find the requested resource (get pods dns-test-974dc083-9918-430c-80f9-68452112fb39) +Sep 7 07:41:28.742: INFO: Lookups using dns-5541/dns-test-974dc083-9918-430c-80f9-68452112fb39 failed for: [wheezy_udp@dns-test-service.dns-5541.svc.cluster.local wheezy_tcp@dns-test-service.dns-5541.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-5541.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-5541.svc.cluster.local jessie_udp@dns-test-service.dns-5541.svc.cluster.local jessie_tcp@dns-test-service.dns-5541.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-5541.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-5541.svc.cluster.local] + +Sep 7 07:41:33.710: INFO: Unable to read wheezy_udp@dns-test-service.dns-5541.svc.cluster.local from pod 
dns-5541/dns-test-974dc083-9918-430c-80f9-68452112fb39: the server could not find the requested resource (get pods dns-test-974dc083-9918-430c-80f9-68452112fb39) +Sep 7 07:41:33.713: INFO: Unable to read wheezy_tcp@dns-test-service.dns-5541.svc.cluster.local from pod dns-5541/dns-test-974dc083-9918-430c-80f9-68452112fb39: the server could not find the requested resource (get pods dns-test-974dc083-9918-430c-80f9-68452112fb39) +Sep 7 07:41:33.716: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-5541.svc.cluster.local from pod dns-5541/dns-test-974dc083-9918-430c-80f9-68452112fb39: the server could not find the requested resource (get pods dns-test-974dc083-9918-430c-80f9-68452112fb39) +Sep 7 07:41:33.719: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-5541.svc.cluster.local from pod dns-5541/dns-test-974dc083-9918-430c-80f9-68452112fb39: the server could not find the requested resource (get pods dns-test-974dc083-9918-430c-80f9-68452112fb39) +Sep 7 07:41:33.730: INFO: Unable to read jessie_udp@dns-test-service.dns-5541.svc.cluster.local from pod dns-5541/dns-test-974dc083-9918-430c-80f9-68452112fb39: the server could not find the requested resource (get pods dns-test-974dc083-9918-430c-80f9-68452112fb39) +Sep 7 07:41:33.733: INFO: Unable to read jessie_tcp@dns-test-service.dns-5541.svc.cluster.local from pod dns-5541/dns-test-974dc083-9918-430c-80f9-68452112fb39: the server could not find the requested resource (get pods dns-test-974dc083-9918-430c-80f9-68452112fb39) +Sep 7 07:41:33.736: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-5541.svc.cluster.local from pod dns-5541/dns-test-974dc083-9918-430c-80f9-68452112fb39: the server could not find the requested resource (get pods dns-test-974dc083-9918-430c-80f9-68452112fb39) +Sep 7 07:41:33.739: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-5541.svc.cluster.local from pod dns-5541/dns-test-974dc083-9918-430c-80f9-68452112fb39: the server could not 
find the requested resource (get pods dns-test-974dc083-9918-430c-80f9-68452112fb39) +Sep 7 07:41:33.750: INFO: Lookups using dns-5541/dns-test-974dc083-9918-430c-80f9-68452112fb39 failed for: [wheezy_udp@dns-test-service.dns-5541.svc.cluster.local wheezy_tcp@dns-test-service.dns-5541.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-5541.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-5541.svc.cluster.local jessie_udp@dns-test-service.dns-5541.svc.cluster.local jessie_tcp@dns-test-service.dns-5541.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-5541.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-5541.svc.cluster.local] + +Sep 7 07:41:38.706: INFO: Unable to read wheezy_udp@dns-test-service.dns-5541.svc.cluster.local from pod dns-5541/dns-test-974dc083-9918-430c-80f9-68452112fb39: the server could not find the requested resource (get pods dns-test-974dc083-9918-430c-80f9-68452112fb39) +Sep 7 07:41:38.710: INFO: Unable to read wheezy_tcp@dns-test-service.dns-5541.svc.cluster.local from pod dns-5541/dns-test-974dc083-9918-430c-80f9-68452112fb39: the server could not find the requested resource (get pods dns-test-974dc083-9918-430c-80f9-68452112fb39) +Sep 7 07:41:38.726: INFO: Unable to read jessie_udp@dns-test-service.dns-5541.svc.cluster.local from pod dns-5541/dns-test-974dc083-9918-430c-80f9-68452112fb39: the server could not find the requested resource (get pods dns-test-974dc083-9918-430c-80f9-68452112fb39) +Sep 7 07:41:38.729: INFO: Unable to read jessie_tcp@dns-test-service.dns-5541.svc.cluster.local from pod dns-5541/dns-test-974dc083-9918-430c-80f9-68452112fb39: the server could not find the requested resource (get pods dns-test-974dc083-9918-430c-80f9-68452112fb39) +Sep 7 07:41:38.744: INFO: Lookups using dns-5541/dns-test-974dc083-9918-430c-80f9-68452112fb39 failed for: [wheezy_udp@dns-test-service.dns-5541.svc.cluster.local wheezy_tcp@dns-test-service.dns-5541.svc.cluster.local 
jessie_udp@dns-test-service.dns-5541.svc.cluster.local jessie_tcp@dns-test-service.dns-5541.svc.cluster.local] + +Sep 7 07:41:43.744: INFO: DNS probes using dns-5541/dns-test-974dc083-9918-430c-80f9-68452112fb39 succeeded + +STEP: deleting the pod +STEP: deleting the test service +STEP: deleting the test headless service +[AfterEach] [sig-network] DNS + test/e2e/framework/framework.go:188 +Sep 7 07:41:43.908: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "dns-5541" for this suite. + +• [SLOW TEST:42.363 seconds] +[sig-network] DNS +test/e2e/network/common/framework.go:23 + should provide DNS for services [Conformance] + test/e2e/framework/framework.go:652 +------------------------------ +{"msg":"PASSED [sig-network] DNS should provide DNS for services [Conformance]","total":356,"completed":7,"skipped":97,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-node] InitContainer [NodeConformance] + should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] + test/e2e/framework/framework.go:652 +[BeforeEach] [sig-node] InitContainer [NodeConformance] + test/e2e/framework/framework.go:187 +STEP: Creating a kubernetes client +Sep 7 07:41:43.917: INFO: >>> kubeConfig: /tmp/kubeconfig-1156948534 +STEP: Building a namespace api object, basename init-container +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[BeforeEach] [sig-node] InitContainer [NodeConformance] + test/e2e/common/node/init_container.go:164 +[It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] + test/e2e/framework/framework.go:652 +STEP: creating the pod +Sep 7 07:41:43.985: INFO: PodSpec: initContainers in spec.initContainers +[AfterEach] [sig-node] InitContainer [NodeConformance] + test/e2e/framework/framework.go:188 +Sep 7 07:41:51.683: 
INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "init-container-9478" for this suite. + +• [SLOW TEST:7.803 seconds] +[sig-node] InitContainer [NodeConformance] +test/e2e/common/node/framework.go:23 + should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] + test/e2e/framework/framework.go:652 +------------------------------ +{"msg":"PASSED [sig-node] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]","total":356,"completed":8,"skipped":121,"failed":0} +SSSSSSSSSSS +------------------------------ +[sig-scheduling] SchedulerPredicates [Serial] + validates that NodeSelector is respected if matching [Conformance] + test/e2e/framework/framework.go:652 +[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] + test/e2e/framework/framework.go:187 +STEP: Creating a kubernetes client +Sep 7 07:41:51.721: INFO: >>> kubeConfig: /tmp/kubeconfig-1156948534 +STEP: Building a namespace api object, basename sched-pred +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] + test/e2e/scheduling/predicates.go:92 +Sep 7 07:41:51.743: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready +Sep 7 07:41:51.755: INFO: Waiting for terminating namespaces to be deleted... 
+Sep 7 07:41:51.759: INFO: +Logging pods the apiserver thinks is on node 172.31.51.96 before test +Sep 7 07:41:51.767: INFO: pod-init-241fb30e-a2c1-4fb2-9f50-7dc2026343cb from init-container-9478 started at 2022-09-07 07:41:44 +0000 UTC (1 container statuses recorded) +Sep 7 07:41:51.767: INFO: Container run1 ready: false, restart count 0 +Sep 7 07:41:51.767: INFO: calico-node-g8tpr from kube-system started at 2022-09-07 07:27:16 +0000 UTC (1 container statuses recorded) +Sep 7 07:41:51.767: INFO: Container calico-node ready: true, restart count 0 +Sep 7 07:41:51.767: INFO: node-local-dns-8rwpt from kube-system started at 2022-09-07 07:27:42 +0000 UTC (1 container statuses recorded) +Sep 7 07:41:51.767: INFO: Container node-cache ready: true, restart count 0 +Sep 7 07:41:51.767: INFO: sonobuoy from sonobuoy started at 2022-09-07 07:39:19 +0000 UTC (1 container statuses recorded) +Sep 7 07:41:51.767: INFO: Container kube-sonobuoy ready: true, restart count 0 +Sep 7 07:41:51.767: INFO: sonobuoy-e2e-job-2f855b96e04a42ee from sonobuoy started at 2022-09-07 07:39:27 +0000 UTC (2 container statuses recorded) +Sep 7 07:41:51.767: INFO: Container e2e ready: true, restart count 0 +Sep 7 07:41:51.767: INFO: Container sonobuoy-worker ready: true, restart count 0 +Sep 7 07:41:51.767: INFO: sonobuoy-systemd-logs-daemon-set-1241b5e1ea9447a9-kstch from sonobuoy started at 2022-09-07 07:39:27 +0000 UTC (2 container statuses recorded) +Sep 7 07:41:51.767: INFO: Container sonobuoy-worker ready: true, restart count 0 +Sep 7 07:41:51.767: INFO: Container systemd-logs ready: true, restart count 0 +Sep 7 07:41:51.767: INFO: +Logging pods the apiserver thinks is on node 172.31.51.97 before test +Sep 7 07:41:51.773: INFO: calico-kube-controllers-5c8bb696bb-tvl2c from kube-system started at 2022-09-07 07:27:16 +0000 UTC (1 container statuses recorded) +Sep 7 07:41:51.773: INFO: Container calico-kube-controllers ready: true, restart count 0 +Sep 7 07:41:51.773: INFO: calico-node-d87kb from 
kube-system started at 2022-09-07 07:27:16 +0000 UTC (1 container statuses recorded) +Sep 7 07:41:51.773: INFO: Container calico-node ready: true, restart count 0 +Sep 7 07:41:51.773: INFO: coredns-84b58f6b4-xcj7z from kube-system started at 2022-09-07 07:27:41 +0000 UTC (1 container statuses recorded) +Sep 7 07:41:51.773: INFO: Container coredns ready: true, restart count 0 +Sep 7 07:41:51.773: INFO: dashboard-metrics-scraper-864d79d497-bchwd from kube-system started at 2022-09-07 07:27:46 +0000 UTC (1 container statuses recorded) +Sep 7 07:41:51.773: INFO: Container dashboard-metrics-scraper ready: true, restart count 0 +Sep 7 07:41:51.773: INFO: kubernetes-dashboard-5fc74cf5c6-bsp7p from kube-system started at 2022-09-07 07:27:46 +0000 UTC (1 container statuses recorded) +Sep 7 07:41:51.773: INFO: Container kubernetes-dashboard ready: true, restart count 0 +Sep 7 07:41:51.773: INFO: metrics-server-69797698d4-hndhm from kube-system started at 2022-09-07 07:27:43 +0000 UTC (1 container statuses recorded) +Sep 7 07:41:51.773: INFO: Container metrics-server ready: true, restart count 0 +Sep 7 07:41:51.773: INFO: node-local-dns-28994 from kube-system started at 2022-09-07 07:27:42 +0000 UTC (1 container statuses recorded) +Sep 7 07:41:51.773: INFO: Container node-cache ready: true, restart count 0 +Sep 7 07:41:51.773: INFO: sonobuoy-systemd-logs-daemon-set-1241b5e1ea9447a9-svvzn from sonobuoy started at 2022-09-07 07:39:27 +0000 UTC (2 container statuses recorded) +Sep 7 07:41:51.773: INFO: Container sonobuoy-worker ready: true, restart count 0 +Sep 7 07:41:51.773: INFO: Container systemd-logs ready: true, restart count 0 +[It] validates that NodeSelector is respected if matching [Conformance] + test/e2e/framework/framework.go:652 +STEP: Trying to launch a pod without a label to get a node which can launch it. +STEP: Explicitly delete pod here to free the resource it takes. +STEP: Trying to apply a random label on the found node. 
+STEP: verifying the node has the label kubernetes.io/e2e-0c4bd4ff-c903-40f0-bbcf-5b4786daef69 42 +STEP: Trying to relaunch the pod, now with labels. +STEP: removing the label kubernetes.io/e2e-0c4bd4ff-c903-40f0-bbcf-5b4786daef69 off the node 172.31.51.96 +STEP: verifying the node doesn't have the label kubernetes.io/e2e-0c4bd4ff-c903-40f0-bbcf-5b4786daef69 +[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] + test/e2e/framework/framework.go:188 +Sep 7 07:41:55.890: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "sched-pred-2321" for this suite. +[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] + test/e2e/scheduling/predicates.go:83 +•{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance]","total":356,"completed":9,"skipped":132,"failed":0} +SSSSS +------------------------------ +[sig-node] Variable Expansion + should allow substituting values in a container's command [NodeConformance] [Conformance] + test/e2e/framework/framework.go:652 +[BeforeEach] [sig-node] Variable Expansion + test/e2e/framework/framework.go:187 +STEP: Creating a kubernetes client +Sep 7 07:41:55.921: INFO: >>> kubeConfig: /tmp/kubeconfig-1156948534 +STEP: Building a namespace api object, basename var-expansion +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[It] should allow substituting values in a container's command [NodeConformance] [Conformance] + test/e2e/framework/framework.go:652 +STEP: Creating a pod to test substitution in container's command +Sep 7 07:41:56.167: INFO: Waiting up to 5m0s for pod "var-expansion-3e19dce1-6467-4001-960a-60975b423f05" in namespace "var-expansion-6987" to be "Succeeded or Failed" +Sep 7 07:41:56.211: INFO: Pod "var-expansion-3e19dce1-6467-4001-960a-60975b423f05": Phase="Pending", Reason="", readiness=false. 
Elapsed: 44.098735ms +Sep 7 07:41:58.220: INFO: Pod "var-expansion-3e19dce1-6467-4001-960a-60975b423f05": Phase="Pending", Reason="", readiness=false. Elapsed: 2.052210892s +Sep 7 07:42:00.226: INFO: Pod "var-expansion-3e19dce1-6467-4001-960a-60975b423f05": Phase="Pending", Reason="", readiness=false. Elapsed: 4.058826648s +Sep 7 07:42:02.245: INFO: Pod "var-expansion-3e19dce1-6467-4001-960a-60975b423f05": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.077172949s +STEP: Saw pod success +Sep 7 07:42:02.245: INFO: Pod "var-expansion-3e19dce1-6467-4001-960a-60975b423f05" satisfied condition "Succeeded or Failed" +Sep 7 07:42:02.259: INFO: Trying to get logs from node 172.31.51.96 pod var-expansion-3e19dce1-6467-4001-960a-60975b423f05 container dapi-container: +STEP: delete the pod +Sep 7 07:42:02.296: INFO: Waiting for pod var-expansion-3e19dce1-6467-4001-960a-60975b423f05 to disappear +Sep 7 07:42:02.306: INFO: Pod var-expansion-3e19dce1-6467-4001-960a-60975b423f05 no longer exists +[AfterEach] [sig-node] Variable Expansion + test/e2e/framework/framework.go:188 +Sep 7 07:42:02.306: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "var-expansion-6987" for this suite. 
+ +• [SLOW TEST:6.409 seconds] +[sig-node] Variable Expansion +test/e2e/common/node/framework.go:23 + should allow substituting values in a container's command [NodeConformance] [Conformance] + test/e2e/framework/framework.go:652 +------------------------------ +{"msg":"PASSED [sig-node] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance]","total":356,"completed":10,"skipped":137,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-storage] Projected downwardAPI + should provide container's cpu request [NodeConformance] [Conformance] + test/e2e/framework/framework.go:652 +[BeforeEach] [sig-storage] Projected downwardAPI + test/e2e/framework/framework.go:187 +STEP: Creating a kubernetes client +Sep 7 07:42:02.331: INFO: >>> kubeConfig: /tmp/kubeconfig-1156948534 +STEP: Building a namespace api object, basename projected +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[BeforeEach] [sig-storage] Projected downwardAPI + test/e2e/common/storage/projected_downwardapi.go:43 +[It] should provide container's cpu request [NodeConformance] [Conformance] + test/e2e/framework/framework.go:652 +STEP: Creating a pod to test downward API volume plugin +Sep 7 07:42:02.382: INFO: Waiting up to 5m0s for pod "downwardapi-volume-48e3fedc-5ebc-4b27-98bd-3fe30948661a" in namespace "projected-2970" to be "Succeeded or Failed" +Sep 7 07:42:02.389: INFO: Pod "downwardapi-volume-48e3fedc-5ebc-4b27-98bd-3fe30948661a": Phase="Pending", Reason="", readiness=false. Elapsed: 6.846854ms +Sep 7 07:42:04.400: INFO: Pod "downwardapi-volume-48e3fedc-5ebc-4b27-98bd-3fe30948661a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017834484s +Sep 7 07:42:06.408: INFO: Pod "downwardapi-volume-48e3fedc-5ebc-4b27-98bd-3fe30948661a": Phase="Pending", Reason="", readiness=false. 
Elapsed: 4.025422421s +Sep 7 07:42:08.418: INFO: Pod "downwardapi-volume-48e3fedc-5ebc-4b27-98bd-3fe30948661a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.035431664s +STEP: Saw pod success +Sep 7 07:42:08.418: INFO: Pod "downwardapi-volume-48e3fedc-5ebc-4b27-98bd-3fe30948661a" satisfied condition "Succeeded or Failed" +Sep 7 07:42:08.421: INFO: Trying to get logs from node 172.31.51.96 pod downwardapi-volume-48e3fedc-5ebc-4b27-98bd-3fe30948661a container client-container: +STEP: delete the pod +Sep 7 07:42:08.439: INFO: Waiting for pod downwardapi-volume-48e3fedc-5ebc-4b27-98bd-3fe30948661a to disappear +Sep 7 07:42:08.440: INFO: Pod downwardapi-volume-48e3fedc-5ebc-4b27-98bd-3fe30948661a no longer exists +[AfterEach] [sig-storage] Projected downwardAPI + test/e2e/framework/framework.go:188 +Sep 7 07:42:08.440: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "projected-2970" for this suite. + +• [SLOW TEST:6.117 seconds] +[sig-storage] Projected downwardAPI +test/e2e/common/storage/framework.go:23 + should provide container's cpu request [NodeConformance] [Conformance] + test/e2e/framework/framework.go:652 +------------------------------ +{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu request [NodeConformance] [Conformance]","total":356,"completed":11,"skipped":168,"failed":0} +SSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-apps] DisruptionController + should block an eviction until the PDB is updated to allow it [Conformance] + test/e2e/framework/framework.go:652 +[BeforeEach] [sig-apps] DisruptionController + test/e2e/framework/framework.go:187 +STEP: Creating a kubernetes client +Sep 7 07:42:08.448: INFO: >>> kubeConfig: /tmp/kubeconfig-1156948534 +STEP: Building a namespace api object, basename disruption +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[BeforeEach] 
[sig-apps] DisruptionController + test/e2e/apps/disruption.go:71 +[It] should block an eviction until the PDB is updated to allow it [Conformance] + test/e2e/framework/framework.go:652 +STEP: Creating a pdb that targets all three pods in a test replica set +STEP: Waiting for the pdb to be processed +STEP: First trying to evict a pod which shouldn't be evictable +STEP: Waiting for all pods to be running +Sep 7 07:42:10.579: INFO: pods: 0 < 3 +Sep 7 07:42:12.587: INFO: running pods: 0 < 3 +STEP: locating a running pod +STEP: Updating the pdb to allow a pod to be evicted +STEP: Waiting for the pdb to be processed +STEP: Trying to evict the same pod we tried earlier which should now be evictable +STEP: Waiting for all pods to be running +STEP: Waiting for the pdb to observed all healthy pods +STEP: Patching the pdb to disallow a pod to be evicted +STEP: Waiting for the pdb to be processed +STEP: Waiting for all pods to be running +Sep 7 07:42:18.763: INFO: running pods: 2 < 3 +STEP: locating a running pod +STEP: Deleting the pdb to allow a pod to be evicted +STEP: Waiting for the pdb to be deleted +STEP: Trying to evict the same pod we tried earlier which should now be evictable +STEP: Waiting for all pods to be running +[AfterEach] [sig-apps] DisruptionController + test/e2e/framework/framework.go:188 +Sep 7 07:42:20.835: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "disruption-5762" for this suite. 
+ +• [SLOW TEST:12.403 seconds] +[sig-apps] DisruptionController +test/e2e/apps/framework.go:23 + should block an eviction until the PDB is updated to allow it [Conformance] + test/e2e/framework/framework.go:652 +------------------------------ +{"msg":"PASSED [sig-apps] DisruptionController should block an eviction until the PDB is updated to allow it [Conformance]","total":356,"completed":12,"skipped":190,"failed":0} +SSSS +------------------------------ +[sig-apps] Job + should adopt matching orphans and release non-matching pods [Conformance] + test/e2e/framework/framework.go:652 +[BeforeEach] [sig-apps] Job + test/e2e/framework/framework.go:187 +STEP: Creating a kubernetes client +Sep 7 07:42:20.851: INFO: >>> kubeConfig: /tmp/kubeconfig-1156948534 +STEP: Building a namespace api object, basename job +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[It] should adopt matching orphans and release non-matching pods [Conformance] + test/e2e/framework/framework.go:652 +STEP: Creating a job +STEP: Ensuring active pods == parallelism +STEP: Orphaning one of the Job's Pods +Sep 7 07:42:25.494: INFO: Successfully updated pod "adopt-release-2lb9w" +STEP: Checking that the Job readopts the Pod +Sep 7 07:42:25.494: INFO: Waiting up to 15m0s for pod "adopt-release-2lb9w" in namespace "job-3133" to be "adopted" +Sep 7 07:42:25.535: INFO: Pod "adopt-release-2lb9w": Phase="Running", Reason="", readiness=true. Elapsed: 40.25649ms +Sep 7 07:42:27.573: INFO: Pod "adopt-release-2lb9w": Phase="Running", Reason="", readiness=true. 
Elapsed: 2.078987125s +Sep 7 07:42:27.573: INFO: Pod "adopt-release-2lb9w" satisfied condition "adopted" +STEP: Removing the labels from the Job's Pod +Sep 7 07:42:28.091: INFO: Successfully updated pod "adopt-release-2lb9w" +STEP: Checking that the Job releases the Pod +Sep 7 07:42:28.091: INFO: Waiting up to 15m0s for pod "adopt-release-2lb9w" in namespace "job-3133" to be "released" +Sep 7 07:42:28.109: INFO: Pod "adopt-release-2lb9w": Phase="Running", Reason="", readiness=true. Elapsed: 18.554758ms +Sep 7 07:42:30.130: INFO: Pod "adopt-release-2lb9w": Phase="Running", Reason="", readiness=true. Elapsed: 2.039811765s +Sep 7 07:42:30.130: INFO: Pod "adopt-release-2lb9w" satisfied condition "released" +[AfterEach] [sig-apps] Job + test/e2e/framework/framework.go:188 +Sep 7 07:42:30.131: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "job-3133" for this suite. + +• [SLOW TEST:9.294 seconds] +[sig-apps] Job +test/e2e/apps/framework.go:23 + should adopt matching orphans and release non-matching pods [Conformance] + test/e2e/framework/framework.go:652 +------------------------------ +{"msg":"PASSED [sig-apps] Job should adopt matching orphans and release non-matching pods [Conformance]","total":356,"completed":13,"skipped":194,"failed":0} +SSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] + Should recreate evicted statefulset [Conformance] + test/e2e/framework/framework.go:652 +[BeforeEach] [sig-apps] StatefulSet + test/e2e/framework/framework.go:187 +STEP: Creating a kubernetes client +Sep 7 07:42:30.146: INFO: >>> kubeConfig: /tmp/kubeconfig-1156948534 +STEP: Building a namespace api object, basename statefulset +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[BeforeEach] [sig-apps] StatefulSet + test/e2e/apps/statefulset.go:96 +[BeforeEach] Basic 
StatefulSet functionality [StatefulSetBasic] + test/e2e/apps/statefulset.go:111 +STEP: Creating service test in namespace statefulset-7518 +[It] Should recreate evicted statefulset [Conformance] + test/e2e/framework/framework.go:652 +STEP: Looking for a node to schedule stateful set and pod +STEP: Creating pod with conflicting port in namespace statefulset-7518 +STEP: Waiting until pod test-pod will start running in namespace statefulset-7518 +STEP: Creating statefulset with conflicting port in namespace statefulset-7518 +STEP: Waiting until stateful pod ss-0 will be recreated and deleted at least once in namespace statefulset-7518 +Sep 7 07:42:32.263: INFO: Observed stateful pod in namespace: statefulset-7518, name: ss-0, uid: 97d97e7e-35cf-4c92-85eb-1d44540e7bfd, status phase: Pending. Waiting for statefulset controller to delete. +Sep 7 07:42:32.289: INFO: Observed stateful pod in namespace: statefulset-7518, name: ss-0, uid: 97d97e7e-35cf-4c92-85eb-1d44540e7bfd, status phase: Failed. Waiting for statefulset controller to delete. +Sep 7 07:42:32.334: INFO: Observed stateful pod in namespace: statefulset-7518, name: ss-0, uid: 97d97e7e-35cf-4c92-85eb-1d44540e7bfd, status phase: Failed. Waiting for statefulset controller to delete. 
+Sep 7 07:42:32.342: INFO: Observed delete event for stateful pod ss-0 in namespace statefulset-7518 +STEP: Removing pod with conflicting port in namespace statefulset-7518 +STEP: Waiting when stateful pod ss-0 will be recreated in namespace statefulset-7518 and will be in running state +[AfterEach] Basic StatefulSet functionality [StatefulSetBasic] + test/e2e/apps/statefulset.go:122 +Sep 7 07:42:34.428: INFO: Deleting all statefulset in ns statefulset-7518 +Sep 7 07:42:34.433: INFO: Scaling statefulset ss to 0 +Sep 7 07:42:44.478: INFO: Waiting for statefulset status.replicas updated to 0 +Sep 7 07:42:44.482: INFO: Deleting statefulset ss +[AfterEach] [sig-apps] StatefulSet + test/e2e/framework/framework.go:188 +Sep 7 07:42:44.516: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "statefulset-7518" for this suite. + +• [SLOW TEST:14.384 seconds] +[sig-apps] StatefulSet +test/e2e/apps/framework.go:23 + Basic StatefulSet functionality [StatefulSetBasic] + test/e2e/apps/statefulset.go:101 + Should recreate evicted statefulset [Conformance] + test/e2e/framework/framework.go:652 +------------------------------ +{"msg":"PASSED [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]","total":356,"completed":14,"skipped":214,"failed":0} +SSSSSSSSSSSSSSSSSS +------------------------------ +[sig-storage] Projected downwardAPI + should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/framework/framework.go:652 +[BeforeEach] [sig-storage] Projected downwardAPI + test/e2e/framework/framework.go:187 +STEP: Creating a kubernetes client +Sep 7 07:42:44.529: INFO: >>> kubeConfig: /tmp/kubeconfig-1156948534 +STEP: Building a namespace api object, basename projected +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[BeforeEach] [sig-storage] Projected 
downwardAPI + test/e2e/common/storage/projected_downwardapi.go:43 +[It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/framework/framework.go:652 +STEP: Creating a pod to test downward API volume plugin +Sep 7 07:42:44.582: INFO: Waiting up to 5m0s for pod "downwardapi-volume-b350d6f5-e7f7-4985-aaaf-bfc131fb3e18" in namespace "projected-2145" to be "Succeeded or Failed" +Sep 7 07:42:44.598: INFO: Pod "downwardapi-volume-b350d6f5-e7f7-4985-aaaf-bfc131fb3e18": Phase="Pending", Reason="", readiness=false. Elapsed: 16.298814ms +Sep 7 07:42:46.604: INFO: Pod "downwardapi-volume-b350d6f5-e7f7-4985-aaaf-bfc131fb3e18": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021989451s +Sep 7 07:42:48.616: INFO: Pod "downwardapi-volume-b350d6f5-e7f7-4985-aaaf-bfc131fb3e18": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.034411277s +STEP: Saw pod success +Sep 7 07:42:48.616: INFO: Pod "downwardapi-volume-b350d6f5-e7f7-4985-aaaf-bfc131fb3e18" satisfied condition "Succeeded or Failed" +Sep 7 07:42:48.619: INFO: Trying to get logs from node 172.31.51.96 pod downwardapi-volume-b350d6f5-e7f7-4985-aaaf-bfc131fb3e18 container client-container: +STEP: delete the pod +Sep 7 07:42:48.641: INFO: Waiting for pod downwardapi-volume-b350d6f5-e7f7-4985-aaaf-bfc131fb3e18 to disappear +Sep 7 07:42:48.643: INFO: Pod downwardapi-volume-b350d6f5-e7f7-4985-aaaf-bfc131fb3e18 no longer exists +[AfterEach] [sig-storage] Projected downwardAPI + test/e2e/framework/framework.go:188 +Sep 7 07:42:48.643: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "projected-2145" for this suite. 
+•{"msg":"PASSED [sig-storage] Projected downwardAPI should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":356,"completed":15,"skipped":232,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-api-machinery] ResourceQuota + should create a ResourceQuota and capture the life of a configMap. [Conformance] + test/e2e/framework/framework.go:652 +[BeforeEach] [sig-api-machinery] ResourceQuota + test/e2e/framework/framework.go:187 +STEP: Creating a kubernetes client +Sep 7 07:42:48.650: INFO: >>> kubeConfig: /tmp/kubeconfig-1156948534 +STEP: Building a namespace api object, basename resourcequota +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[It] should create a ResourceQuota and capture the life of a configMap. [Conformance] + test/e2e/framework/framework.go:652 +STEP: Counting existing ResourceQuota +STEP: Creating a ResourceQuota +STEP: Ensuring resource quota status is calculated +STEP: Creating a ConfigMap +STEP: Ensuring resource quota status captures configMap creation +STEP: Deleting a ConfigMap +STEP: Ensuring resource quota status released usage +[AfterEach] [sig-api-machinery] ResourceQuota + test/e2e/framework/framework.go:188 +Sep 7 07:43:16.769: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "resourcequota-6662" for this suite. + +• [SLOW TEST:28.130 seconds] +[sig-api-machinery] ResourceQuota +test/e2e/apimachinery/framework.go:23 + should create a ResourceQuota and capture the life of a configMap. [Conformance] + test/e2e/framework/framework.go:652 +------------------------------ +{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a configMap. 
[Conformance]","total":356,"completed":16,"skipped":257,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + patching/updating a mutating webhook should work [Conformance] + test/e2e/framework/framework.go:652 +[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/framework/framework.go:187 +STEP: Creating a kubernetes client +Sep 7 07:43:16.781: INFO: >>> kubeConfig: /tmp/kubeconfig-1156948534 +STEP: Building a namespace api object, basename webhook +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/apimachinery/webhook.go:89 +STEP: Setting up server cert +STEP: Create role binding to let webhook read extension-apiserver-authentication +STEP: Deploying the webhook pod +STEP: Wait for the deployment to be ready +Sep 7 07:43:17.915: INFO: new replicaset for deployment "sample-webhook-deployment" is yet to be created +STEP: Deploying the webhook service +STEP: Verifying the service has paired with the endpoint +Sep 7 07:43:20.961: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 +[It] patching/updating a mutating webhook should work [Conformance] + test/e2e/framework/framework.go:652 +STEP: Creating a mutating webhook configuration +STEP: Updating a mutating webhook configuration's rules to not include the create operation +STEP: Creating a configMap that should not be mutated +STEP: Patching a mutating webhook configuration's rules to include the create operation +STEP: Creating a configMap that should be mutated +[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/framework/framework.go:188 +Sep 7 07:43:21.027: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace 
"webhook-5130" for this suite. +STEP: Destroying namespace "webhook-5130-markers" for this suite. +[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/apimachinery/webhook.go:104 +•{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]","total":356,"completed":17,"skipped":292,"failed":0} +SSSSSS +------------------------------ +[sig-storage] Downward API volume + should update labels on modification [NodeConformance] [Conformance] + test/e2e/framework/framework.go:652 +[BeforeEach] [sig-storage] Downward API volume + test/e2e/framework/framework.go:187 +STEP: Creating a kubernetes client +Sep 7 07:43:21.124: INFO: >>> kubeConfig: /tmp/kubeconfig-1156948534 +STEP: Building a namespace api object, basename downward-api +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[BeforeEach] [sig-storage] Downward API volume + test/e2e/common/storage/downwardapi_volume.go:43 +[It] should update labels on modification [NodeConformance] [Conformance] + test/e2e/framework/framework.go:652 +STEP: Creating the pod +Sep 7 07:43:21.191: INFO: The status of Pod labelsupdate889497a9-d7ee-4632-b09f-30b4929da9dc is Pending, waiting for it to be Running (with Ready = true) +Sep 7 07:43:23.198: INFO: The status of Pod labelsupdate889497a9-d7ee-4632-b09f-30b4929da9dc is Running (Ready = true) +Sep 7 07:43:23.727: INFO: Successfully updated pod "labelsupdate889497a9-d7ee-4632-b09f-30b4929da9dc" +[AfterEach] [sig-storage] Downward API volume + test/e2e/framework/framework.go:188 +Sep 7 07:43:25.767: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "downward-api-8273" for this suite. 
+•{"msg":"PASSED [sig-storage] Downward API volume should update labels on modification [NodeConformance] [Conformance]","total":356,"completed":18,"skipped":298,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-node] Variable Expansion + should allow substituting values in a volume subpath [Conformance] + test/e2e/framework/framework.go:652 +[BeforeEach] [sig-node] Variable Expansion + test/e2e/framework/framework.go:187 +STEP: Creating a kubernetes client +Sep 7 07:43:25.788: INFO: >>> kubeConfig: /tmp/kubeconfig-1156948534 +STEP: Building a namespace api object, basename var-expansion +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[It] should allow substituting values in a volume subpath [Conformance] + test/e2e/framework/framework.go:652 +STEP: Creating a pod to test substitution in volume subpath +Sep 7 07:43:25.838: INFO: Waiting up to 5m0s for pod "var-expansion-c48262a8-565c-4c9f-acf1-82ce37f4ebbf" in namespace "var-expansion-9639" to be "Succeeded or Failed" +Sep 7 07:43:25.849: INFO: Pod "var-expansion-c48262a8-565c-4c9f-acf1-82ce37f4ebbf": Phase="Pending", Reason="", readiness=false. Elapsed: 10.993482ms +Sep 7 07:43:27.855: INFO: Pod "var-expansion-c48262a8-565c-4c9f-acf1-82ce37f4ebbf": Phase="Pending", Reason="", readiness=false. Elapsed: 2.016851908s +Sep 7 07:43:29.883: INFO: Pod "var-expansion-c48262a8-565c-4c9f-acf1-82ce37f4ebbf": Phase="Pending", Reason="", readiness=false. Elapsed: 4.0450466s +Sep 7 07:43:31.897: INFO: Pod "var-expansion-c48262a8-565c-4c9f-acf1-82ce37f4ebbf": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.058706472s +STEP: Saw pod success +Sep 7 07:43:31.897: INFO: Pod "var-expansion-c48262a8-565c-4c9f-acf1-82ce37f4ebbf" satisfied condition "Succeeded or Failed" +Sep 7 07:43:31.904: INFO: Trying to get logs from node 172.31.51.96 pod var-expansion-c48262a8-565c-4c9f-acf1-82ce37f4ebbf container dapi-container: +STEP: delete the pod +Sep 7 07:43:31.938: INFO: Waiting for pod var-expansion-c48262a8-565c-4c9f-acf1-82ce37f4ebbf to disappear +Sep 7 07:43:31.944: INFO: Pod var-expansion-c48262a8-565c-4c9f-acf1-82ce37f4ebbf no longer exists +[AfterEach] [sig-node] Variable Expansion + test/e2e/framework/framework.go:188 +Sep 7 07:43:31.944: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "var-expansion-9639" for this suite. + +• [SLOW TEST:6.167 seconds] +[sig-node] Variable Expansion +test/e2e/common/node/framework.go:23 + should allow substituting values in a volume subpath [Conformance] + test/e2e/framework/framework.go:652 +------------------------------ +{"msg":"PASSED [sig-node] Variable Expansion should allow substituting values in a volume subpath [Conformance]","total":356,"completed":19,"skipped":325,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-storage] Secrets + should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] + test/e2e/framework/framework.go:652 +[BeforeEach] [sig-storage] Secrets + test/e2e/framework/framework.go:187 +STEP: Creating a kubernetes client +Sep 7 07:43:31.956: INFO: >>> kubeConfig: /tmp/kubeconfig-1156948534 +STEP: Building a namespace api object, basename secrets +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] + test/e2e/framework/framework.go:652 +STEP: Creating secret with name 
secret-test-53259a48-a4e9-4518-9222-dcfba7452aae +STEP: Creating a pod to test consume secrets +Sep 7 07:43:32.044: INFO: Waiting up to 5m0s for pod "pod-secrets-7ddd5280-3763-404c-ae4f-c47ad3d3f026" in namespace "secrets-3798" to be "Succeeded or Failed" +Sep 7 07:43:32.056: INFO: Pod "pod-secrets-7ddd5280-3763-404c-ae4f-c47ad3d3f026": Phase="Pending", Reason="", readiness=false. Elapsed: 12.00006ms +Sep 7 07:43:34.070: INFO: Pod "pod-secrets-7ddd5280-3763-404c-ae4f-c47ad3d3f026": Phase="Pending", Reason="", readiness=false. Elapsed: 2.025299659s +Sep 7 07:43:36.073: INFO: Pod "pod-secrets-7ddd5280-3763-404c-ae4f-c47ad3d3f026": Phase="Running", Reason="", readiness=false. Elapsed: 4.02897933s +Sep 7 07:43:38.082: INFO: Pod "pod-secrets-7ddd5280-3763-404c-ae4f-c47ad3d3f026": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.03777737s +STEP: Saw pod success +Sep 7 07:43:38.082: INFO: Pod "pod-secrets-7ddd5280-3763-404c-ae4f-c47ad3d3f026" satisfied condition "Succeeded or Failed" +Sep 7 07:43:38.084: INFO: Trying to get logs from node 172.31.51.96 pod pod-secrets-7ddd5280-3763-404c-ae4f-c47ad3d3f026 container secret-volume-test: +STEP: delete the pod +Sep 7 07:43:38.104: INFO: Waiting for pod pod-secrets-7ddd5280-3763-404c-ae4f-c47ad3d3f026 to disappear +Sep 7 07:43:38.106: INFO: Pod pod-secrets-7ddd5280-3763-404c-ae4f-c47ad3d3f026 no longer exists +[AfterEach] [sig-storage] Secrets + test/e2e/framework/framework.go:188 +Sep 7 07:43:38.106: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "secrets-3798" for this suite. 
+ +• [SLOW TEST:6.158 seconds] +[sig-storage] Secrets +test/e2e/common/storage/framework.go:23 + should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] + test/e2e/framework/framework.go:652 +------------------------------ +{"msg":"PASSED [sig-storage] Secrets should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":356,"completed":20,"skipped":384,"failed":0} +S +------------------------------ +[sig-storage] Subpath Atomic writer volumes + should support subpaths with projected pod [Conformance] + test/e2e/framework/framework.go:652 +[BeforeEach] [sig-storage] Subpath + test/e2e/framework/framework.go:187 +STEP: Creating a kubernetes client +Sep 7 07:43:38.113: INFO: >>> kubeConfig: /tmp/kubeconfig-1156948534 +STEP: Building a namespace api object, basename subpath +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[BeforeEach] Atomic writer volumes + test/e2e/storage/subpath.go:40 +STEP: Setting up data +[It] should support subpaths with projected pod [Conformance] + test/e2e/framework/framework.go:652 +STEP: Creating pod pod-subpath-test-projected-5q74 +STEP: Creating a pod to test atomic-volume-subpath +Sep 7 07:43:38.160: INFO: Waiting up to 5m0s for pod "pod-subpath-test-projected-5q74" in namespace "subpath-7237" to be "Succeeded or Failed" +Sep 7 07:43:38.168: INFO: Pod "pod-subpath-test-projected-5q74": Phase="Pending", Reason="", readiness=false. Elapsed: 7.575511ms +Sep 7 07:43:40.179: INFO: Pod "pod-subpath-test-projected-5q74": Phase="Pending", Reason="", readiness=false. Elapsed: 2.018803119s +Sep 7 07:43:42.191: INFO: Pod "pod-subpath-test-projected-5q74": Phase="Running", Reason="", readiness=true. Elapsed: 4.030434445s +Sep 7 07:43:44.202: INFO: Pod "pod-subpath-test-projected-5q74": Phase="Running", Reason="", readiness=true. 
Elapsed: 6.041474536s +Sep 7 07:43:46.207: INFO: Pod "pod-subpath-test-projected-5q74": Phase="Running", Reason="", readiness=true. Elapsed: 8.046101132s +Sep 7 07:43:48.214: INFO: Pod "pod-subpath-test-projected-5q74": Phase="Running", Reason="", readiness=true. Elapsed: 10.053593641s +Sep 7 07:43:50.228: INFO: Pod "pod-subpath-test-projected-5q74": Phase="Running", Reason="", readiness=true. Elapsed: 12.067292537s +Sep 7 07:43:52.239: INFO: Pod "pod-subpath-test-projected-5q74": Phase="Running", Reason="", readiness=true. Elapsed: 14.078665284s +Sep 7 07:43:54.248: INFO: Pod "pod-subpath-test-projected-5q74": Phase="Running", Reason="", readiness=true. Elapsed: 16.087104162s +Sep 7 07:43:56.254: INFO: Pod "pod-subpath-test-projected-5q74": Phase="Running", Reason="", readiness=true. Elapsed: 18.093339343s +Sep 7 07:43:58.262: INFO: Pod "pod-subpath-test-projected-5q74": Phase="Running", Reason="", readiness=true. Elapsed: 20.1012873s +Sep 7 07:44:00.272: INFO: Pod "pod-subpath-test-projected-5q74": Phase="Running", Reason="", readiness=true. Elapsed: 22.11171452s +Sep 7 07:44:02.284: INFO: Pod "pod-subpath-test-projected-5q74": Phase="Running", Reason="", readiness=false. Elapsed: 24.123939858s +Sep 7 07:44:04.296: INFO: Pod "pod-subpath-test-projected-5q74": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 26.135538558s +STEP: Saw pod success +Sep 7 07:44:04.296: INFO: Pod "pod-subpath-test-projected-5q74" satisfied condition "Succeeded or Failed" +Sep 7 07:44:04.304: INFO: Trying to get logs from node 172.31.51.96 pod pod-subpath-test-projected-5q74 container test-container-subpath-projected-5q74: +STEP: delete the pod +Sep 7 07:44:04.322: INFO: Waiting for pod pod-subpath-test-projected-5q74 to disappear +Sep 7 07:44:04.327: INFO: Pod pod-subpath-test-projected-5q74 no longer exists +STEP: Deleting pod pod-subpath-test-projected-5q74 +Sep 7 07:44:04.327: INFO: Deleting pod "pod-subpath-test-projected-5q74" in namespace "subpath-7237" +[AfterEach] [sig-storage] Subpath + test/e2e/framework/framework.go:188 +Sep 7 07:44:04.330: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "subpath-7237" for this suite. + +• [SLOW TEST:26.232 seconds] +[sig-storage] Subpath +test/e2e/storage/utils/framework.go:23 + Atomic writer volumes + test/e2e/storage/subpath.go:36 + should support subpaths with projected pod [Conformance] + test/e2e/framework/framework.go:652 +------------------------------ +{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with projected pod [Conformance]","total":356,"completed":21,"skipped":385,"failed":0} +SSSS +------------------------------ +[sig-apps] CronJob + should schedule multiple jobs concurrently [Conformance] + test/e2e/framework/framework.go:652 +[BeforeEach] [sig-apps] CronJob + test/e2e/framework/framework.go:187 +STEP: Creating a kubernetes client +Sep 7 07:44:04.346: INFO: >>> kubeConfig: /tmp/kubeconfig-1156948534 +STEP: Building a namespace api object, basename cronjob +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[It] should schedule multiple jobs concurrently [Conformance] + test/e2e/framework/framework.go:652 +STEP: Creating a cronjob +STEP: Ensuring more than 
one job is running at a time +STEP: Ensuring at least two running jobs exists by listing jobs explicitly +STEP: Removing cronjob +[AfterEach] [sig-apps] CronJob + test/e2e/framework/framework.go:188 +Sep 7 07:46:00.408: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "cronjob-917" for this suite. + +• [SLOW TEST:116.112 seconds] +[sig-apps] CronJob +test/e2e/apps/framework.go:23 + should schedule multiple jobs concurrently [Conformance] + test/e2e/framework/framework.go:652 +------------------------------ +{"msg":"PASSED [sig-apps] CronJob should schedule multiple jobs concurrently [Conformance]","total":356,"completed":22,"skipped":389,"failed":0} +SSSSSSSSSSS +------------------------------ +[sig-node] Variable Expansion + should allow substituting values in a container's args [NodeConformance] [Conformance] + test/e2e/framework/framework.go:652 +[BeforeEach] [sig-node] Variable Expansion + test/e2e/framework/framework.go:187 +STEP: Creating a kubernetes client +Sep 7 07:46:00.457: INFO: >>> kubeConfig: /tmp/kubeconfig-1156948534 +STEP: Building a namespace api object, basename var-expansion +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[It] should allow substituting values in a container's args [NodeConformance] [Conformance] + test/e2e/framework/framework.go:652 +STEP: Creating a pod to test substitution in container's args +Sep 7 07:46:00.531: INFO: Waiting up to 5m0s for pod "var-expansion-f4415250-f36a-4d56-802a-8cc0eb87b566" in namespace "var-expansion-9831" to be "Succeeded or Failed" +Sep 7 07:46:00.542: INFO: Pod "var-expansion-f4415250-f36a-4d56-802a-8cc0eb87b566": Phase="Pending", Reason="", readiness=false. Elapsed: 10.629355ms +Sep 7 07:46:02.553: INFO: Pod "var-expansion-f4415250-f36a-4d56-802a-8cc0eb87b566": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.021940313s +Sep 7 07:46:04.575: INFO: Pod "var-expansion-f4415250-f36a-4d56-802a-8cc0eb87b566": Phase="Pending", Reason="", readiness=false. Elapsed: 4.043944454s +Sep 7 07:46:06.583: INFO: Pod "var-expansion-f4415250-f36a-4d56-802a-8cc0eb87b566": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.051934182s +STEP: Saw pod success +Sep 7 07:46:06.583: INFO: Pod "var-expansion-f4415250-f36a-4d56-802a-8cc0eb87b566" satisfied condition "Succeeded or Failed" +Sep 7 07:46:06.587: INFO: Trying to get logs from node 172.31.51.96 pod var-expansion-f4415250-f36a-4d56-802a-8cc0eb87b566 container dapi-container: +STEP: delete the pod +Sep 7 07:46:06.635: INFO: Waiting for pod var-expansion-f4415250-f36a-4d56-802a-8cc0eb87b566 to disappear +Sep 7 07:46:06.642: INFO: Pod var-expansion-f4415250-f36a-4d56-802a-8cc0eb87b566 no longer exists +[AfterEach] [sig-node] Variable Expansion + test/e2e/framework/framework.go:188 +Sep 7 07:46:06.642: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "var-expansion-9831" for this suite. 
+ +• [SLOW TEST:6.212 seconds] +[sig-node] Variable Expansion +test/e2e/common/node/framework.go:23 + should allow substituting values in a container's args [NodeConformance] [Conformance] + test/e2e/framework/framework.go:652 +------------------------------ +{"msg":"PASSED [sig-node] Variable Expansion should allow substituting values in a container's args [NodeConformance] [Conformance]","total":356,"completed":23,"skipped":400,"failed":0} +SSSSSSSSSSSSSSSS +------------------------------ +[sig-node] Secrets + should patch a secret [Conformance] + test/e2e/framework/framework.go:652 +[BeforeEach] [sig-node] Secrets + test/e2e/framework/framework.go:187 +STEP: Creating a kubernetes client +Sep 7 07:46:06.669: INFO: >>> kubeConfig: /tmp/kubeconfig-1156948534 +STEP: Building a namespace api object, basename secrets +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[It] should patch a secret [Conformance] + test/e2e/framework/framework.go:652 +STEP: creating a secret +STEP: listing secrets in all namespaces to ensure that there are more than zero +STEP: patching the secret +STEP: deleting the secret using a LabelSelector +STEP: listing secrets in all namespaces, searching for label name and value in patch +[AfterEach] [sig-node] Secrets + test/e2e/framework/framework.go:188 +Sep 7 07:46:06.730: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "secrets-6807" for this suite. 
+•{"msg":"PASSED [sig-node] Secrets should patch a secret [Conformance]","total":356,"completed":24,"skipped":416,"failed":0} +SS +------------------------------ +[sig-storage] Secrets + should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/framework/framework.go:652 +[BeforeEach] [sig-storage] Secrets + test/e2e/framework/framework.go:187 +STEP: Creating a kubernetes client +Sep 7 07:46:06.748: INFO: >>> kubeConfig: /tmp/kubeconfig-1156948534 +STEP: Building a namespace api object, basename secrets +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/framework/framework.go:652 +STEP: Creating secret with name secret-test-39aa9638-a006-4137-ba6f-d350d0542119 +STEP: Creating a pod to test consume secrets +Sep 7 07:46:06.787: INFO: Waiting up to 5m0s for pod "pod-secrets-738d5609-b1e7-429a-a565-b9a5c6bd0c23" in namespace "secrets-1680" to be "Succeeded or Failed" +Sep 7 07:46:06.791: INFO: Pod "pod-secrets-738d5609-b1e7-429a-a565-b9a5c6bd0c23": Phase="Pending", Reason="", readiness=false. Elapsed: 4.662367ms +Sep 7 07:46:08.800: INFO: Pod "pod-secrets-738d5609-b1e7-429a-a565-b9a5c6bd0c23": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013527519s +Sep 7 07:46:10.820: INFO: Pod "pod-secrets-738d5609-b1e7-429a-a565-b9a5c6bd0c23": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.032761293s +STEP: Saw pod success +Sep 7 07:46:10.820: INFO: Pod "pod-secrets-738d5609-b1e7-429a-a565-b9a5c6bd0c23" satisfied condition "Succeeded or Failed" +Sep 7 07:46:10.824: INFO: Trying to get logs from node 172.31.51.96 pod pod-secrets-738d5609-b1e7-429a-a565-b9a5c6bd0c23 container secret-volume-test: +STEP: delete the pod +Sep 7 07:46:10.846: INFO: Waiting for pod pod-secrets-738d5609-b1e7-429a-a565-b9a5c6bd0c23 to disappear +Sep 7 07:46:10.850: INFO: Pod pod-secrets-738d5609-b1e7-429a-a565-b9a5c6bd0c23 no longer exists +[AfterEach] [sig-storage] Secrets + test/e2e/framework/framework.go:188 +Sep 7 07:46:10.850: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "secrets-1680" for this suite. +•{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":356,"completed":25,"skipped":418,"failed":0} +SSSSSSSSSSSSSSSS +------------------------------ +[sig-network] EndpointSliceMirroring + should mirror a custom Endpoints resource through create update and delete [Conformance] + test/e2e/framework/framework.go:652 +[BeforeEach] [sig-network] EndpointSliceMirroring + test/e2e/framework/framework.go:187 +STEP: Creating a kubernetes client +Sep 7 07:46:10.866: INFO: >>> kubeConfig: /tmp/kubeconfig-1156948534 +STEP: Building a namespace api object, basename endpointslicemirroring +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[BeforeEach] [sig-network] EndpointSliceMirroring + test/e2e/network/endpointslicemirroring.go:41 +[It] should mirror a custom Endpoints resource through create update and delete [Conformance] + test/e2e/framework/framework.go:652 +STEP: mirroring a new custom Endpoint +Sep 7 07:46:10.938: INFO: Waiting for at least 1 EndpointSlice to exist, got 0 +STEP: mirroring an update to a custom 
Endpoint +Sep 7 07:46:12.969: INFO: Expected EndpointSlice to have 10.2.3.4 as address, got 10.1.2.3 +STEP: mirroring deletion of a custom Endpoint +Sep 7 07:46:14.992: INFO: Waiting for 0 EndpointSlices to exist, got 1 +[AfterEach] [sig-network] EndpointSliceMirroring + test/e2e/framework/framework.go:188 +Sep 7 07:46:17.012: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "endpointslicemirroring-7169" for this suite. + +• [SLOW TEST:6.158 seconds] +[sig-network] EndpointSliceMirroring +test/e2e/network/common/framework.go:23 + should mirror a custom Endpoints resource through create update and delete [Conformance] + test/e2e/framework/framework.go:652 +------------------------------ +{"msg":"PASSED [sig-network] EndpointSliceMirroring should mirror a custom Endpoints resource through create update and delete [Conformance]","total":356,"completed":26,"skipped":434,"failed":0} +SSSSSSSS +------------------------------ +[sig-node] Variable Expansion + should fail substituting values in a volume subpath with absolute path [Slow] [Conformance] + test/e2e/framework/framework.go:652 +[BeforeEach] [sig-node] Variable Expansion + test/e2e/framework/framework.go:187 +STEP: Creating a kubernetes client +Sep 7 07:46:17.024: INFO: >>> kubeConfig: /tmp/kubeconfig-1156948534 +STEP: Building a namespace api object, basename var-expansion +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[It] should fail substituting values in a volume subpath with absolute path [Slow] [Conformance] + test/e2e/framework/framework.go:652 +Sep 7 07:46:19.192: INFO: Deleting pod "var-expansion-85ddb271-7499-418e-b47c-4735d7112dcf" in namespace "var-expansion-1425" +Sep 7 07:46:19.201: INFO: Wait up to 5m0s for pod "var-expansion-85ddb271-7499-418e-b47c-4735d7112dcf" to be fully deleted +[AfterEach] [sig-node] Variable Expansion + test/e2e/framework/framework.go:188 
+Sep 7 07:46:21.235: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "var-expansion-1425" for this suite. +•{"msg":"PASSED [sig-node] Variable Expansion should fail substituting values in a volume subpath with absolute path [Slow] [Conformance]","total":356,"completed":27,"skipped":442,"failed":0} +SSSSSSSSSSS +------------------------------ +[sig-node] InitContainer [NodeConformance] + should invoke init containers on a RestartAlways pod [Conformance] + test/e2e/framework/framework.go:652 +[BeforeEach] [sig-node] InitContainer [NodeConformance] + test/e2e/framework/framework.go:187 +STEP: Creating a kubernetes client +Sep 7 07:46:21.252: INFO: >>> kubeConfig: /tmp/kubeconfig-1156948534 +STEP: Building a namespace api object, basename init-container +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[BeforeEach] [sig-node] InitContainer [NodeConformance] + test/e2e/common/node/init_container.go:164 +[It] should invoke init containers on a RestartAlways pod [Conformance] + test/e2e/framework/framework.go:652 +STEP: creating the pod +Sep 7 07:46:21.297: INFO: PodSpec: initContainers in spec.initContainers +[AfterEach] [sig-node] InitContainer [NodeConformance] + test/e2e/framework/framework.go:188 +Sep 7 07:46:24.769: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "init-container-7084" for this suite. 
+•{"msg":"PASSED [sig-node] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance]","total":356,"completed":28,"skipped":453,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-apps] ReplicationController + should adopt matching pods on creation [Conformance] + test/e2e/framework/framework.go:652 +[BeforeEach] [sig-apps] ReplicationController + test/e2e/framework/framework.go:187 +STEP: Creating a kubernetes client +Sep 7 07:46:24.808: INFO: >>> kubeConfig: /tmp/kubeconfig-1156948534 +STEP: Building a namespace api object, basename replication-controller +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[BeforeEach] [sig-apps] ReplicationController + test/e2e/apps/rc.go:56 +[It] should adopt matching pods on creation [Conformance] + test/e2e/framework/framework.go:652 +STEP: Given a Pod with a 'name' label pod-adoption is created +Sep 7 07:46:24.890: INFO: The status of Pod pod-adoption is Pending, waiting for it to be Running (with Ready = true) +Sep 7 07:46:26.898: INFO: The status of Pod pod-adoption is Running (Ready = true) +STEP: When a replication controller with a matching selector is created +STEP: Then the orphan pod is adopted +[AfterEach] [sig-apps] ReplicationController + test/e2e/framework/framework.go:188 +Sep 7 07:46:27.917: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "replication-controller-1059" for this suite. 
+•{"msg":"PASSED [sig-apps] ReplicationController should adopt matching pods on creation [Conformance]","total":356,"completed":29,"skipped":508,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-storage] Projected configMap + should be consumable from pods in volume with mappings [NodeConformance] [Conformance] + test/e2e/framework/framework.go:652 +[BeforeEach] [sig-storage] Projected configMap + test/e2e/framework/framework.go:187 +STEP: Creating a kubernetes client +Sep 7 07:46:27.930: INFO: >>> kubeConfig: /tmp/kubeconfig-1156948534 +STEP: Building a namespace api object, basename projected +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] + test/e2e/framework/framework.go:652 +STEP: Creating configMap with name projected-configmap-test-volume-map-9fe8f7bc-2f98-4f76-8e64-65ba7e46d245 +STEP: Creating a pod to test consume configMaps +Sep 7 07:46:27.986: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-c9e958b9-0b96-4d36-aff6-4ded36d5dbbd" in namespace "projected-4526" to be "Succeeded or Failed" +Sep 7 07:46:28.001: INFO: Pod "pod-projected-configmaps-c9e958b9-0b96-4d36-aff6-4ded36d5dbbd": Phase="Pending", Reason="", readiness=false. Elapsed: 14.915917ms +Sep 7 07:46:30.033: INFO: Pod "pod-projected-configmaps-c9e958b9-0b96-4d36-aff6-4ded36d5dbbd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.047677235s +Sep 7 07:46:32.042: INFO: Pod "pod-projected-configmaps-c9e958b9-0b96-4d36-aff6-4ded36d5dbbd": Phase="Pending", Reason="", readiness=false. Elapsed: 4.056576947s +Sep 7 07:46:34.047: INFO: Pod "pod-projected-configmaps-c9e958b9-0b96-4d36-aff6-4ded36d5dbbd": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.061174783s +STEP: Saw pod success +Sep 7 07:46:34.047: INFO: Pod "pod-projected-configmaps-c9e958b9-0b96-4d36-aff6-4ded36d5dbbd" satisfied condition "Succeeded or Failed" +Sep 7 07:46:34.049: INFO: Trying to get logs from node 172.31.51.96 pod pod-projected-configmaps-c9e958b9-0b96-4d36-aff6-4ded36d5dbbd container agnhost-container: +STEP: delete the pod +Sep 7 07:46:34.071: INFO: Waiting for pod pod-projected-configmaps-c9e958b9-0b96-4d36-aff6-4ded36d5dbbd to disappear +Sep 7 07:46:34.074: INFO: Pod pod-projected-configmaps-c9e958b9-0b96-4d36-aff6-4ded36d5dbbd no longer exists +[AfterEach] [sig-storage] Projected configMap + test/e2e/framework/framework.go:188 +Sep 7 07:46:34.074: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "projected-4526" for this suite. + +• [SLOW TEST:6.156 seconds] +[sig-storage] Projected configMap +test/e2e/common/storage/framework.go:23 + should be consumable from pods in volume with mappings [NodeConformance] [Conformance] + test/e2e/framework/framework.go:652 +------------------------------ +{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":356,"completed":30,"skipped":546,"failed":0} +SSSSSSS +------------------------------ +[sig-storage] ConfigMap + optional updates should be reflected in volume [NodeConformance] [Conformance] + test/e2e/framework/framework.go:652 +[BeforeEach] [sig-storage] ConfigMap + test/e2e/framework/framework.go:187 +STEP: Creating a kubernetes client +Sep 7 07:46:34.087: INFO: >>> kubeConfig: /tmp/kubeconfig-1156948534 +STEP: Building a namespace api object, basename configmap +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[It] optional updates should be reflected in volume [NodeConformance] [Conformance] + test/e2e/framework/framework.go:652 +STEP: Creating configMap 
with name cm-test-opt-del-27189e65-e690-47da-882c-6907f3ff242c +STEP: Creating configMap with name cm-test-opt-upd-23a5b816-28f4-4133-ae31-3f570cc4b480 +STEP: Creating the pod +Sep 7 07:46:34.172: INFO: The status of Pod pod-configmaps-54279897-334a-4880-a19e-d4bcea104b08 is Pending, waiting for it to be Running (with Ready = true) +Sep 7 07:46:36.177: INFO: The status of Pod pod-configmaps-54279897-334a-4880-a19e-d4bcea104b08 is Running (Ready = true) +STEP: Deleting configmap cm-test-opt-del-27189e65-e690-47da-882c-6907f3ff242c +STEP: Updating configmap cm-test-opt-upd-23a5b816-28f4-4133-ae31-3f570cc4b480 +STEP: Creating configMap with name cm-test-opt-create-7c3973ca-0539-4b15-b068-03018b547143 +STEP: waiting to observe update in volume +[AfterEach] [sig-storage] ConfigMap + test/e2e/framework/framework.go:188 +Sep 7 07:46:38.251: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "configmap-1769" for this suite. +•{"msg":"PASSED [sig-storage] ConfigMap optional updates should be reflected in volume [NodeConformance] [Conformance]","total":356,"completed":31,"skipped":553,"failed":0} + +------------------------------ +[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + listing mutating webhooks should work [Conformance] + test/e2e/framework/framework.go:652 +[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/framework/framework.go:187 +STEP: Creating a kubernetes client +Sep 7 07:46:38.334: INFO: >>> kubeConfig: /tmp/kubeconfig-1156948534 +STEP: Building a namespace api object, basename webhook +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/apimachinery/webhook.go:89 +STEP: Setting up server cert +STEP: Create role binding to let webhook read extension-apiserver-authentication +STEP: Deploying the 
webhook pod +STEP: Wait for the deployment to be ready +Sep 7 07:46:40.857: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set +Sep 7 07:46:42.882: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2022, time.September, 7, 7, 46, 40, 0, time.Local), LastTransitionTime:time.Date(2022, time.September, 7, 7, 46, 40, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2022, time.September, 7, 7, 46, 40, 0, time.Local), LastTransitionTime:time.Date(2022, time.September, 7, 7, 46, 40, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-68c7bd4684\" is progressing."}}, CollisionCount:(*int32)(nil)} +STEP: Deploying the webhook service +STEP: Verifying the service has paired with the endpoint +Sep 7 07:46:45.962: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 +[It] listing mutating webhooks should work [Conformance] + test/e2e/framework/framework.go:652 +STEP: Listing all of the created validation webhooks +STEP: Creating a configMap that should be mutated +STEP: Deleting the collection of validation webhooks +STEP: Creating a configMap that should not be mutated +[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/framework/framework.go:188 +Sep 7 07:46:46.160: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "webhook-2194" for this suite. +STEP: Destroying namespace "webhook-2194-markers" for this suite. 
+[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/apimachinery/webhook.go:104 + +• [SLOW TEST:7.924 seconds] +[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] +test/e2e/apimachinery/framework.go:23 + listing mutating webhooks should work [Conformance] + test/e2e/framework/framework.go:652 +------------------------------ +{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]","total":356,"completed":32,"skipped":553,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-node] Probing container + should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] + test/e2e/framework/framework.go:652 +[BeforeEach] [sig-node] Probing container + test/e2e/framework/framework.go:187 +STEP: Creating a kubernetes client +Sep 7 07:46:46.259: INFO: >>> kubeConfig: /tmp/kubeconfig-1156948534 +STEP: Building a namespace api object, basename container-probe +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[BeforeEach] [sig-node] Probing container + test/e2e/common/node/container_probe.go:61 +[It] should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] + test/e2e/framework/framework.go:652 +STEP: Creating pod busybox-2c2657a8-3001-402c-aa87-74e900a45999 in namespace container-probe-9976 +Sep 7 07:46:48.393: INFO: Started pod busybox-2c2657a8-3001-402c-aa87-74e900a45999 in namespace container-probe-9976 +STEP: checking the pod's current state and verifying that restartCount is present +Sep 7 07:46:48.395: INFO: Initial restart count of pod busybox-2c2657a8-3001-402c-aa87-74e900a45999 is 0 +Sep 7 07:47:38.664: INFO: Restart count of pod container-probe-9976/busybox-2c2657a8-3001-402c-aa87-74e900a45999 is now 1 (50.268668274s elapsed) +STEP: deleting the pod 
+[AfterEach] [sig-node] Probing container + test/e2e/framework/framework.go:188 +Sep 7 07:47:38.683: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "container-probe-9976" for this suite. + +• [SLOW TEST:52.434 seconds] +[sig-node] Probing container +test/e2e/common/node/framework.go:23 + should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] + test/e2e/framework/framework.go:652 +------------------------------ +{"msg":"PASSED [sig-node] Probing container should be restarted with a exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]","total":356,"completed":33,"skipped":585,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-storage] ConfigMap + should be consumable from pods in volume with mappings [NodeConformance] [Conformance] + test/e2e/framework/framework.go:652 +[BeforeEach] [sig-storage] ConfigMap + test/e2e/framework/framework.go:187 +STEP: Creating a kubernetes client +Sep 7 07:47:38.692: INFO: >>> kubeConfig: /tmp/kubeconfig-1156948534 +STEP: Building a namespace api object, basename configmap +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] + test/e2e/framework/framework.go:652 +STEP: Creating configMap with name configmap-test-volume-map-ee66e86e-2e61-4686-9829-50be7af1aa80 +STEP: Creating a pod to test consume configMaps +Sep 7 07:47:38.753: INFO: Waiting up to 5m0s for pod "pod-configmaps-63480b34-4d01-4726-a1d4-3a8136a9c841" in namespace "configmap-6823" to be "Succeeded or Failed" +Sep 7 07:47:38.760: INFO: Pod "pod-configmaps-63480b34-4d01-4726-a1d4-3a8136a9c841": Phase="Pending", Reason="", readiness=false. 
Elapsed: 6.872203ms +Sep 7 07:47:40.771: INFO: Pod "pod-configmaps-63480b34-4d01-4726-a1d4-3a8136a9c841": Phase="Pending", Reason="", readiness=false. Elapsed: 2.018098885s +Sep 7 07:47:42.783: INFO: Pod "pod-configmaps-63480b34-4d01-4726-a1d4-3a8136a9c841": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.029680618s +STEP: Saw pod success +Sep 7 07:47:42.783: INFO: Pod "pod-configmaps-63480b34-4d01-4726-a1d4-3a8136a9c841" satisfied condition "Succeeded or Failed" +Sep 7 07:47:42.788: INFO: Trying to get logs from node 172.31.51.96 pod pod-configmaps-63480b34-4d01-4726-a1d4-3a8136a9c841 container agnhost-container: +STEP: delete the pod +Sep 7 07:47:42.806: INFO: Waiting for pod pod-configmaps-63480b34-4d01-4726-a1d4-3a8136a9c841 to disappear +Sep 7 07:47:42.811: INFO: Pod pod-configmaps-63480b34-4d01-4726-a1d4-3a8136a9c841 no longer exists +[AfterEach] [sig-storage] ConfigMap + test/e2e/framework/framework.go:188 +Sep 7 07:47:42.811: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "configmap-6823" for this suite. 
+•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":356,"completed":34,"skipped":610,"failed":0} +SSSSSSSSSSS +------------------------------ +[sig-node] RuntimeClass + should reject a Pod requesting a deleted RuntimeClass [NodeConformance] [Conformance] + test/e2e/framework/framework.go:652 +[BeforeEach] [sig-node] RuntimeClass + test/e2e/framework/framework.go:187 +STEP: Creating a kubernetes client +Sep 7 07:47:42.820: INFO: >>> kubeConfig: /tmp/kubeconfig-1156948534 +STEP: Building a namespace api object, basename runtimeclass +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[It] should reject a Pod requesting a deleted RuntimeClass [NodeConformance] [Conformance] + test/e2e/framework/framework.go:652 +STEP: Deleting RuntimeClass runtimeclass-5429-delete-me +STEP: Waiting for the RuntimeClass to disappear +[AfterEach] [sig-node] RuntimeClass + test/e2e/framework/framework.go:188 +Sep 7 07:47:42.879: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "runtimeclass-5429" for this suite. 
+•{"msg":"PASSED [sig-node] RuntimeClass should reject a Pod requesting a deleted RuntimeClass [NodeConformance] [Conformance]","total":356,"completed":35,"skipped":621,"failed":0} +SSSSSSSSSSSSSSSSS +------------------------------ +[sig-node] Pods + should support remote command execution over websockets [NodeConformance] [Conformance] + test/e2e/framework/framework.go:652 +[BeforeEach] [sig-node] Pods + test/e2e/framework/framework.go:187 +STEP: Creating a kubernetes client +Sep 7 07:47:42.889: INFO: >>> kubeConfig: /tmp/kubeconfig-1156948534 +STEP: Building a namespace api object, basename pods +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[BeforeEach] [sig-node] Pods + test/e2e/common/node/pods.go:191 +[It] should support remote command execution over websockets [NodeConformance] [Conformance] + test/e2e/framework/framework.go:652 +Sep 7 07:47:42.918: INFO: >>> kubeConfig: /tmp/kubeconfig-1156948534 +STEP: creating the pod +STEP: submitting the pod to kubernetes +Sep 7 07:47:42.933: INFO: The status of Pod pod-exec-websocket-041f7ffe-dab9-42e2-b4f2-30528497edca is Pending, waiting for it to be Running (with Ready = true) +Sep 7 07:47:44.945: INFO: The status of Pod pod-exec-websocket-041f7ffe-dab9-42e2-b4f2-30528497edca is Running (Ready = true) +[AfterEach] [sig-node] Pods + test/e2e/framework/framework.go:188 +Sep 7 07:47:45.031: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "pods-8747" for this suite. 
+•{"msg":"PASSED [sig-node] Pods should support remote command execution over websockets [NodeConformance] [Conformance]","total":356,"completed":36,"skipped":638,"failed":0} +SS +------------------------------ +[sig-node] Container Lifecycle Hook when create a pod with lifecycle hook + should execute prestop http hook properly [NodeConformance] [Conformance] + test/e2e/framework/framework.go:652 +[BeforeEach] [sig-node] Container Lifecycle Hook + test/e2e/framework/framework.go:187 +STEP: Creating a kubernetes client +Sep 7 07:47:45.043: INFO: >>> kubeConfig: /tmp/kubeconfig-1156948534 +STEP: Building a namespace api object, basename container-lifecycle-hook +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[BeforeEach] when create a pod with lifecycle hook + test/e2e/common/node/lifecycle_hook.go:55 +STEP: create the container to handle the HTTPGet hook request. +Sep 7 07:47:45.107: INFO: The status of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true) +Sep 7 07:47:47.146: INFO: The status of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true) +Sep 7 07:47:49.114: INFO: The status of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true) +Sep 7 07:47:51.112: INFO: The status of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true) +Sep 7 07:47:53.120: INFO: The status of Pod pod-handle-http-request is Running (Ready = true) +[It] should execute prestop http hook properly [NodeConformance] [Conformance] + test/e2e/framework/framework.go:652 +STEP: create the pod with lifecycle hook +Sep 7 07:47:53.140: INFO: The status of Pod pod-with-prestop-http-hook is Pending, waiting for it to be Running (with Ready = true) +Sep 7 07:47:55.153: INFO: The status of Pod pod-with-prestop-http-hook is Running (Ready = true) +STEP: delete the pod 
with lifecycle hook +Sep 7 07:47:55.167: INFO: Waiting for pod pod-with-prestop-http-hook to disappear +Sep 7 07:47:55.170: INFO: Pod pod-with-prestop-http-hook still exists +Sep 7 07:47:57.181: INFO: Waiting for pod pod-with-prestop-http-hook to disappear +Sep 7 07:47:57.201: INFO: Pod pod-with-prestop-http-hook no longer exists +STEP: check prestop hook +[AfterEach] [sig-node] Container Lifecycle Hook + test/e2e/framework/framework.go:188 +Sep 7 07:47:57.219: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "container-lifecycle-hook-2126" for this suite. + +• [SLOW TEST:12.183 seconds] +[sig-node] Container Lifecycle Hook +test/e2e/common/node/framework.go:23 + when create a pod with lifecycle hook + test/e2e/common/node/lifecycle_hook.go:46 + should execute prestop http hook properly [NodeConformance] [Conformance] + test/e2e/framework/framework.go:652 +------------------------------ +{"msg":"PASSED [sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance]","total":356,"completed":37,"skipped":640,"failed":0} +SSSSSSS +------------------------------ +[sig-storage] ConfigMap + updates should be reflected in volume [NodeConformance] [Conformance] + test/e2e/framework/framework.go:652 +[BeforeEach] [sig-storage] ConfigMap + test/e2e/framework/framework.go:187 +STEP: Creating a kubernetes client +Sep 7 07:47:57.227: INFO: >>> kubeConfig: /tmp/kubeconfig-1156948534 +STEP: Building a namespace api object, basename configmap +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[It] updates should be reflected in volume [NodeConformance] [Conformance] + test/e2e/framework/framework.go:652 +STEP: Creating configMap with name configmap-test-upd-b08117cf-9f69-4f75-bc61-571240a64718 +STEP: Creating the pod +Sep 7 07:47:57.293: INFO: The status of Pod 
pod-configmaps-d3304b50-294d-44a0-a8cd-99a94bde7ba3 is Pending, waiting for it to be Running (with Ready = true) +Sep 7 07:47:59.307: INFO: The status of Pod pod-configmaps-d3304b50-294d-44a0-a8cd-99a94bde7ba3 is Pending, waiting for it to be Running (with Ready = true) +Sep 7 07:48:01.318: INFO: The status of Pod pod-configmaps-d3304b50-294d-44a0-a8cd-99a94bde7ba3 is Running (Ready = true) +STEP: Updating configmap configmap-test-upd-b08117cf-9f69-4f75-bc61-571240a64718 +STEP: waiting to observe update in volume +[AfterEach] [sig-storage] ConfigMap + test/e2e/framework/framework.go:188 +Sep 7 07:49:31.770: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "configmap-2753" for this suite. + +• [SLOW TEST:94.559 seconds] +[sig-storage] ConfigMap +test/e2e/common/storage/framework.go:23 + updates should be reflected in volume [NodeConformance] [Conformance] + test/e2e/framework/framework.go:652 +------------------------------ +{"msg":"PASSED [sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance]","total":356,"completed":38,"skipped":647,"failed":0} +SSSSSS +------------------------------ +[sig-apps] ReplicaSet + should adopt matching pods on creation and release no longer matching pods [Conformance] + test/e2e/framework/framework.go:652 +[BeforeEach] [sig-apps] ReplicaSet + test/e2e/framework/framework.go:187 +STEP: Creating a kubernetes client +Sep 7 07:49:31.785: INFO: >>> kubeConfig: /tmp/kubeconfig-1156948534 +STEP: Building a namespace api object, basename replicaset +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[It] should adopt matching pods on creation and release no longer matching pods [Conformance] + test/e2e/framework/framework.go:652 +STEP: Given a Pod with a 'name' label pod-adoption-release is created +Sep 7 07:49:31.857: INFO: The status of Pod pod-adoption-release is Pending, 
waiting for it to be Running (with Ready = true) +Sep 7 07:49:33.862: INFO: The status of Pod pod-adoption-release is Running (Ready = true) +STEP: When a replicaset with a matching selector is created +STEP: Then the orphan pod is adopted +STEP: When the matched label of one of its pods change +Sep 7 07:49:34.889: INFO: Pod name pod-adoption-release: Found 1 pods out of 1 +STEP: Then the pod is released +[AfterEach] [sig-apps] ReplicaSet + test/e2e/framework/framework.go:188 +Sep 7 07:49:35.935: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "replicaset-3733" for this suite. +•{"msg":"PASSED [sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance]","total":356,"completed":39,"skipped":653,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-cli] Kubectl client Kubectl cluster-info + should check if Kubernetes control plane services is included in cluster-info [Conformance] + test/e2e/framework/framework.go:652 +[BeforeEach] [sig-cli] Kubectl client + test/e2e/framework/framework.go:187 +STEP: Creating a kubernetes client +Sep 7 07:49:35.946: INFO: >>> kubeConfig: /tmp/kubeconfig-1156948534 +STEP: Building a namespace api object, basename kubectl +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[BeforeEach] [sig-cli] Kubectl client + test/e2e/kubectl/kubectl.go:245 +[It] should check if Kubernetes control plane services is included in cluster-info [Conformance] + test/e2e/framework/framework.go:652 +STEP: validating cluster-info +Sep 7 07:49:36.052: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1156948534 --namespace=kubectl-8148 cluster-info' +Sep 7 07:49:36.322: INFO: stderr: "" +Sep 7 07:49:36.322: INFO: stdout: "\x1b[0;32mKubernetes control plane\x1b[0m is running at \x1b[0;33mhttps://10.68.0.1:443\x1b[0m\n\nTo further debug 
and diagnose cluster problems, use 'kubectl cluster-info dump'.\n" +[AfterEach] [sig-cli] Kubectl client + test/e2e/framework/framework.go:188 +Sep 7 07:49:36.322: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "kubectl-8148" for this suite. +•{"msg":"PASSED [sig-cli] Kubectl client Kubectl cluster-info should check if Kubernetes control plane services is included in cluster-info [Conformance]","total":356,"completed":40,"skipped":683,"failed":0} +SSSSSSSSS +------------------------------ +[sig-storage] ConfigMap + should be consumable from pods in volume as non-root [NodeConformance] [Conformance] + test/e2e/framework/framework.go:652 +[BeforeEach] [sig-storage] ConfigMap + test/e2e/framework/framework.go:187 +STEP: Creating a kubernetes client +Sep 7 07:49:36.331: INFO: >>> kubeConfig: /tmp/kubeconfig-1156948534 +STEP: Building a namespace api object, basename configmap +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance] + test/e2e/framework/framework.go:652 +STEP: Creating configMap with name configmap-test-volume-879c3687-a10d-4f87-b891-85953457a77c +STEP: Creating a pod to test consume configMaps +Sep 7 07:49:36.384: INFO: Waiting up to 5m0s for pod "pod-configmaps-8d0dc72d-0d40-4893-8889-648898cb2bb9" in namespace "configmap-3567" to be "Succeeded or Failed" +Sep 7 07:49:36.389: INFO: Pod "pod-configmaps-8d0dc72d-0d40-4893-8889-648898cb2bb9": Phase="Pending", Reason="", readiness=false. Elapsed: 5.0749ms +Sep 7 07:49:38.394: INFO: Pod "pod-configmaps-8d0dc72d-0d40-4893-8889-648898cb2bb9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.010038214s +Sep 7 07:49:40.410: INFO: Pod "pod-configmaps-8d0dc72d-0d40-4893-8889-648898cb2bb9": Phase="Pending", Reason="", readiness=false. 
Elapsed: 4.025971653s +Sep 7 07:49:42.423: INFO: Pod "pod-configmaps-8d0dc72d-0d40-4893-8889-648898cb2bb9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.039043195s +STEP: Saw pod success +Sep 7 07:49:42.423: INFO: Pod "pod-configmaps-8d0dc72d-0d40-4893-8889-648898cb2bb9" satisfied condition "Succeeded or Failed" +Sep 7 07:49:42.431: INFO: Trying to get logs from node 172.31.51.96 pod pod-configmaps-8d0dc72d-0d40-4893-8889-648898cb2bb9 container agnhost-container: +STEP: delete the pod +Sep 7 07:49:42.464: INFO: Waiting for pod pod-configmaps-8d0dc72d-0d40-4893-8889-648898cb2bb9 to disappear +Sep 7 07:49:42.505: INFO: Pod pod-configmaps-8d0dc72d-0d40-4893-8889-648898cb2bb9 no longer exists +[AfterEach] [sig-storage] ConfigMap + test/e2e/framework/framework.go:188 +Sep 7 07:49:42.505: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "configmap-3567" for this suite. + +• [SLOW TEST:6.207 seconds] +[sig-storage] ConfigMap +test/e2e/common/storage/framework.go:23 + should be consumable from pods in volume as non-root [NodeConformance] [Conformance] + test/e2e/framework/framework.go:652 +------------------------------ +{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]","total":356,"completed":41,"skipped":692,"failed":0} +S +------------------------------ +[sig-storage] EmptyDir wrapper volumes + should not cause race condition when used for configmaps [Serial] [Conformance] + test/e2e/framework/framework.go:652 +[BeforeEach] [sig-storage] EmptyDir wrapper volumes + test/e2e/framework/framework.go:187 +STEP: Creating a kubernetes client +Sep 7 07:49:42.538: INFO: >>> kubeConfig: /tmp/kubeconfig-1156948534 +STEP: Building a namespace api object, basename emptydir-wrapper +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[It] should not cause race condition 
when used for configmaps [Serial] [Conformance] + test/e2e/framework/framework.go:652 +STEP: Creating 50 configmaps +STEP: Creating RC which spawns configmap-volume pods +Sep 7 07:49:42.943: INFO: Pod name wrapped-volume-race-9f49de96-5269-4e64-a6aa-cc059c43ae42: Found 1 pods out of 5 +Sep 7 07:49:47.954: INFO: Pod name wrapped-volume-race-9f49de96-5269-4e64-a6aa-cc059c43ae42: Found 5 pods out of 5 +STEP: Ensuring each pod is running +STEP: deleting ReplicationController wrapped-volume-race-9f49de96-5269-4e64-a6aa-cc059c43ae42 in namespace emptydir-wrapper-8890, will wait for the garbage collector to delete the pods +Sep 7 07:49:48.031: INFO: Deleting ReplicationController wrapped-volume-race-9f49de96-5269-4e64-a6aa-cc059c43ae42 took: 11.205613ms +Sep 7 07:49:48.133: INFO: Terminating ReplicationController wrapped-volume-race-9f49de96-5269-4e64-a6aa-cc059c43ae42 pods took: 101.582814ms +STEP: Creating RC which spawns configmap-volume pods +Sep 7 07:49:50.805: INFO: Pod name wrapped-volume-race-bde52cba-23ea-4b08-929f-c3a88d4ae653: Found 0 pods out of 5 +Sep 7 07:49:55.858: INFO: Pod name wrapped-volume-race-bde52cba-23ea-4b08-929f-c3a88d4ae653: Found 5 pods out of 5 +STEP: Ensuring each pod is running +STEP: deleting ReplicationController wrapped-volume-race-bde52cba-23ea-4b08-929f-c3a88d4ae653 in namespace emptydir-wrapper-8890, will wait for the garbage collector to delete the pods +Sep 7 07:49:57.951: INFO: Deleting ReplicationController wrapped-volume-race-bde52cba-23ea-4b08-929f-c3a88d4ae653 took: 7.624712ms +Sep 7 07:49:58.152: INFO: Terminating ReplicationController wrapped-volume-race-bde52cba-23ea-4b08-929f-c3a88d4ae653 pods took: 200.990645ms +STEP: Creating RC which spawns configmap-volume pods +Sep 7 07:50:01.485: INFO: Pod name wrapped-volume-race-e0e65cc2-4a9e-48a6-823b-7a444b8a1466: Found 0 pods out of 5 +Sep 7 07:50:06.502: INFO: Pod name wrapped-volume-race-e0e65cc2-4a9e-48a6-823b-7a444b8a1466: Found 5 pods out of 5 +STEP: Ensuring each pod is 
running +STEP: deleting ReplicationController wrapped-volume-race-e0e65cc2-4a9e-48a6-823b-7a444b8a1466 in namespace emptydir-wrapper-8890, will wait for the garbage collector to delete the pods +Sep 7 07:50:06.585: INFO: Deleting ReplicationController wrapped-volume-race-e0e65cc2-4a9e-48a6-823b-7a444b8a1466 took: 14.339279ms +Sep 7 07:50:06.686: INFO: Terminating ReplicationController wrapped-volume-race-e0e65cc2-4a9e-48a6-823b-7a444b8a1466 pods took: 100.672575ms +STEP: Cleaning up the configMaps +[AfterEach] [sig-storage] EmptyDir wrapper volumes + test/e2e/framework/framework.go:188 +Sep 7 07:50:10.379: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "emptydir-wrapper-8890" for this suite. + +• [SLOW TEST:27.856 seconds] +[sig-storage] EmptyDir wrapper volumes +test/e2e/storage/utils/framework.go:23 + should not cause race condition when used for configmaps [Serial] [Conformance] + test/e2e/framework/framework.go:652 +------------------------------ +{"msg":"PASSED [sig-storage] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Conformance]","total":356,"completed":42,"skipped":693,"failed":0} +SSSSSSSSSSSSSSSS +------------------------------ +[sig-storage] Downward API volume + should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] + test/e2e/framework/framework.go:652 +[BeforeEach] [sig-storage] Downward API volume + test/e2e/framework/framework.go:187 +STEP: Creating a kubernetes client +Sep 7 07:50:10.395: INFO: >>> kubeConfig: /tmp/kubeconfig-1156948534 +STEP: Building a namespace api object, basename downward-api +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[BeforeEach] [sig-storage] Downward API volume + test/e2e/common/storage/downwardapi_volume.go:43 +[It] should provide node allocatable (cpu) as default cpu limit if 
the limit is not set [NodeConformance] [Conformance] + test/e2e/framework/framework.go:652 +STEP: Creating a pod to test downward API volume plugin +Sep 7 07:50:10.473: INFO: Waiting up to 5m0s for pod "downwardapi-volume-6ba662e7-3686-4441-a89d-087bcae9530b" in namespace "downward-api-5940" to be "Succeeded or Failed" +Sep 7 07:50:10.480: INFO: Pod "downwardapi-volume-6ba662e7-3686-4441-a89d-087bcae9530b": Phase="Pending", Reason="", readiness=false. Elapsed: 6.652859ms +Sep 7 07:50:12.536: INFO: Pod "downwardapi-volume-6ba662e7-3686-4441-a89d-087bcae9530b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.063054742s +Sep 7 07:50:14.564: INFO: Pod "downwardapi-volume-6ba662e7-3686-4441-a89d-087bcae9530b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.09070162s +Sep 7 07:50:16.571: INFO: Pod "downwardapi-volume-6ba662e7-3686-4441-a89d-087bcae9530b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.098024874s +STEP: Saw pod success +Sep 7 07:50:16.571: INFO: Pod "downwardapi-volume-6ba662e7-3686-4441-a89d-087bcae9530b" satisfied condition "Succeeded or Failed" +Sep 7 07:50:16.574: INFO: Trying to get logs from node 172.31.51.96 pod downwardapi-volume-6ba662e7-3686-4441-a89d-087bcae9530b container client-container: +STEP: delete the pod +Sep 7 07:50:16.595: INFO: Waiting for pod downwardapi-volume-6ba662e7-3686-4441-a89d-087bcae9530b to disappear +Sep 7 07:50:16.601: INFO: Pod downwardapi-volume-6ba662e7-3686-4441-a89d-087bcae9530b no longer exists +[AfterEach] [sig-storage] Downward API volume + test/e2e/framework/framework.go:188 +Sep 7 07:50:16.601: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "downward-api-5940" for this suite. 
+ +• [SLOW TEST:6.225 seconds] +[sig-storage] Downward API volume +test/e2e/common/storage/framework.go:23 + should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] + test/e2e/framework/framework.go:652 +------------------------------ +{"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":356,"completed":43,"skipped":709,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-network] DNS + should support configurable pod DNS nameservers [Conformance] + test/e2e/framework/framework.go:652 +[BeforeEach] [sig-network] DNS + test/e2e/framework/framework.go:187 +STEP: Creating a kubernetes client +Sep 7 07:50:16.620: INFO: >>> kubeConfig: /tmp/kubeconfig-1156948534 +STEP: Building a namespace api object, basename dns +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[It] should support configurable pod DNS nameservers [Conformance] + test/e2e/framework/framework.go:652 +STEP: Creating a pod with dnsPolicy=None and customized dnsConfig... 
+Sep 7 07:50:16.681: INFO: Created pod &Pod{ObjectMeta:{test-dns-nameservers dns-301 a7667146-5a7c-4728-825a-c6ddd100d5f9 6930 0 2022-09-07 07:50:16 +0000 UTC map[] map[] [] [] [{e2e.test Update v1 2022-09-07 07:50:16 +0000 UTC FieldsV1 {"f:spec":{"f:containers":{"k:{\"name\":\"agnhost-container\"}":{".":{},"f:args":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsConfig":{".":{},"f:nameservers":{},"f:searches":{}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} }]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-qkh65,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:agnhost-container,Im
age:k8s.gcr.io/e2e-test-images/agnhost:2.39,Command:[],Args:[pause],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-qkh65,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:None,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:&PodDNSConfig{Nameservers:[1.1.1.1],Searches:[resolv.conf.local],Options:[]PodDNSConfigOption{},},ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},Run
timeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} +Sep 7 07:50:16.691: INFO: The status of Pod test-dns-nameservers is Pending, waiting for it to be Running (with Ready = true) +Sep 7 07:50:18.704: INFO: The status of Pod test-dns-nameservers is Running (Ready = true) +STEP: Verifying customized DNS suffix list is configured on pod... +Sep 7 07:50:18.704: INFO: ExecWithOptions {Command:[/agnhost dns-suffix] Namespace:dns-301 PodName:test-dns-nameservers ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} +Sep 7 07:50:18.704: INFO: >>> kubeConfig: /tmp/kubeconfig-1156948534 +Sep 7 07:50:18.705: INFO: ExecWithOptions: Clientset creation +Sep 7 07:50:18.705: INFO: ExecWithOptions: execute(POST https://10.68.0.1:443/api/v1/namespaces/dns-301/pods/test-dns-nameservers/exec?command=%2Fagnhost&command=dns-suffix&container=agnhost-container&container=agnhost-container&stderr=true&stdout=true) +STEP: Verifying customized DNS server is configured on pod... 
+Sep 7 07:50:18.862: INFO: ExecWithOptions {Command:[/agnhost dns-server-list] Namespace:dns-301 PodName:test-dns-nameservers ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} +Sep 7 07:50:18.862: INFO: >>> kubeConfig: /tmp/kubeconfig-1156948534 +Sep 7 07:50:18.863: INFO: ExecWithOptions: Clientset creation +Sep 7 07:50:18.863: INFO: ExecWithOptions: execute(POST https://10.68.0.1:443/api/v1/namespaces/dns-301/pods/test-dns-nameservers/exec?command=%2Fagnhost&command=dns-server-list&container=agnhost-container&container=agnhost-container&stderr=true&stdout=true) +Sep 7 07:50:18.998: INFO: Deleting pod test-dns-nameservers... +[AfterEach] [sig-network] DNS + test/e2e/framework/framework.go:188 +Sep 7 07:50:19.013: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "dns-301" for this suite. +•{"msg":"PASSED [sig-network] DNS should support configurable pod DNS nameservers [Conformance]","total":356,"completed":44,"skipped":734,"failed":0} +SSSSSSSS +------------------------------ +[sig-api-machinery] ResourceQuota + should verify ResourceQuota with terminating scopes. [Conformance] + test/e2e/framework/framework.go:652 +[BeforeEach] [sig-api-machinery] ResourceQuota + test/e2e/framework/framework.go:187 +STEP: Creating a kubernetes client +Sep 7 07:50:19.024: INFO: >>> kubeConfig: /tmp/kubeconfig-1156948534 +STEP: Building a namespace api object, basename resourcequota +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[It] should verify ResourceQuota with terminating scopes. 
[Conformance] + test/e2e/framework/framework.go:652 +STEP: Creating a ResourceQuota with terminating scope +STEP: Ensuring ResourceQuota status is calculated +STEP: Creating a ResourceQuota with not terminating scope +STEP: Ensuring ResourceQuota status is calculated +STEP: Creating a long running pod +STEP: Ensuring resource quota with not terminating scope captures the pod usage +STEP: Ensuring resource quota with terminating scope ignored the pod usage +STEP: Deleting the pod +STEP: Ensuring resource quota status released the pod usage +STEP: Creating a terminating pod +STEP: Ensuring resource quota with terminating scope captures the pod usage +STEP: Ensuring resource quota with not terminating scope ignored the pod usage +STEP: Deleting the pod +STEP: Ensuring resource quota status released the pod usage +[AfterEach] [sig-api-machinery] ResourceQuota + test/e2e/framework/framework.go:188 +Sep 7 07:50:35.267: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "resourcequota-8893" for this suite. + +• [SLOW TEST:16.258 seconds] +[sig-api-machinery] ResourceQuota +test/e2e/apimachinery/framework.go:23 + should verify ResourceQuota with terminating scopes. [Conformance] + test/e2e/framework/framework.go:652 +------------------------------ +{"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with terminating scopes. 
[Conformance]","total":356,"completed":45,"skipped":742,"failed":0} +[sig-apps] Deployment + RollingUpdateDeployment should delete old pods and create new ones [Conformance] + test/e2e/framework/framework.go:652 +[BeforeEach] [sig-apps] Deployment + test/e2e/framework/framework.go:187 +STEP: Creating a kubernetes client +Sep 7 07:50:35.282: INFO: >>> kubeConfig: /tmp/kubeconfig-1156948534 +STEP: Building a namespace api object, basename deployment +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[BeforeEach] [sig-apps] Deployment + test/e2e/apps/deployment.go:91 +[It] RollingUpdateDeployment should delete old pods and create new ones [Conformance] + test/e2e/framework/framework.go:652 +Sep 7 07:50:35.329: INFO: Creating replica set "test-rolling-update-controller" (going to be adopted) +Sep 7 07:50:35.341: INFO: Pod name sample-pod: Found 0 pods out of 1 +Sep 7 07:50:40.370: INFO: Pod name sample-pod: Found 1 pods out of 1 +STEP: ensuring each pod is running +Sep 7 07:50:40.370: INFO: Creating deployment "test-rolling-update-deployment" +Sep 7 07:50:40.376: INFO: Ensuring deployment "test-rolling-update-deployment" gets the next revision from the one the adopted replica set "test-rolling-update-controller" has +Sep 7 07:50:40.383: INFO: new replicaset for deployment "test-rolling-update-deployment" is yet to be created +Sep 7 07:50:42.417: INFO: Ensuring status for deployment "test-rolling-update-deployment" is the expected +Sep 7 07:50:42.427: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:time.Date(2022, time.September, 7, 7, 50, 40, 0, time.Local), LastTransitionTime:time.Date(2022, time.September, 7, 7, 50, 40, 0, time.Local), 
Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2022, time.September, 7, 7, 50, 40, 0, time.Local), LastTransitionTime:time.Date(2022, time.September, 7, 7, 50, 40, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-67c8f74c6c\" is progressing."}}, CollisionCount:(*int32)(nil)} +Sep 7 07:50:44.441: INFO: Ensuring deployment "test-rolling-update-deployment" has one old replica set (the one it adopted) +[AfterEach] [sig-apps] Deployment + test/e2e/apps/deployment.go:84 +Sep 7 07:50:44.450: INFO: Deployment "test-rolling-update-deployment": +&Deployment{ObjectMeta:{test-rolling-update-deployment deployment-4324 4a4805fd-de1c-4f18-b3d6-d4c60a1e13f5 7093 1 2022-09-07 07:50:40 +0000 UTC map[name:sample-pod] map[deployment.kubernetes.io/revision:3546343826724305833] [] [] [{e2e.test Update apps/v1 2022-09-07 07:50:40 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:progressDeadlineSeconds":{},"f:replicas":{},"f:revisionHistoryLimit":{},"f:selector":{},"f:strategy":{"f:rollingUpdate":{".":{},"f:maxSurge":{},"f:maxUnavailable":{}},"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} } {kube-controller-manager Update apps/v1 2022-09-07 07:50:42 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}}},"f:status":{"f:availableReplicas":{},"f:conditions":{".":{},"k:{\"type\":\"Available\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Progressing\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{},"f:updatedReplicas":{}}} status}]},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod] map[] [] [] []} {[] [] [{agnhost k8s.gcr.io/e2e-test-images/agnhost:2.39 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc0004cd6f8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] nil}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%!,(MISSING)MaxSurge:25%!,(MISSING)},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2022-09-07 
07:50:40 +0000 UTC,LastTransitionTime:2022-09-07 07:50:40 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-rolling-update-deployment-67c8f74c6c" has successfully progressed.,LastUpdateTime:2022-09-07 07:50:42 +0000 UTC,LastTransitionTime:2022-09-07 07:50:40 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},} + +Sep 7 07:50:44.453: INFO: New ReplicaSet "test-rolling-update-deployment-67c8f74c6c" of Deployment "test-rolling-update-deployment": +&ReplicaSet{ObjectMeta:{test-rolling-update-deployment-67c8f74c6c deployment-4324 d32cf7f6-4152-41da-8aa2-edf36131c1b2 7082 1 2022-09-07 07:50:40 +0000 UTC map[name:sample-pod pod-template-hash:67c8f74c6c] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:3546343826724305833] [{apps/v1 Deployment test-rolling-update-deployment 4a4805fd-de1c-4f18-b3d6-d4c60a1e13f5 0xc0004cdf87 0xc0004cdf88}] [] [{kube-controller-manager Update apps/v1 2022-09-07 07:50:40 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"4a4805fd-de1c-4f18-b3d6-d4c60a1e13f5\"}":{}}},"f:spec":{"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} } {kube-controller-manager Update apps/v1 2022-09-07 07:50:42 +0000 UTC FieldsV1 
{"f:status":{"f:availableReplicas":{},"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{}}} status}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod-template-hash: 67c8f74c6c,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod pod-template-hash:67c8f74c6c] map[] [] [] []} {[] [] [{agnhost k8s.gcr.io/e2e-test-images/agnhost:2.39 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc00317e0b8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] nil}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} +Sep 7 07:50:44.454: INFO: All old ReplicaSets of Deployment "test-rolling-update-deployment": +Sep 7 07:50:44.454: INFO: &ReplicaSet{ObjectMeta:{test-rolling-update-controller deployment-4324 0747b762-466e-4bf1-8abf-8ad816dcae5e 7092 2 2022-09-07 07:50:35 +0000 UTC map[name:sample-pod pod:httpd] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:3546343826724305832] [{apps/v1 Deployment test-rolling-update-deployment 4a4805fd-de1c-4f18-b3d6-d4c60a1e13f5 0xc0004cdccf 0xc0004cdce0}] [] [{e2e.test Update apps/v1 2022-09-07 07:50:35 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod":{}}},"f:spec":{"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} } {kube-controller-manager Update apps/v1 2022-09-07 07:50:42 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"4a4805fd-de1c-4f18-b3d6-d4c60a1e13f5\"}":{}}},"f:spec":{"f:replicas":{}}} } {kube-controller-manager Update apps/v1 2022-09-07 07:50:42 +0000 UTC FieldsV1 {"f:status":{"f:observedGeneration":{},"f:replicas":{}}} status}]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod pod:httpd] map[] [] [] []} {[] [] [{httpd k8s.gcr.io/e2e-test-images/httpd:2.4.38-2 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc0004cdda8 ClusterFirst map[] false false false PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] nil}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} +Sep 7 07:50:44.457: INFO: Pod 
"test-rolling-update-deployment-67c8f74c6c-9fkdf" is available: +&Pod{ObjectMeta:{test-rolling-update-deployment-67c8f74c6c-9fkdf test-rolling-update-deployment-67c8f74c6c- deployment-4324 c1cd8c0d-c1db-403b-858a-a392b092d4d3 7081 0 2022-09-07 07:50:40 +0000 UTC map[name:sample-pod pod-template-hash:67c8f74c6c] map[] [{apps/v1 ReplicaSet test-rolling-update-deployment-67c8f74c6c d32cf7f6-4152-41da-8aa2-edf36131c1b2 0xc0002d4727 0xc0002d4728}] [] [{kube-controller-manager Update v1 2022-09-07 07:50:40 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d32cf7f6-4152-41da-8aa2-edf36131c1b2\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2022-09-07 07:50:42 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"172.20.75.53\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} 
status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-kc5sq,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:k8s.gcr.io/e2e-test-images/agnhost:2.39,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-kc5sq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:
nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:172.31.51.96,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-09-07 07:50:40 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-09-07 07:50:42 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-09-07 07:50:42 +0000 
UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-09-07 07:50:40 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.31.51.96,PodIP:172.20.75.53,StartTime:2022-09-07 07:50:40 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2022-09-07 07:50:41 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/agnhost:2.39,ImageID:k8s.gcr.io/e2e-test-images/agnhost@sha256:7e8bdd271312fd25fc5ff5a8f04727be84044eb3d7d8d03611972a6752e2e11e,ContainerID:containerd://2b0b88dd20afbebff136faec957475184d90de4ff068e202323970cdc51da08d,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:172.20.75.53,},},EphemeralContainerStatuses:[]ContainerStatus{},},} +[AfterEach] [sig-apps] Deployment + test/e2e/framework/framework.go:188 +Sep 7 07:50:44.457: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "deployment-4324" for this suite. 
+ +• [SLOW TEST:9.199 seconds] +[sig-apps] Deployment +test/e2e/apps/framework.go:23 + RollingUpdateDeployment should delete old pods and create new ones [Conformance] + test/e2e/framework/framework.go:652 +------------------------------ +{"msg":"PASSED [sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance]","total":356,"completed":46,"skipped":742,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-node] Container Lifecycle Hook when create a pod with lifecycle hook + should execute poststart exec hook properly [NodeConformance] [Conformance] + test/e2e/framework/framework.go:652 +[BeforeEach] [sig-node] Container Lifecycle Hook + test/e2e/framework/framework.go:187 +STEP: Creating a kubernetes client +Sep 7 07:50:44.482: INFO: >>> kubeConfig: /tmp/kubeconfig-1156948534 +STEP: Building a namespace api object, basename container-lifecycle-hook +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[BeforeEach] when create a pod with lifecycle hook + test/e2e/common/node/lifecycle_hook.go:55 +STEP: create the container to handle the HTTPGet hook request. 
+Sep 7 07:50:44.544: INFO: The status of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true)
+Sep 7 07:50:46.551: INFO: The status of Pod pod-handle-http-request is Running (Ready = true)
+[It] should execute poststart exec hook properly [NodeConformance] [Conformance]
+  test/e2e/framework/framework.go:652
+STEP: create the pod with lifecycle hook
+Sep 7 07:50:46.608: INFO: The status of Pod pod-with-poststart-exec-hook is Pending, waiting for it to be Running (with Ready = true)
+Sep 7 07:50:48.618: INFO: The status of Pod pod-with-poststart-exec-hook is Running (Ready = true)
+STEP: check poststart hook
+STEP: delete the pod with lifecycle hook
+Sep 7 07:50:48.641: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
+Sep 7 07:50:48.645: INFO: Pod pod-with-poststart-exec-hook still exists
+Sep 7 07:50:50.646: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
+Sep 7 07:50:50.663: INFO: Pod pod-with-poststart-exec-hook still exists
+Sep 7 07:50:52.645: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
+Sep 7 07:50:52.657: INFO: Pod pod-with-poststart-exec-hook no longer exists
+[AfterEach] [sig-node] Container Lifecycle Hook
+  test/e2e/framework/framework.go:188
+Sep 7 07:50:52.657: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "container-lifecycle-hook-2985" for this suite.
+ +• [SLOW TEST:8.185 seconds] +[sig-node] Container Lifecycle Hook +test/e2e/common/node/framework.go:23 + when create a pod with lifecycle hook + test/e2e/common/node/lifecycle_hook.go:46 + should execute poststart exec hook properly [NodeConformance] [Conformance] + test/e2e/framework/framework.go:652 +------------------------------ +{"msg":"PASSED [sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance]","total":356,"completed":47,"skipped":784,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-network] Services + should delete a collection of services [Conformance] + test/e2e/framework/framework.go:652 +[BeforeEach] [sig-network] Services + test/e2e/framework/framework.go:187 +STEP: Creating a kubernetes client +Sep 7 07:50:52.667: INFO: >>> kubeConfig: /tmp/kubeconfig-1156948534 +STEP: Building a namespace api object, basename services +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[BeforeEach] [sig-network] Services + test/e2e/network/service.go:758 +[It] should delete a collection of services [Conformance] + test/e2e/framework/framework.go:652 +STEP: creating a collection of services +Sep 7 07:50:52.704: INFO: Creating e2e-svc-a-fd6gt +Sep 7 07:50:52.710: INFO: Creating e2e-svc-b-8m74f +Sep 7 07:50:52.723: INFO: Creating e2e-svc-c-w9j7l +STEP: deleting service collection +Sep 7 07:50:52.765: INFO: Collection of services has been deleted +[AfterEach] [sig-network] Services + test/e2e/framework/framework.go:188 +Sep 7 07:50:52.765: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "services-6472" for this suite. 
+[AfterEach] [sig-network] Services
+  test/e2e/network/service.go:762
+•{"msg":"PASSED [sig-network] Services should delete a collection of services [Conformance]","total":356,"completed":48,"skipped":833,"failed":0}
+SSSSSSSSSSSSS
+------------------------------
+[sig-storage] Secrets
+  should be immutable if `immutable` field is set [Conformance]
+  test/e2e/framework/framework.go:652
+[BeforeEach] [sig-storage] Secrets
+  test/e2e/framework/framework.go:187
+STEP: Creating a kubernetes client
+Sep 7 07:50:52.772: INFO: >>> kubeConfig: /tmp/kubeconfig-1156948534
+STEP: Building a namespace api object, basename secrets
+STEP: Waiting for a default service account to be provisioned in namespace
+STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
+[It] should be immutable if `immutable` field is set [Conformance]
+  test/e2e/framework/framework.go:652
+[AfterEach] [sig-storage] Secrets
+  test/e2e/framework/framework.go:188
+Sep 7 07:50:52.992: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "secrets-5357" for this suite.
+•{"msg":"PASSED [sig-storage] Secrets should be immutable if `immutable` field is set [Conformance]","total":356,"completed":49,"skipped":846,"failed":0}
+SSSSSS
+------------------------------
+[sig-network] Services
+  should find a service from listing all namespaces [Conformance]
+  test/e2e/framework/framework.go:652
+[BeforeEach] [sig-network] Services
+  test/e2e/framework/framework.go:187
+STEP: Creating a kubernetes client
+Sep 7 07:50:53.027: INFO: >>> kubeConfig: /tmp/kubeconfig-1156948534
+STEP: Building a namespace api object, basename services
+STEP: Waiting for a default service account to be provisioned in namespace
+STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
+[BeforeEach] [sig-network] Services
+  test/e2e/network/service.go:758
+[It] should find a service from listing all namespaces [Conformance]
+  test/e2e/framework/framework.go:652
+STEP: fetching services
+[AfterEach] [sig-network] Services
+  test/e2e/framework/framework.go:188
+Sep 7 07:50:53.123: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "services-958" for this suite.
+[AfterEach] [sig-network] Services + test/e2e/network/service.go:762 +•{"msg":"PASSED [sig-network] Services should find a service from listing all namespaces [Conformance]","total":356,"completed":50,"skipped":852,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-node] ConfigMap + should be consumable via environment variable [NodeConformance] [Conformance] + test/e2e/framework/framework.go:652 +[BeforeEach] [sig-node] ConfigMap + test/e2e/framework/framework.go:187 +STEP: Creating a kubernetes client +Sep 7 07:50:53.151: INFO: >>> kubeConfig: /tmp/kubeconfig-1156948534 +STEP: Building a namespace api object, basename configmap +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[It] should be consumable via environment variable [NodeConformance] [Conformance] + test/e2e/framework/framework.go:652 +STEP: Creating configMap configmap-314/configmap-test-e91e15e8-8948-4830-a886-cf63761ce11b +STEP: Creating a pod to test consume configMaps +Sep 7 07:50:53.275: INFO: Waiting up to 5m0s for pod "pod-configmaps-4413221d-2215-4eda-9691-f4912fb38a2b" in namespace "configmap-314" to be "Succeeded or Failed" +Sep 7 07:50:53.319: INFO: Pod "pod-configmaps-4413221d-2215-4eda-9691-f4912fb38a2b": Phase="Pending", Reason="", readiness=false. Elapsed: 44.083011ms +Sep 7 07:50:55.332: INFO: Pod "pod-configmaps-4413221d-2215-4eda-9691-f4912fb38a2b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.057146557s +Sep 7 07:50:57.355: INFO: Pod "pod-configmaps-4413221d-2215-4eda-9691-f4912fb38a2b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.080061086s +Sep 7 07:50:59.364: INFO: Pod "pod-configmaps-4413221d-2215-4eda-9691-f4912fb38a2b": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.089020002s +STEP: Saw pod success +Sep 7 07:50:59.364: INFO: Pod "pod-configmaps-4413221d-2215-4eda-9691-f4912fb38a2b" satisfied condition "Succeeded or Failed" +Sep 7 07:50:59.367: INFO: Trying to get logs from node 172.31.51.96 pod pod-configmaps-4413221d-2215-4eda-9691-f4912fb38a2b container env-test: +STEP: delete the pod +Sep 7 07:50:59.420: INFO: Waiting for pod pod-configmaps-4413221d-2215-4eda-9691-f4912fb38a2b to disappear +Sep 7 07:50:59.429: INFO: Pod pod-configmaps-4413221d-2215-4eda-9691-f4912fb38a2b no longer exists +[AfterEach] [sig-node] ConfigMap + test/e2e/framework/framework.go:188 +Sep 7 07:50:59.429: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "configmap-314" for this suite. + +• [SLOW TEST:6.291 seconds] +[sig-node] ConfigMap +test/e2e/common/node/framework.go:23 + should be consumable via environment variable [NodeConformance] [Conformance] + test/e2e/framework/framework.go:652 +------------------------------ +{"msg":"PASSED [sig-node] ConfigMap should be consumable via environment variable [NodeConformance] [Conformance]","total":356,"completed":51,"skipped":884,"failed":0} +SSSSSSSS +------------------------------ +[sig-storage] ConfigMap + should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/framework/framework.go:652 +[BeforeEach] [sig-storage] ConfigMap + test/e2e/framework/framework.go:187 +STEP: Creating a kubernetes client +Sep 7 07:50:59.443: INFO: >>> kubeConfig: /tmp/kubeconfig-1156948534 +STEP: Building a namespace api object, basename configmap +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/framework/framework.go:652 +STEP: Creating configMap with name 
configmap-test-volume-2f03f8af-25b0-4e0e-aefb-d325f17e4474 +STEP: Creating a pod to test consume configMaps +Sep 7 07:50:59.523: INFO: Waiting up to 5m0s for pod "pod-configmaps-72217af1-a350-4e69-9263-0f0760bf6944" in namespace "configmap-1884" to be "Succeeded or Failed" +Sep 7 07:50:59.541: INFO: Pod "pod-configmaps-72217af1-a350-4e69-9263-0f0760bf6944": Phase="Pending", Reason="", readiness=false. Elapsed: 17.854259ms +Sep 7 07:51:01.565: INFO: Pod "pod-configmaps-72217af1-a350-4e69-9263-0f0760bf6944": Phase="Pending", Reason="", readiness=false. Elapsed: 2.041846868s +Sep 7 07:51:03.580: INFO: Pod "pod-configmaps-72217af1-a350-4e69-9263-0f0760bf6944": Phase="Pending", Reason="", readiness=false. Elapsed: 4.056819971s +Sep 7 07:51:05.593: INFO: Pod "pod-configmaps-72217af1-a350-4e69-9263-0f0760bf6944": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.069831041s +STEP: Saw pod success +Sep 7 07:51:05.593: INFO: Pod "pod-configmaps-72217af1-a350-4e69-9263-0f0760bf6944" satisfied condition "Succeeded or Failed" +Sep 7 07:51:05.596: INFO: Trying to get logs from node 172.31.51.96 pod pod-configmaps-72217af1-a350-4e69-9263-0f0760bf6944 container agnhost-container: +STEP: delete the pod +Sep 7 07:51:05.628: INFO: Waiting for pod pod-configmaps-72217af1-a350-4e69-9263-0f0760bf6944 to disappear +Sep 7 07:51:05.631: INFO: Pod pod-configmaps-72217af1-a350-4e69-9263-0f0760bf6944 no longer exists +[AfterEach] [sig-storage] ConfigMap + test/e2e/framework/framework.go:188 +Sep 7 07:51:05.631: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "configmap-1884" for this suite. 
+ +• [SLOW TEST:6.198 seconds] +[sig-storage] ConfigMap +test/e2e/common/storage/framework.go:23 + should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/framework/framework.go:652 +------------------------------ +{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":356,"completed":52,"skipped":892,"failed":0} +SSSSS +------------------------------ +[sig-api-machinery] Watchers + should receive events on concurrent watches in same order [Conformance] + test/e2e/framework/framework.go:652 +[BeforeEach] [sig-api-machinery] Watchers + test/e2e/framework/framework.go:187 +STEP: Creating a kubernetes client +Sep 7 07:51:05.641: INFO: >>> kubeConfig: /tmp/kubeconfig-1156948534 +STEP: Building a namespace api object, basename watch +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[It] should receive events on concurrent watches in same order [Conformance] + test/e2e/framework/framework.go:652 +STEP: getting a starting resourceVersion +STEP: starting a background goroutine to produce watch events +STEP: creating watches starting from each resource version of the events produced and verifying they all receive resource versions in the same order +[AfterEach] [sig-api-machinery] Watchers + test/e2e/framework/framework.go:188 +Sep 7 07:51:08.447: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "watch-9269" for this suite. 
+•{"msg":"PASSED [sig-api-machinery] Watchers should receive events on concurrent watches in same order [Conformance]","total":356,"completed":53,"skipped":897,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-node] Kubelet when scheduling a read only busybox container + should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/framework/framework.go:652 +[BeforeEach] [sig-node] Kubelet + test/e2e/framework/framework.go:187 +STEP: Creating a kubernetes client +Sep 7 07:51:08.550: INFO: >>> kubeConfig: /tmp/kubeconfig-1156948534 +STEP: Building a namespace api object, basename kubelet-test +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[BeforeEach] [sig-node] Kubelet + test/e2e/common/node/kubelet.go:40 +[It] should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/framework/framework.go:652 +Sep 7 07:51:08.702: INFO: The status of Pod busybox-readonly-fs8091ce75-66d7-4ea2-b8ac-247235d02224 is Pending, waiting for it to be Running (with Ready = true) +Sep 7 07:51:10.716: INFO: The status of Pod busybox-readonly-fs8091ce75-66d7-4ea2-b8ac-247235d02224 is Running (Ready = true) +[AfterEach] [sig-node] Kubelet + test/e2e/framework/framework.go:188 +Sep 7 07:51:10.731: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "kubelet-test-2344" for this suite. 
+•{"msg":"PASSED [sig-node] Kubelet when scheduling a read only busybox container should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]","total":356,"completed":54,"skipped":948,"failed":0}
+SSSS
+------------------------------
+[sig-storage] Projected configMap
+  updates should be reflected in volume [NodeConformance] [Conformance]
+  test/e2e/framework/framework.go:652
+[BeforeEach] [sig-storage] Projected configMap
+  test/e2e/framework/framework.go:187
+STEP: Creating a kubernetes client
+Sep 7 07:51:10.755: INFO: >>> kubeConfig: /tmp/kubeconfig-1156948534
+STEP: Building a namespace api object, basename projected
+STEP: Waiting for a default service account to be provisioned in namespace
+STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
+[It] updates should be reflected in volume [NodeConformance] [Conformance]
+  test/e2e/framework/framework.go:652
+STEP: Creating projection with configMap that has name projected-configmap-test-upd-8041a775-572b-4710-a5b8-1fe498ccb2ca
+STEP: Creating the pod
+Sep 7 07:51:10.821: INFO: The status of Pod pod-projected-configmaps-53f7578b-1b1c-47c3-bf59-ca71d1ec484f is Pending, waiting for it to be Running (with Ready = true)
+Sep 7 07:51:12.833: INFO: The status of Pod pod-projected-configmaps-53f7578b-1b1c-47c3-bf59-ca71d1ec484f is Running (Ready = true)
+STEP: Updating configmap projected-configmap-test-upd-8041a775-572b-4710-a5b8-1fe498ccb2ca
+STEP: waiting to observe update in volume
+[AfterEach] [sig-storage] Projected configMap
+  test/e2e/framework/framework.go:188
+Sep 7 07:51:14.863: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "projected-7607" for this suite.
+•{"msg":"PASSED [sig-storage] Projected configMap updates should be reflected in volume [NodeConformance] [Conformance]","total":356,"completed":55,"skipped":952,"failed":0} +SSSSSSSS +------------------------------ +[sig-storage] Secrets + should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/framework/framework.go:652 +[BeforeEach] [sig-storage] Secrets + test/e2e/framework/framework.go:187 +STEP: Creating a kubernetes client +Sep 7 07:51:14.885: INFO: >>> kubeConfig: /tmp/kubeconfig-1156948534 +STEP: Building a namespace api object, basename secrets +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/framework/framework.go:652 +STEP: Creating secret with name secret-test-map-fe78d78d-f3ed-43c1-93ee-90292b8a27c6 +STEP: Creating a pod to test consume secrets +Sep 7 07:51:14.938: INFO: Waiting up to 5m0s for pod "pod-secrets-44ccfe3c-ed50-4502-a27e-a60b9810c35d" in namespace "secrets-8742" to be "Succeeded or Failed" +Sep 7 07:51:14.949: INFO: Pod "pod-secrets-44ccfe3c-ed50-4502-a27e-a60b9810c35d": Phase="Pending", Reason="", readiness=false. Elapsed: 11.224617ms +Sep 7 07:51:16.965: INFO: Pod "pod-secrets-44ccfe3c-ed50-4502-a27e-a60b9810c35d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.026308185s +Sep 7 07:51:18.970: INFO: Pod "pod-secrets-44ccfe3c-ed50-4502-a27e-a60b9810c35d": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.031985386s +STEP: Saw pod success +Sep 7 07:51:18.970: INFO: Pod "pod-secrets-44ccfe3c-ed50-4502-a27e-a60b9810c35d" satisfied condition "Succeeded or Failed" +Sep 7 07:51:18.973: INFO: Trying to get logs from node 172.31.51.96 pod pod-secrets-44ccfe3c-ed50-4502-a27e-a60b9810c35d container secret-volume-test: +STEP: delete the pod +Sep 7 07:51:18.988: INFO: Waiting for pod pod-secrets-44ccfe3c-ed50-4502-a27e-a60b9810c35d to disappear +Sep 7 07:51:18.995: INFO: Pod pod-secrets-44ccfe3c-ed50-4502-a27e-a60b9810c35d no longer exists +[AfterEach] [sig-storage] Secrets + test/e2e/framework/framework.go:188 +Sep 7 07:51:18.996: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "secrets-8742" for this suite. +•{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":356,"completed":56,"skipped":960,"failed":0} +SSSS +------------------------------ +[sig-api-machinery] Namespaces [Serial] + should ensure that all pods are removed when a namespace is deleted [Conformance] + test/e2e/framework/framework.go:652 +[BeforeEach] [sig-api-machinery] Namespaces [Serial] + test/e2e/framework/framework.go:187 +STEP: Creating a kubernetes client +Sep 7 07:51:19.014: INFO: >>> kubeConfig: /tmp/kubeconfig-1156948534 +STEP: Building a namespace api object, basename namespaces +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[It] should ensure that all pods are removed when a namespace is deleted [Conformance] + test/e2e/framework/framework.go:652 +STEP: Creating a test namespace +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Creating a pod in the namespace +STEP: Waiting for the pod to have running status +STEP: Deleting the namespace +STEP: Waiting for the namespace to be removed. 
+STEP: Recreating the namespace +STEP: Verifying there are no pods in the namespace +[AfterEach] [sig-api-machinery] Namespaces [Serial] + test/e2e/framework/framework.go:188 +Sep 7 07:51:32.151: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "namespaces-7185" for this suite. +STEP: Destroying namespace "nsdeletetest-8817" for this suite. +Sep 7 07:51:32.159: INFO: Namespace nsdeletetest-8817 was already deleted +STEP: Destroying namespace "nsdeletetest-1322" for this suite. + +• [SLOW TEST:13.156 seconds] +[sig-api-machinery] Namespaces [Serial] +test/e2e/apimachinery/framework.go:23 + should ensure that all pods are removed when a namespace is deleted [Conformance] + test/e2e/framework/framework.go:652 +------------------------------ +{"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted [Conformance]","total":356,"completed":57,"skipped":964,"failed":0} +SSSSSSSSS +------------------------------ +[sig-storage] ConfigMap + should be immutable if `immutable` field is set [Conformance] + test/e2e/framework/framework.go:652 +[BeforeEach] [sig-storage] ConfigMap + test/e2e/framework/framework.go:187 +STEP: Creating a kubernetes client +Sep 7 07:51:32.170: INFO: >>> kubeConfig: /tmp/kubeconfig-1156948534 +STEP: Building a namespace api object, basename configmap +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[It] should be immutable if `immutable` field is set [Conformance] + test/e2e/framework/framework.go:652 +[AfterEach] [sig-storage] ConfigMap + test/e2e/framework/framework.go:188 +Sep 7 07:51:32.235: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "configmap-8812" for this suite. 
+•{"msg":"PASSED [sig-storage] ConfigMap should be immutable if `immutable` field is set [Conformance]","total":356,"completed":58,"skipped":973,"failed":0} +SSSSSSSSS +------------------------------ +[sig-network] Networking Granular Checks: Pods + should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/framework/framework.go:652 +[BeforeEach] [sig-network] Networking + test/e2e/framework/framework.go:187 +STEP: Creating a kubernetes client +Sep 7 07:51:32.242: INFO: >>> kubeConfig: /tmp/kubeconfig-1156948534 +STEP: Building a namespace api object, basename pod-network-test +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[It] should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/framework/framework.go:652 +STEP: Performing setup for networking test in namespace pod-network-test-8807 +STEP: creating a selector +STEP: Creating the service pods in kubernetes +Sep 7 07:51:32.273: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable +Sep 7 07:51:32.319: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) +Sep 7 07:51:34.329: INFO: The status of Pod netserver-0 is Running (Ready = false) +Sep 7 07:51:36.325: INFO: The status of Pod netserver-0 is Running (Ready = false) +Sep 7 07:51:38.337: INFO: The status of Pod netserver-0 is Running (Ready = false) +Sep 7 07:51:40.328: INFO: The status of Pod netserver-0 is Running (Ready = false) +Sep 7 07:51:42.326: INFO: The status of Pod netserver-0 is Running (Ready = false) +Sep 7 07:51:44.343: INFO: The status of Pod netserver-0 is Running (Ready = false) +Sep 7 07:51:46.330: INFO: The status of Pod netserver-0 is Running (Ready = false) +Sep 7 07:51:48.333: INFO: The status of Pod netserver-0 is Running (Ready = false) +Sep 7 07:51:50.331: INFO: The status of Pod netserver-0 is 
Running (Ready = false) +Sep 7 07:51:52.330: INFO: The status of Pod netserver-0 is Running (Ready = false) +Sep 7 07:51:54.331: INFO: The status of Pod netserver-0 is Running (Ready = true) +Sep 7 07:51:54.336: INFO: The status of Pod netserver-1 is Running (Ready = true) +STEP: Creating test pods +Sep 7 07:51:56.386: INFO: Setting MaxTries for pod polling to 34 for networking test based on endpoint count 2 +Sep 7 07:51:56.386: INFO: Going to poll 172.20.75.61 on port 8081 at least 0 times, with a maximum of 34 tries before failing +Sep 7 07:51:56.390: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 172.20.75.61 8081 | grep -v '^\s*$'] Namespace:pod-network-test-8807 PodName:host-test-container-pod ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} +Sep 7 07:51:56.390: INFO: >>> kubeConfig: /tmp/kubeconfig-1156948534 +Sep 7 07:51:56.391: INFO: ExecWithOptions: Clientset creation +Sep 7 07:51:56.391: INFO: ExecWithOptions: execute(POST https://10.68.0.1:443/api/v1/namespaces/pod-network-test-8807/pods/host-test-container-pod/exec?command=%2Fbin%2Fsh&command=-c&command=echo+hostName+%7C+nc+-w+1+-u+172.20.75.61+8081+%7C+grep+-v+%27%5E%5Cs%2A%24%27&container=agnhost-container&container=agnhost-container&stderr=true&stdout=true) +Sep 7 07:51:57.478: INFO: Found all 1 expected endpoints: [netserver-0] +Sep 7 07:51:57.478: INFO: Going to poll 172.20.97.86 on port 8081 at least 0 times, with a maximum of 34 tries before failing +Sep 7 07:51:57.488: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 172.20.97.86 8081 | grep -v '^\s*$'] Namespace:pod-network-test-8807 PodName:host-test-container-pod ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} +Sep 7 07:51:57.488: INFO: >>> kubeConfig: /tmp/kubeconfig-1156948534 +Sep 7 07:51:57.489: INFO: ExecWithOptions: Clientset creation +Sep 7 07:51:57.489: INFO: 
ExecWithOptions: execute(POST https://10.68.0.1:443/api/v1/namespaces/pod-network-test-8807/pods/host-test-container-pod/exec?command=%2Fbin%2Fsh&command=-c&command=echo+hostName+%7C+nc+-w+1+-u+172.20.97.86+8081+%7C+grep+-v+%27%5E%5Cs%2A%24%27&container=agnhost-container&container=agnhost-container&stderr=true&stdout=true) +Sep 7 07:51:58.580: INFO: Found all 1 expected endpoints: [netserver-1] +[AfterEach] [sig-network] Networking + test/e2e/framework/framework.go:188 +Sep 7 07:51:58.580: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "pod-network-test-8807" for this suite. + +• [SLOW TEST:26.361 seconds] +[sig-network] Networking +test/e2e/common/network/framework.go:23 + Granular Checks: Pods + test/e2e/common/network/networking.go:32 + should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/framework/framework.go:652 +------------------------------ +{"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]","total":356,"completed":59,"skipped":982,"failed":0} +SS +------------------------------ +[sig-apps] Deployment + should validate Deployment Status endpoints [Conformance] + test/e2e/framework/framework.go:652 +[BeforeEach] [sig-apps] Deployment + test/e2e/framework/framework.go:187 +STEP: Creating a kubernetes client +Sep 7 07:51:58.603: INFO: >>> kubeConfig: /tmp/kubeconfig-1156948534 +STEP: Building a namespace api object, basename deployment +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[BeforeEach] [sig-apps] Deployment + test/e2e/apps/deployment.go:91 +[It] should validate Deployment Status endpoints [Conformance] + test/e2e/framework/framework.go:652 +STEP: creating a Deployment +Sep 7 07:51:58.653: INFO: Creating simple deployment test-deployment-45w7g +Sep 7 07:51:58.741: 
INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2022, time.September, 7, 7, 51, 58, 0, time.Local), LastTransitionTime:time.Date(2022, time.September, 7, 7, 51, 58, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2022, time.September, 7, 7, 51, 58, 0, time.Local), LastTransitionTime:time.Date(2022, time.September, 7, 7, 51, 58, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-deployment-45w7g-688c4d6789\" is progressing."}}, CollisionCount:(*int32)(nil)} +Sep 7 07:52:00.756: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2022, time.September, 7, 7, 51, 58, 0, time.Local), LastTransitionTime:time.Date(2022, time.September, 7, 7, 51, 58, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2022, time.September, 7, 7, 51, 58, 0, time.Local), LastTransitionTime:time.Date(2022, time.September, 7, 7, 51, 58, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-deployment-45w7g-688c4d6789\" is progressing."}}, CollisionCount:(*int32)(nil)} +STEP: Getting /status +Sep 7 07:52:02.771: INFO: Deployment test-deployment-45w7g has Conditions: [{Available True 2022-09-07 07:52:00 +0000 UTC 2022-09-07 07:52:00 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} {Progressing True 
2022-09-07 07:52:00 +0000 UTC 2022-09-07 07:51:58 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-deployment-45w7g-688c4d6789" has successfully progressed.}] +STEP: updating Deployment Status +Sep 7 07:52:02.782: INFO: updatedStatus.Conditions: []v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:time.Date(2022, time.September, 7, 7, 52, 0, 0, time.Local), LastTransitionTime:time.Date(2022, time.September, 7, 7, 52, 0, 0, time.Local), Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2022, time.September, 7, 7, 52, 0, 0, time.Local), LastTransitionTime:time.Date(2022, time.September, 7, 7, 51, 58, 0, time.Local), Reason:"NewReplicaSetAvailable", Message:"ReplicaSet \"test-deployment-45w7g-688c4d6789\" has successfully progressed."}, v1.DeploymentCondition{Type:"StatusUpdate", Status:"True", LastUpdateTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), LastTransitionTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Reason:"E2E", Message:"Set from e2e test"}} +STEP: watching for the Deployment status to be updated +Sep 7 07:52:02.787: INFO: Observed &Deployment event: ADDED +Sep 7 07:52:02.787: INFO: Observed Deployment test-deployment-45w7g in namespace deployment-7855 with annotations: map[deployment.kubernetes.io/revision:1] & Conditions: {Progressing True 2022-09-07 07:51:58 +0000 UTC 2022-09-07 07:51:58 +0000 UTC NewReplicaSetCreated Created new replica set "test-deployment-45w7g-688c4d6789"} +Sep 7 07:52:02.787: INFO: Observed &Deployment event: MODIFIED +Sep 7 07:52:02.787: INFO: Observed Deployment test-deployment-45w7g in namespace deployment-7855 with annotations: map[deployment.kubernetes.io/revision:1] & Conditions: {Progressing True 2022-09-07 07:51:58 +0000 UTC 2022-09-07 07:51:58 +0000 UTC NewReplicaSetCreated Created new replica set "test-deployment-45w7g-688c4d6789"} +Sep 7 
07:52:02.787: INFO: Observed Deployment test-deployment-45w7g in namespace deployment-7855 with annotations: map[deployment.kubernetes.io/revision:1] & Conditions: {Available False 2022-09-07 07:51:58 +0000 UTC 2022-09-07 07:51:58 +0000 UTC MinimumReplicasUnavailable Deployment does not have minimum availability.} +Sep 7 07:52:02.787: INFO: Observed &Deployment event: MODIFIED +Sep 7 07:52:02.787: INFO: Observed Deployment test-deployment-45w7g in namespace deployment-7855 with annotations: map[deployment.kubernetes.io/revision:1] & Conditions: {Available False 2022-09-07 07:51:58 +0000 UTC 2022-09-07 07:51:58 +0000 UTC MinimumReplicasUnavailable Deployment does not have minimum availability.} +Sep 7 07:52:02.787: INFO: Observed Deployment test-deployment-45w7g in namespace deployment-7855 with annotations: map[deployment.kubernetes.io/revision:1] & Conditions: {Progressing True 2022-09-07 07:51:58 +0000 UTC 2022-09-07 07:51:58 +0000 UTC ReplicaSetUpdated ReplicaSet "test-deployment-45w7g-688c4d6789" is progressing.} +Sep 7 07:52:02.787: INFO: Observed &Deployment event: MODIFIED +Sep 7 07:52:02.787: INFO: Observed Deployment test-deployment-45w7g in namespace deployment-7855 with annotations: map[deployment.kubernetes.io/revision:1] & Conditions: {Available True 2022-09-07 07:52:00 +0000 UTC 2022-09-07 07:52:00 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} +Sep 7 07:52:02.787: INFO: Observed Deployment test-deployment-45w7g in namespace deployment-7855 with annotations: map[deployment.kubernetes.io/revision:1] & Conditions: {Progressing True 2022-09-07 07:52:00 +0000 UTC 2022-09-07 07:51:58 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-deployment-45w7g-688c4d6789" has successfully progressed.} +Sep 7 07:52:02.787: INFO: Observed &Deployment event: MODIFIED +Sep 7 07:52:02.787: INFO: Observed Deployment test-deployment-45w7g in namespace deployment-7855 with annotations: map[deployment.kubernetes.io/revision:1] & Conditions: 
{Available True 2022-09-07 07:52:00 +0000 UTC 2022-09-07 07:52:00 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} +Sep 7 07:52:02.787: INFO: Observed Deployment test-deployment-45w7g in namespace deployment-7855 with annotations: map[deployment.kubernetes.io/revision:1] & Conditions: {Progressing True 2022-09-07 07:52:00 +0000 UTC 2022-09-07 07:51:58 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-deployment-45w7g-688c4d6789" has successfully progressed.} +Sep 7 07:52:02.787: INFO: Found Deployment test-deployment-45w7g in namespace deployment-7855 with labels: map[e2e:testing name:httpd] annotations: map[deployment.kubernetes.io/revision:1] & Conditions: {StatusUpdate True 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC E2E Set from e2e test} +Sep 7 07:52:02.787: INFO: Deployment test-deployment-45w7g has an updated status +STEP: patching the Statefulset Status +Sep 7 07:52:02.787: INFO: Patch payload: {"status":{"conditions":[{"type":"StatusPatched","status":"True"}]}} +Sep 7 07:52:02.798: INFO: Patched status conditions: []v1.DeploymentCondition{v1.DeploymentCondition{Type:"StatusPatched", Status:"True", LastUpdateTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), LastTransitionTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Reason:"", Message:""}} +STEP: watching for the Deployment status to be patched +Sep 7 07:52:02.810: INFO: Observed &Deployment event: ADDED +Sep 7 07:52:02.810: INFO: Observed deployment test-deployment-45w7g in namespace deployment-7855 with annotations: map[deployment.kubernetes.io/revision:1] & Conditions: {Progressing True 2022-09-07 07:51:58 +0000 UTC 2022-09-07 07:51:58 +0000 UTC NewReplicaSetCreated Created new replica set "test-deployment-45w7g-688c4d6789"} +Sep 7 07:52:02.810: INFO: Observed &Deployment event: MODIFIED +Sep 7 07:52:02.810: INFO: Observed deployment test-deployment-45w7g in namespace deployment-7855 with annotations: map[deployment.kubernetes.io/revision:1] & 
Conditions: {Progressing True 2022-09-07 07:51:58 +0000 UTC 2022-09-07 07:51:58 +0000 UTC NewReplicaSetCreated Created new replica set "test-deployment-45w7g-688c4d6789"} +Sep 7 07:52:02.810: INFO: Observed deployment test-deployment-45w7g in namespace deployment-7855 with annotations: map[deployment.kubernetes.io/revision:1] & Conditions: {Available False 2022-09-07 07:51:58 +0000 UTC 2022-09-07 07:51:58 +0000 UTC MinimumReplicasUnavailable Deployment does not have minimum availability.} +Sep 7 07:52:02.810: INFO: Observed &Deployment event: MODIFIED +Sep 7 07:52:02.810: INFO: Observed deployment test-deployment-45w7g in namespace deployment-7855 with annotations: map[deployment.kubernetes.io/revision:1] & Conditions: {Available False 2022-09-07 07:51:58 +0000 UTC 2022-09-07 07:51:58 +0000 UTC MinimumReplicasUnavailable Deployment does not have minimum availability.} +Sep 7 07:52:02.810: INFO: Observed deployment test-deployment-45w7g in namespace deployment-7855 with annotations: map[deployment.kubernetes.io/revision:1] & Conditions: {Progressing True 2022-09-07 07:51:58 +0000 UTC 2022-09-07 07:51:58 +0000 UTC ReplicaSetUpdated ReplicaSet "test-deployment-45w7g-688c4d6789" is progressing.} +Sep 7 07:52:02.812: INFO: Observed &Deployment event: MODIFIED +Sep 7 07:52:02.812: INFO: Observed deployment test-deployment-45w7g in namespace deployment-7855 with annotations: map[deployment.kubernetes.io/revision:1] & Conditions: {Available True 2022-09-07 07:52:00 +0000 UTC 2022-09-07 07:52:00 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} +Sep 7 07:52:02.812: INFO: Observed deployment test-deployment-45w7g in namespace deployment-7855 with annotations: map[deployment.kubernetes.io/revision:1] & Conditions: {Progressing True 2022-09-07 07:52:00 +0000 UTC 2022-09-07 07:51:58 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-deployment-45w7g-688c4d6789" has successfully progressed.} +Sep 7 07:52:02.813: INFO: Observed &Deployment event: MODIFIED 
+Sep 7 07:52:02.813: INFO: Observed deployment test-deployment-45w7g in namespace deployment-7855 with annotations: map[deployment.kubernetes.io/revision:1] & Conditions: {Available True 2022-09-07 07:52:00 +0000 UTC 2022-09-07 07:52:00 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} +Sep 7 07:52:02.813: INFO: Observed deployment test-deployment-45w7g in namespace deployment-7855 with annotations: map[deployment.kubernetes.io/revision:1] & Conditions: {Progressing True 2022-09-07 07:52:00 +0000 UTC 2022-09-07 07:51:58 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-deployment-45w7g-688c4d6789" has successfully progressed.} +Sep 7 07:52:02.813: INFO: Observed deployment test-deployment-45w7g in namespace deployment-7855 with annotations: map[deployment.kubernetes.io/revision:1] & Conditions: {StatusUpdate True 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC E2E Set from e2e test} +Sep 7 07:52:02.813: INFO: Observed &Deployment event: MODIFIED +Sep 7 07:52:02.813: INFO: Found deployment test-deployment-45w7g in namespace deployment-7855 with labels: map[e2e:testing name:httpd] annotations: map[deployment.kubernetes.io/revision:1] & Conditions: {StatusPatched True 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC } +Sep 7 07:52:02.813: INFO: Deployment test-deployment-45w7g has a patched status +[AfterEach] [sig-apps] Deployment + test/e2e/apps/deployment.go:84 +Sep 7 07:52:02.818: INFO: Deployment "test-deployment-45w7g": +&Deployment{ObjectMeta:{test-deployment-45w7g deployment-7855 0963a006-0d03-4db9-a84c-d65743533755 7802 1 2022-09-07 07:51:58 +0000 UTC map[e2e:testing name:httpd] map[deployment.kubernetes.io/revision:1] [] [] [{e2e.test Update apps/v1 2022-09-07 07:51:58 +0000 UTC FieldsV1 
{"f:metadata":{"f:labels":{".":{},"f:e2e":{},"f:name":{}}},"f:spec":{"f:progressDeadlineSeconds":{},"f:replicas":{},"f:revisionHistoryLimit":{},"f:selector":{},"f:strategy":{"f:rollingUpdate":{".":{},"f:maxSurge":{},"f:maxUnavailable":{}},"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:e2e":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} } {kube-controller-manager Update apps/v1 2022-09-07 07:52:00 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}}},"f:status":{"f:availableReplicas":{},"f:conditions":{},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{},"f:updatedReplicas":{}}} status} {e2e.test Update apps/v1 2022-09-07 07:52:02 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"StatusPatched\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:status":{},"f:type":{}}}}} status}]},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{e2e: testing,name: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[e2e:testing name:httpd] map[] [] [] []} {[] [] [{httpd k8s.gcr.io/e2e-test-images/httpd:2.4.38-2 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc003fb3a88 ClusterFirst map[] false false false 
&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] nil}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%!,(MISSING)MaxSurge:25%!,(MISSING)},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:StatusPatched,Status:True,Reason:,Message:,LastUpdateTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:0001-01-01 00:00:00 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},} + +Sep 7 07:52:02.827: INFO: New ReplicaSet "test-deployment-45w7g-688c4d6789" of Deployment "test-deployment-45w7g": +&ReplicaSet{ObjectMeta:{test-deployment-45w7g-688c4d6789 deployment-7855 6911d763-e23b-4f17-b252-361f1263d0b1 7796 1 2022-09-07 07:51:58 +0000 UTC map[e2e:testing name:httpd pod-template-hash:688c4d6789] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-deployment-45w7g 0963a006-0d03-4db9-a84c-d65743533755 0xc003fb3de7 0xc003fb3de8}] [] [{kube-controller-manager Update apps/v1 2022-09-07 07:51:58 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:e2e":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"0963a006-0d03-4db9-a84c-d65743533755\"}":{}}},"f:spec":{"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:e2e":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} } {kube-controller-manager Update apps/v1 2022-09-07 07:52:00 +0000 UTC FieldsV1 {"f:status":{"f:availableReplicas":{},"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{}}} status}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{e2e: testing,name: httpd,pod-template-hash: 688c4d6789,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[e2e:testing name:httpd pod-template-hash:688c4d6789] map[] [] [] []} {[] [] [{httpd k8s.gcr.io/e2e-test-images/httpd:2.4.38-2 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc003fb3e98 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil 
default-scheduler [] [] nil [] map[] [] nil}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} +Sep 7 07:52:02.842: INFO: Pod "test-deployment-45w7g-688c4d6789-djpw4" is available: +&Pod{ObjectMeta:{test-deployment-45w7g-688c4d6789-djpw4 test-deployment-45w7g-688c4d6789- deployment-7855 6dc93105-b7dd-40db-8797-b73b684950c2 7795 0 2022-09-07 07:51:58 +0000 UTC map[e2e:testing name:httpd pod-template-hash:688c4d6789] map[] [{apps/v1 ReplicaSet test-deployment-45w7g-688c4d6789 6911d763-e23b-4f17-b252-361f1263d0b1 0xc003630b27 0xc003630b28}] [] [{kube-controller-manager Update v1 2022-09-07 07:51:58 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:e2e":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"6911d763-e23b-4f17-b252-361f1263d0b1\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2022-09-07 07:52:00 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"172.20.75.63\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} 
status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-8jnhw,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-2,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-8jnhw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:
nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:172.31.51.96,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-09-07 07:51:58 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-09-07 07:52:00 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-09-07 07:52:00 +0000 
UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-09-07 07:51:58 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.31.51.96,PodIP:172.20.75.63,StartTime:2022-09-07 07:51:58 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2022-09-07 07:51:59 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-2,ImageID:k8s.gcr.io/e2e-test-images/httpd@sha256:1b9d1b2f36cb2dbee1960e82a9344aeb11bd4c4c03abf5e1853e0559c23855e3,ContainerID:containerd://d65fa4557dd4509dea75691baced2896f0520b9cefd699272d46c7e9fee93a4c,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:172.20.75.63,},},EphemeralContainerStatuses:[]ContainerStatus{},},} +[AfterEach] [sig-apps] Deployment + test/e2e/framework/framework.go:188 +Sep 7 07:52:02.842: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "deployment-7855" for this suite. 
+•{"msg":"PASSED [sig-apps] Deployment should validate Deployment Status endpoints [Conformance]","total":356,"completed":60,"skipped":984,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-cli] Kubectl client Kubectl version + should check is all data is printed [Conformance] + test/e2e/framework/framework.go:652 +[BeforeEach] [sig-cli] Kubectl client + test/e2e/framework/framework.go:187 +STEP: Creating a kubernetes client +Sep 7 07:52:02.893: INFO: >>> kubeConfig: /tmp/kubeconfig-1156948534 +STEP: Building a namespace api object, basename kubectl +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[BeforeEach] [sig-cli] Kubectl client + test/e2e/kubectl/kubectl.go:245 +[It] should check is all data is printed [Conformance] + test/e2e/framework/framework.go:652 +Sep 7 07:52:02.973: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1156948534 --namespace=kubectl-480 version' +Sep 7 07:52:03.227: INFO: stderr: "WARNING: This version information is deprecated and will be replaced with the output from kubectl version --short. 
Use --output=yaml|json to get the full version.\n" +Sep 7 07:52:03.227: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"24\", GitVersion:\"v1.24.2\", GitCommit:\"f66044f4361b9f1f96f0053dd46cb7dce5e990a8\", GitTreeState:\"clean\", BuildDate:\"2022-06-15T14:22:29Z\", GoVersion:\"go1.18.3\", Compiler:\"gc\", Platform:\"linux/amd64\"}\nKustomize Version: v4.5.4\nServer Version: version.Info{Major:\"1\", Minor:\"24\", GitVersion:\"v1.24.2\", GitCommit:\"f66044f4361b9f1f96f0053dd46cb7dce5e990a8\", GitTreeState:\"clean\", BuildDate:\"2022-06-15T14:15:38Z\", GoVersion:\"go1.18.3\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n" +[AfterEach] [sig-cli] Kubectl client + test/e2e/framework/framework.go:188 +Sep 7 07:52:03.228: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "kubectl-480" for this suite. +•{"msg":"PASSED [sig-cli] Kubectl client Kubectl version should check is all data is printed [Conformance]","total":356,"completed":61,"skipped":1016,"failed":0} +SS +------------------------------ +[sig-node] Containers + should be able to override the image's default command (container entrypoint) [NodeConformance] [Conformance] + test/e2e/framework/framework.go:652 +[BeforeEach] [sig-node] Containers + test/e2e/framework/framework.go:187 +STEP: Creating a kubernetes client +Sep 7 07:52:03.240: INFO: >>> kubeConfig: /tmp/kubeconfig-1156948534 +STEP: Building a namespace api object, basename containers +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[It] should be able to override the image's default command (container entrypoint) [NodeConformance] [Conformance] + test/e2e/framework/framework.go:652 +STEP: Creating a pod to test override command +Sep 7 07:52:03.295: INFO: Waiting up to 5m0s for pod "client-containers-ead31ef1-9f83-4c22-83f7-696fa1c79c53" in namespace "containers-8412" to be "Succeeded or Failed" +Sep 7 
07:52:03.298: INFO: Pod "client-containers-ead31ef1-9f83-4c22-83f7-696fa1c79c53": Phase="Pending", Reason="", readiness=false. Elapsed: 3.070519ms +Sep 7 07:52:05.319: INFO: Pod "client-containers-ead31ef1-9f83-4c22-83f7-696fa1c79c53": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023975876s +Sep 7 07:52:07.339: INFO: Pod "client-containers-ead31ef1-9f83-4c22-83f7-696fa1c79c53": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.043818197s +STEP: Saw pod success +Sep 7 07:52:07.339: INFO: Pod "client-containers-ead31ef1-9f83-4c22-83f7-696fa1c79c53" satisfied condition "Succeeded or Failed" +Sep 7 07:52:07.353: INFO: Trying to get logs from node 172.31.51.97 pod client-containers-ead31ef1-9f83-4c22-83f7-696fa1c79c53 container agnhost-container: +STEP: delete the pod +Sep 7 07:52:07.388: INFO: Waiting for pod client-containers-ead31ef1-9f83-4c22-83f7-696fa1c79c53 to disappear +Sep 7 07:52:07.404: INFO: Pod client-containers-ead31ef1-9f83-4c22-83f7-696fa1c79c53 no longer exists +[AfterEach] [sig-node] Containers + test/e2e/framework/framework.go:188 +Sep 7 07:52:07.404: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "containers-8412" for this suite. 
+•{"msg":"PASSED [sig-node] Containers should be able to override the image's default command (container entrypoint) [NodeConformance] [Conformance]","total":356,"completed":62,"skipped":1018,"failed":0} +SSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + works for CRD with validation schema [Conformance] + test/e2e/framework/framework.go:652 +[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + test/e2e/framework/framework.go:187 +STEP: Creating a kubernetes client +Sep 7 07:52:07.422: INFO: >>> kubeConfig: /tmp/kubeconfig-1156948534 +STEP: Building a namespace api object, basename crd-publish-openapi +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[It] works for CRD with validation schema [Conformance] + test/e2e/framework/framework.go:652 +Sep 7 07:52:07.529: INFO: >>> kubeConfig: /tmp/kubeconfig-1156948534 +STEP: kubectl validation (kubectl create and apply) allows request with known and required properties +Sep 7 07:52:11.581: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1156948534 --namespace=crd-publish-openapi-4307 --namespace=crd-publish-openapi-4307 create -f -' +Sep 7 07:52:12.797: INFO: stderr: "" +Sep 7 07:52:12.797: INFO: stdout: "e2e-test-crd-publish-openapi-2576-crd.crd-publish-openapi-test-foo.example.com/test-foo created\n" +Sep 7 07:52:12.797: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1156948534 --namespace=crd-publish-openapi-4307 --namespace=crd-publish-openapi-4307 delete e2e-test-crd-publish-openapi-2576-crds test-foo' +Sep 7 07:52:12.912: INFO: stderr: "" +Sep 7 07:52:12.912: INFO: stdout: "e2e-test-crd-publish-openapi-2576-crd.crd-publish-openapi-test-foo.example.com \"test-foo\" deleted\n" +Sep 7 07:52:12.912: INFO: Running '/usr/local/bin/kubectl 
--kubeconfig=/tmp/kubeconfig-1156948534 --namespace=crd-publish-openapi-4307 --namespace=crd-publish-openapi-4307 apply -f -' +Sep 7 07:52:13.180: INFO: stderr: "" +Sep 7 07:52:13.180: INFO: stdout: "e2e-test-crd-publish-openapi-2576-crd.crd-publish-openapi-test-foo.example.com/test-foo created\n" +Sep 7 07:52:13.180: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1156948534 --namespace=crd-publish-openapi-4307 --namespace=crd-publish-openapi-4307 delete e2e-test-crd-publish-openapi-2576-crds test-foo' +Sep 7 07:52:13.297: INFO: stderr: "" +Sep 7 07:52:13.297: INFO: stdout: "e2e-test-crd-publish-openapi-2576-crd.crd-publish-openapi-test-foo.example.com \"test-foo\" deleted\n" +STEP: kubectl validation (kubectl create and apply) rejects request with value outside defined enum values +Sep 7 07:52:13.297: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1156948534 --namespace=crd-publish-openapi-4307 --namespace=crd-publish-openapi-4307 create -f -' +Sep 7 07:52:13.536: INFO: rc: 1 +STEP: kubectl validation (kubectl create and apply) rejects request with unknown properties when disallowed by the schema +Sep 7 07:52:13.536: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1156948534 --namespace=crd-publish-openapi-4307 --namespace=crd-publish-openapi-4307 create -f -' +Sep 7 07:52:13.789: INFO: rc: 1 +Sep 7 07:52:13.789: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1156948534 --namespace=crd-publish-openapi-4307 --namespace=crd-publish-openapi-4307 apply -f -' +Sep 7 07:52:14.028: INFO: rc: 1 +STEP: kubectl validation (kubectl create and apply) rejects request without required properties +Sep 7 07:52:14.028: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1156948534 --namespace=crd-publish-openapi-4307 --namespace=crd-publish-openapi-4307 create -f -' +Sep 7 07:52:14.272: INFO: rc: 1 +Sep 7 07:52:14.273: INFO: Running '/usr/local/bin/kubectl 
--kubeconfig=/tmp/kubeconfig-1156948534 --namespace=crd-publish-openapi-4307 --namespace=crd-publish-openapi-4307 apply -f -' +Sep 7 07:52:14.544: INFO: rc: 1 +STEP: kubectl explain works to explain CR properties +Sep 7 07:52:14.544: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1156948534 --namespace=crd-publish-openapi-4307 explain e2e-test-crd-publish-openapi-2576-crds' +Sep 7 07:52:14.788: INFO: stderr: "" +Sep 7 07:52:14.788: INFO: stdout: "KIND: e2e-test-crd-publish-openapi-2576-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nDESCRIPTION:\n Foo CRD for Testing\n\nFIELDS:\n apiVersion\t\n APIVersion defines the versioned schema of this representation of an\n object. Servers should convert recognized schemas to the latest internal\n value, and may reject unrecognized values. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\n\n kind\t\n Kind is a string value representing the REST resource this object\n represents. Servers may infer this from the endpoint the client submits\n requests to. Cannot be updated. In CamelCase. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\n\n metadata\t\n Standard object's metadata. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n spec\t\n Specification of Foo\n\n status\t\n Status of Foo\n\n" +STEP: kubectl explain works to explain CR properties recursively +Sep 7 07:52:14.789: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1156948534 --namespace=crd-publish-openapi-4307 explain e2e-test-crd-publish-openapi-2576-crds.metadata' +Sep 7 07:52:15.047: INFO: stderr: "" +Sep 7 07:52:15.047: INFO: stdout: "KIND: e2e-test-crd-publish-openapi-2576-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: metadata \n\nDESCRIPTION:\n Standard object's metadata. 
More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n ObjectMeta is metadata that all persisted resources must have, which\n includes all objects users must create.\n\nFIELDS:\n annotations\t\n Annotations is an unstructured key value map stored with a resource that\n may be set by external tools to store and retrieve arbitrary metadata. They\n are not queryable and should be preserved when modifying objects. More\n info: http://kubernetes.io/docs/user-guide/annotations\n\n clusterName\t\n Deprecated: ClusterName is a legacy field that was always cleared by the\n system and never used; it will be removed completely in 1.25.\n\n The name in the go struct is changed to help clients detect accidental use.\n\n creationTimestamp\t\n CreationTimestamp is a timestamp representing the server time when this\n object was created. It is not guaranteed to be set in happens-before order\n across separate operations. Clients may not set this value. It is\n represented in RFC3339 form and is in UTC.\n\n Populated by the system. Read-only. Null for lists. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n deletionGracePeriodSeconds\t\n Number of seconds allowed for this object to gracefully terminate before it\n will be removed from the system. Only set when deletionTimestamp is also\n set. May only be shortened. Read-only.\n\n deletionTimestamp\t\n DeletionTimestamp is RFC 3339 date and time at which this resource will be\n deleted. This field is set by the server when a graceful deletion is\n requested by the user, and is not directly settable by a client. The\n resource is expected to be deleted (no longer visible from resource lists,\n and not reachable by name) after the time in this field, once the\n finalizers list is empty. As long as the finalizers list contains items,\n deletion is blocked. 
Once the deletionTimestamp is set, this value may not\n be unset or be set further into the future, although it may be shortened or\n the resource may be deleted prior to this time. For example, a user may\n request that a pod is deleted in 30 seconds. The Kubelet will react by\n sending a graceful termination signal to the containers in the pod. After\n that 30 seconds, the Kubelet will send a hard termination signal (SIGKILL)\n to the container and after cleanup, remove the pod from the API. In the\n presence of network partitions, this object may still exist after this\n timestamp, until an administrator or automated process can determine the\n resource is fully terminated. If not set, graceful deletion of the object\n has not been requested.\n\n Populated by the system when a graceful deletion is requested. Read-only.\n More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n finalizers\t<[]string>\n Must be empty before the object is deleted from the registry. Each entry is\n an identifier for the responsible component that will remove the entry from\n the list. If the deletionTimestamp of the object is non-nil, entries in\n this list can only be removed. Finalizers may be processed and removed in\n any order. Order is NOT enforced because it introduces significant risk of\n stuck finalizers. finalizers is a shared field, any actor with permission\n can reorder it. If the finalizer list is processed in order, then this can\n lead to a situation in which the component responsible for the first\n finalizer in the list is waiting for a signal (field value, external\n system, or other) produced by a component responsible for a finalizer later\n in the list, resulting in a deadlock. 
Without enforced ordering finalizers\n are free to order amongst themselves and are not vulnerable to ordering\n changes in the list.\n\n generateName\t\n GenerateName is an optional prefix, used by the server, to generate a\n unique name ONLY IF the Name field has not been provided. If this field is\n used, the name returned to the client will be different than the name\n passed. This value will also be combined with a unique suffix. The provided\n value has the same validation rules as the Name field, and may be truncated\n by the length of the suffix required to make the value unique on the\n server.\n\n If this field is specified and the generated name exists, the server will\n return a 409.\n\n Applied only if Name is not specified. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#idempotency\n\n generation\t\n A sequence number representing a specific generation of the desired state.\n Populated by the system. Read-only.\n\n labels\t\n Map of string keys and values that can be used to organize and categorize\n (scope and select) objects. May match selectors of replication controllers\n and services. More info: http://kubernetes.io/docs/user-guide/labels\n\n managedFields\t<[]Object>\n ManagedFields maps workflow-id and version to the set of fields that are\n managed by that workflow. This is mostly for internal housekeeping, and\n users typically shouldn't need to set or understand this field. A workflow\n can be the user's name, a controller's name, or the name of a specific\n apply path like \"ci-cd\". The set of fields is always in the version that\n the workflow used when modifying the object.\n\n name\t\n Name must be unique within a namespace. Is required when creating\n resources, although some resources may allow a client to request the\n generation of an appropriate name automatically. Name is primarily intended\n for creation idempotence and configuration definition. 
Cannot be updated.\n More info: http://kubernetes.io/docs/user-guide/identifiers#names\n\n namespace\t\n Namespace defines the space within which each name must be unique. An empty\n namespace is equivalent to the \"default\" namespace, but \"default\" is the\n canonical representation. Not all objects are required to be scoped to a\n namespace - the value of this field for those objects will be empty.\n\n Must be a DNS_LABEL. Cannot be updated. More info:\n http://kubernetes.io/docs/user-guide/namespaces\n\n ownerReferences\t<[]Object>\n List of objects depended by this object. If ALL objects in the list have\n been deleted, this object will be garbage collected. If this object is\n managed by a controller, then an entry in this list will point to this\n controller, with the controller field set to true. There cannot be more\n than one managing controller.\n\n resourceVersion\t\n An opaque value that represents the internal version of this object that\n can be used by clients to determine when objects have changed. May be used\n for optimistic concurrency, change detection, and the watch operation on a\n resource or set of resources. Clients must treat these values as opaque and\n passed unmodified back to the server. They may only be valid for a\n particular resource or set of resources.\n\n Populated by the system. Read-only. Value must be treated as opaque by\n clients and . More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#concurrency-control-and-consistency\n\n selfLink\t\n Deprecated: selfLink is a legacy read-only field that is no longer\n populated by the system.\n\n uid\t\n UID is the unique in time and space value for this object. It is typically\n generated by the server on successful creation of a resource and is not\n allowed to change on PUT operations.\n\n Populated by the system. Read-only. 
More info:\n http://kubernetes.io/docs/user-guide/identifiers#uids\n\n" +Sep 7 07:52:15.048: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1156948534 --namespace=crd-publish-openapi-4307 explain e2e-test-crd-publish-openapi-2576-crds.spec' +Sep 7 07:52:15.299: INFO: stderr: "" +Sep 7 07:52:15.299: INFO: stdout: "KIND: e2e-test-crd-publish-openapi-2576-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: spec \n\nDESCRIPTION:\n Specification of Foo\n\nFIELDS:\n bars\t<[]Object>\n List of Bars and their specs.\n\n" +Sep 7 07:52:15.299: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1156948534 --namespace=crd-publish-openapi-4307 explain e2e-test-crd-publish-openapi-2576-crds.spec.bars' +Sep 7 07:52:15.551: INFO: stderr: "" +Sep 7 07:52:15.551: INFO: stdout: "KIND: e2e-test-crd-publish-openapi-2576-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: bars <[]Object>\n\nDESCRIPTION:\n List of Bars and their specs.\n\nFIELDS:\n age\t\n Age of Bar.\n\n bazs\t<[]string>\n List of Bazs.\n\n feeling\t\n Whether Bar is feeling great.\n\n name\t -required-\n Name of Bar.\n\n" +STEP: kubectl explain works to return error when explain is called on property that doesn't exist +Sep 7 07:52:15.551: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1156948534 --namespace=crd-publish-openapi-4307 explain e2e-test-crd-publish-openapi-2576-crds.spec.bars2' +Sep 7 07:52:15.818: INFO: rc: 1 +[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + test/e2e/framework/framework.go:188 +Sep 7 07:52:20.894: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "crd-publish-openapi-4307" for this suite. 
+ +• [SLOW TEST:13.493 seconds] +[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] +test/e2e/apimachinery/framework.go:23 + works for CRD with validation schema [Conformance] + test/e2e/framework/framework.go:652 +------------------------------ +{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD with validation schema [Conformance]","total":356,"completed":63,"skipped":1038,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-node] Downward API + should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] + test/e2e/framework/framework.go:652 +[BeforeEach] [sig-node] Downward API + test/e2e/framework/framework.go:187 +STEP: Creating a kubernetes client +Sep 7 07:52:20.916: INFO: >>> kubeConfig: /tmp/kubeconfig-1156948534 +STEP: Building a namespace api object, basename downward-api +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[It] should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] + test/e2e/framework/framework.go:652 +STEP: Creating a pod to test downward api env vars +Sep 7 07:52:20.991: INFO: Waiting up to 5m0s for pod "downward-api-fe3fe636-8868-402e-bb87-cf4b3762e2a1" in namespace "downward-api-6783" to be "Succeeded or Failed" +Sep 7 07:52:21.005: INFO: Pod "downward-api-fe3fe636-8868-402e-bb87-cf4b3762e2a1": Phase="Pending", Reason="", readiness=false. Elapsed: 13.698756ms +Sep 7 07:52:23.017: INFO: Pod "downward-api-fe3fe636-8868-402e-bb87-cf4b3762e2a1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.025780561s +Sep 7 07:52:25.037: INFO: Pod "downward-api-fe3fe636-8868-402e-bb87-cf4b3762e2a1": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.045787555s +STEP: Saw pod success +Sep 7 07:52:25.037: INFO: Pod "downward-api-fe3fe636-8868-402e-bb87-cf4b3762e2a1" satisfied condition "Succeeded or Failed" +Sep 7 07:52:25.040: INFO: Trying to get logs from node 172.31.51.96 pod downward-api-fe3fe636-8868-402e-bb87-cf4b3762e2a1 container dapi-container: +STEP: delete the pod +Sep 7 07:52:25.058: INFO: Waiting for pod downward-api-fe3fe636-8868-402e-bb87-cf4b3762e2a1 to disappear +Sep 7 07:52:25.068: INFO: Pod downward-api-fe3fe636-8868-402e-bb87-cf4b3762e2a1 no longer exists +[AfterEach] [sig-node] Downward API + test/e2e/framework/framework.go:188 +Sep 7 07:52:25.068: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "downward-api-6783" for this suite. +•{"msg":"PASSED [sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]","total":356,"completed":64,"skipped":1101,"failed":0} +SSSSSSS +------------------------------ +[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + should honor timeout [Conformance] + test/e2e/framework/framework.go:652 +[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/framework/framework.go:187 +STEP: Creating a kubernetes client +Sep 7 07:52:25.085: INFO: >>> kubeConfig: /tmp/kubeconfig-1156948534 +STEP: Building a namespace api object, basename webhook +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/apimachinery/webhook.go:89 +STEP: Setting up server cert +STEP: Create role binding to let webhook read extension-apiserver-authentication +STEP: Deploying the webhook pod +STEP: Wait for the deployment to be ready +Sep 7 07:52:25.667: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set +STEP: Deploying 
the webhook service +STEP: Verifying the service has paired with the endpoint +Sep 7 07:52:28.700: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 +[It] should honor timeout [Conformance] + test/e2e/framework/framework.go:652 +STEP: Setting timeout (1s) shorter than webhook latency (5s) +STEP: Registering slow webhook via the AdmissionRegistration API +STEP: Request fails when timeout (1s) is shorter than slow webhook latency (5s) +STEP: Having no error when timeout is shorter than webhook latency and failure policy is ignore +STEP: Registering slow webhook via the AdmissionRegistration API +STEP: Having no error when timeout is longer than webhook latency +STEP: Registering slow webhook via the AdmissionRegistration API +STEP: Having no error when timeout is empty (defaulted to 10s in v1) +STEP: Registering slow webhook via the AdmissionRegistration API +[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/framework/framework.go:188 +Sep 7 07:52:40.900: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "webhook-1924" for this suite. +STEP: Destroying namespace "webhook-1924-markers" for this suite. 
+[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/apimachinery/webhook.go:104 + +• [SLOW TEST:15.934 seconds] +[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] +test/e2e/apimachinery/framework.go:23 + should honor timeout [Conformance] + test/e2e/framework/framework.go:652 +------------------------------ +{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","total":356,"completed":65,"skipped":1108,"failed":0} +SSSS +------------------------------ +[sig-node] Container Runtime blackbox test when starting a container that exits + should run with the expected status [NodeConformance] [Conformance] + test/e2e/framework/framework.go:652 +[BeforeEach] [sig-node] Container Runtime + test/e2e/framework/framework.go:187 +STEP: Creating a kubernetes client +Sep 7 07:52:41.019: INFO: >>> kubeConfig: /tmp/kubeconfig-1156948534 +STEP: Building a namespace api object, basename container-runtime +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[It] should run with the expected status [NodeConformance] [Conformance] + test/e2e/framework/framework.go:652 +STEP: Container 'terminate-cmd-rpa': should get the expected 'RestartCount' +STEP: Container 'terminate-cmd-rpa': should get the expected 'Phase' +STEP: Container 'terminate-cmd-rpa': should get the expected 'Ready' condition +STEP: Container 'terminate-cmd-rpa': should get the expected 'State' +STEP: Container 'terminate-cmd-rpa': should be possible to delete [NodeConformance] +STEP: Container 'terminate-cmd-rpof': should get the expected 'RestartCount' +STEP: Container 'terminate-cmd-rpof': should get the expected 'Phase' +STEP: Container 'terminate-cmd-rpof': should get the expected 'Ready' condition +STEP: Container 'terminate-cmd-rpof': should get the expected 'State' +STEP: Container 'terminate-cmd-rpof': should be 
possible to delete [NodeConformance] +STEP: Container 'terminate-cmd-rpn': should get the expected 'RestartCount' +STEP: Container 'terminate-cmd-rpn': should get the expected 'Phase' +STEP: Container 'terminate-cmd-rpn': should get the expected 'Ready' condition +STEP: Container 'terminate-cmd-rpn': should get the expected 'State' +STEP: Container 'terminate-cmd-rpn': should be possible to delete [NodeConformance] +[AfterEach] [sig-node] Container Runtime + test/e2e/framework/framework.go:188 +Sep 7 07:53:10.599: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "container-runtime-5264" for this suite. + +• [SLOW TEST:29.590 seconds] +[sig-node] Container Runtime +test/e2e/common/node/framework.go:23 + blackbox test + test/e2e/common/node/runtime.go:43 + when starting a container that exits + test/e2e/common/node/runtime.go:44 + should run with the expected status [NodeConformance] [Conformance] + test/e2e/framework/framework.go:652 +------------------------------ +{"msg":"PASSED [sig-node] Container Runtime blackbox test when starting a container that exits should run with the expected status [NodeConformance] [Conformance]","total":356,"completed":66,"skipped":1112,"failed":0} +SSSSSSSSSSSSSSSSS +------------------------------ +[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + works for CRD preserving unknown fields in an embedded object [Conformance] + test/e2e/framework/framework.go:652 +[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + test/e2e/framework/framework.go:187 +STEP: Creating a kubernetes client +Sep 7 07:53:10.610: INFO: >>> kubeConfig: /tmp/kubeconfig-1156948534 +STEP: Building a namespace api object, basename crd-publish-openapi +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[It] works for CRD preserving unknown fields in an embedded object 
[Conformance] + test/e2e/framework/framework.go:652 +Sep 7 07:53:10.642: INFO: >>> kubeConfig: /tmp/kubeconfig-1156948534 +STEP: kubectl validation (kubectl create and apply) allows request with any unknown properties +Sep 7 07:53:14.246: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1156948534 --namespace=crd-publish-openapi-1050 --namespace=crd-publish-openapi-1050 create -f -' +Sep 7 07:53:15.263: INFO: stderr: "" +Sep 7 07:53:15.263: INFO: stdout: "e2e-test-crd-publish-openapi-6036-crd.crd-publish-openapi-test-unknown-in-nested.example.com/test-cr created\n" +Sep 7 07:53:15.263: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1156948534 --namespace=crd-publish-openapi-1050 --namespace=crd-publish-openapi-1050 delete e2e-test-crd-publish-openapi-6036-crds test-cr' +Sep 7 07:53:15.372: INFO: stderr: "" +Sep 7 07:53:15.372: INFO: stdout: "e2e-test-crd-publish-openapi-6036-crd.crd-publish-openapi-test-unknown-in-nested.example.com \"test-cr\" deleted\n" +Sep 7 07:53:15.372: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1156948534 --namespace=crd-publish-openapi-1050 --namespace=crd-publish-openapi-1050 apply -f -' +Sep 7 07:53:16.183: INFO: stderr: "" +Sep 7 07:53:16.183: INFO: stdout: "e2e-test-crd-publish-openapi-6036-crd.crd-publish-openapi-test-unknown-in-nested.example.com/test-cr created\n" +Sep 7 07:53:16.183: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1156948534 --namespace=crd-publish-openapi-1050 --namespace=crd-publish-openapi-1050 delete e2e-test-crd-publish-openapi-6036-crds test-cr' +Sep 7 07:53:16.299: INFO: stderr: "" +Sep 7 07:53:16.299: INFO: stdout: "e2e-test-crd-publish-openapi-6036-crd.crd-publish-openapi-test-unknown-in-nested.example.com \"test-cr\" deleted\n" +STEP: kubectl explain works to explain CR +Sep 7 07:53:16.299: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1156948534 --namespace=crd-publish-openapi-1050 explain 
e2e-test-crd-publish-openapi-6036-crds' +Sep 7 07:53:16.594: INFO: stderr: "" +Sep 7 07:53:16.594: INFO: stdout: "KIND: e2e-test-crd-publish-openapi-6036-crd\nVERSION: crd-publish-openapi-test-unknown-in-nested.example.com/v1\n\nDESCRIPTION:\n preserve-unknown-properties in nested field for Testing\n\nFIELDS:\n apiVersion\t\n APIVersion defines the versioned schema of this representation of an\n object. Servers should convert recognized schemas to the latest internal\n value, and may reject unrecognized values. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\n\n kind\t\n Kind is a string value representing the REST resource this object\n represents. Servers may infer this from the endpoint the client submits\n requests to. Cannot be updated. In CamelCase. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\n\n metadata\t\n Standard object's metadata. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n spec\t<>\n Specification of Waldo\n\n status\t\n Status of Waldo\n\n" +[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + test/e2e/framework/framework.go:188 +Sep 7 07:53:19.535: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "crd-publish-openapi-1050" for this suite. 
+ +• [SLOW TEST:8.945 seconds] +[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] +test/e2e/apimachinery/framework.go:23 + works for CRD preserving unknown fields in an embedded object [Conformance] + test/e2e/framework/framework.go:652 +------------------------------ +{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields in an embedded object [Conformance]","total":356,"completed":67,"skipped":1129,"failed":0} +SSSSSS +------------------------------ +[sig-storage] Projected downwardAPI + should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] + test/e2e/framework/framework.go:652 +[BeforeEach] [sig-storage] Projected downwardAPI + test/e2e/framework/framework.go:187 +STEP: Creating a kubernetes client +Sep 7 07:53:19.554: INFO: >>> kubeConfig: /tmp/kubeconfig-1156948534 +STEP: Building a namespace api object, basename projected +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[BeforeEach] [sig-storage] Projected downwardAPI + test/e2e/common/storage/projected_downwardapi.go:43 +[It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] + test/e2e/framework/framework.go:652 +STEP: Creating a pod to test downward API volume plugin +Sep 7 07:53:19.599: INFO: Waiting up to 5m0s for pod "downwardapi-volume-133be5cc-bf2c-4572-ada0-9b968df6dd79" in namespace "projected-3902" to be "Succeeded or Failed" +Sep 7 07:53:19.605: INFO: Pod "downwardapi-volume-133be5cc-bf2c-4572-ada0-9b968df6dd79": Phase="Pending", Reason="", readiness=false. Elapsed: 6.268314ms +Sep 7 07:53:21.611: INFO: Pod "downwardapi-volume-133be5cc-bf2c-4572-ada0-9b968df6dd79": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.012411735s +Sep 7 07:53:23.624: INFO: Pod "downwardapi-volume-133be5cc-bf2c-4572-ada0-9b968df6dd79": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.024810834s +STEP: Saw pod success +Sep 7 07:53:23.624: INFO: Pod "downwardapi-volume-133be5cc-bf2c-4572-ada0-9b968df6dd79" satisfied condition "Succeeded or Failed" +Sep 7 07:53:23.628: INFO: Trying to get logs from node 172.31.51.96 pod downwardapi-volume-133be5cc-bf2c-4572-ada0-9b968df6dd79 container client-container: +STEP: delete the pod +Sep 7 07:53:23.647: INFO: Waiting for pod downwardapi-volume-133be5cc-bf2c-4572-ada0-9b968df6dd79 to disappear +Sep 7 07:53:23.652: INFO: Pod downwardapi-volume-133be5cc-bf2c-4572-ada0-9b968df6dd79 no longer exists +[AfterEach] [sig-storage] Projected downwardAPI + test/e2e/framework/framework.go:188 +Sep 7 07:53:23.652: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "projected-3902" for this suite. +•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":356,"completed":68,"skipped":1135,"failed":0} +SSSS +------------------------------ +[sig-apps] Deployment + deployment should support proportional scaling [Conformance] + test/e2e/framework/framework.go:652 +[BeforeEach] [sig-apps] Deployment + test/e2e/framework/framework.go:187 +STEP: Creating a kubernetes client +Sep 7 07:53:23.660: INFO: >>> kubeConfig: /tmp/kubeconfig-1156948534 +STEP: Building a namespace api object, basename deployment +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[BeforeEach] [sig-apps] Deployment + test/e2e/apps/deployment.go:91 +[It] deployment should support proportional scaling [Conformance] + test/e2e/framework/framework.go:652 +Sep 7 07:53:23.705: INFO: Creating deployment "webserver-deployment" +Sep 7 07:53:23.709: INFO: 
Waiting for observed generation 1 +Sep 7 07:53:25.759: INFO: Waiting for all required pods to come up +Sep 7 07:53:25.797: INFO: Pod name httpd: Found 10 pods out of 10 +STEP: ensuring each pod is running +Sep 7 07:53:29.926: INFO: Waiting for deployment "webserver-deployment" to complete +Sep 7 07:53:29.931: INFO: Updating deployment "webserver-deployment" with a non-existent image +Sep 7 07:53:29.940: INFO: Updating deployment webserver-deployment +Sep 7 07:53:29.940: INFO: Waiting for observed generation 2 +Sep 7 07:53:31.974: INFO: Waiting for the first rollout's replicaset to have .status.availableReplicas = 8 +Sep 7 07:53:31.978: INFO: Waiting for the first rollout's replicaset to have .spec.replicas = 8 +Sep 7 07:53:31.984: INFO: Waiting for the first rollout's replicaset of deployment "webserver-deployment" to have desired number of replicas +Sep 7 07:53:31.993: INFO: Verifying that the second rollout's replicaset has .status.availableReplicas = 0 +Sep 7 07:53:31.993: INFO: Waiting for the second rollout's replicaset to have .spec.replicas = 5 +Sep 7 07:53:31.999: INFO: Waiting for the second rollout's replicaset of deployment "webserver-deployment" to have desired number of replicas +Sep 7 07:53:32.009: INFO: Verifying that deployment "webserver-deployment" has minimum required number of available replicas +Sep 7 07:53:32.009: INFO: Scaling up the deployment "webserver-deployment" from 10 to 30 +Sep 7 07:53:32.135: INFO: Updating deployment webserver-deployment +Sep 7 07:53:32.135: INFO: Waiting for the replicasets of deployment "webserver-deployment" to have desired number of replicas +Sep 7 07:53:32.391: INFO: Verifying that first rollout's replicaset has .spec.replicas = 20 +Sep 7 07:53:34.889: INFO: Verifying that second rollout's replicaset has .spec.replicas = 13 +[AfterEach] [sig-apps] Deployment + test/e2e/apps/deployment.go:84 +Sep 7 07:53:34.927: INFO: Deployment "webserver-deployment": +&Deployment{ObjectMeta:{webserver-deployment 
deployment-7333 e84a11ab-ef95-4f1f-800b-ad80fb250837 8652 3 2022-09-07 07:53:23 +0000 UTC map[name:httpd] map[deployment.kubernetes.io/revision:2] [] [] [{e2e.test Update apps/v1 2022-09-07 07:53:23 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:progressDeadlineSeconds":{},"f:replicas":{},"f:revisionHistoryLimit":{},"f:selector":{},"f:strategy":{"f:rollingUpdate":{".":{},"f:maxSurge":{},"f:maxUnavailable":{}},"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} } {kube-controller-manager Update apps/v1 2022-09-07 07:53:30 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}}},"f:status":{"f:availableReplicas":{},"f:conditions":{".":{},"k:{\"type\":\"Available\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Progressing\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{},"f:unavailableReplicas":{},"f:updatedReplicas":{}}} status}]},Spec:DeploymentSpec{Replicas:*30,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd] map[] [] [] []} {[] [] [{httpd webserver:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent 
SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc0043f1878 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] nil}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:2,MaxSurge:3,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:3,Replicas:33,UpdatedReplicas:13,AvailableReplicas:8,UnavailableReplicas:25,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:False,Reason:MinimumReplicasUnavailable,Message:Deployment does not have minimum availability.,LastUpdateTime:2022-09-07 07:53:32 +0000 UTC,LastTransitionTime:2022-09-07 07:53:32 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:ReplicaSetUpdated,Message:ReplicaSet "webserver-deployment-57ccb67bb8" is progressing.,LastUpdateTime:2022-09-07 07:53:34 +0000 UTC,LastTransitionTime:2022-09-07 07:53:23 +0000 UTC,},},ReadyReplicas:8,CollisionCount:nil,},} + +Sep 7 07:53:34.957: INFO: New ReplicaSet "webserver-deployment-57ccb67bb8" of Deployment "webserver-deployment": +&ReplicaSet{ObjectMeta:{webserver-deployment-57ccb67bb8 deployment-7333 8f5d2da5-a1e0-49d5-9958-471008cd9a27 8646 3 2022-09-07 07:53:29 +0000 UTC map[name:httpd pod-template-hash:57ccb67bb8] map[deployment.kubernetes.io/desired-replicas:30 deployment.kubernetes.io/max-replicas:33 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment webserver-deployment e84a11ab-ef95-4f1f-800b-ad80fb250837 0xc004423647 0xc004423648}] [] [{kube-controller-manager 
Update apps/v1 2022-09-07 07:53:29 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e84a11ab-ef95-4f1f-800b-ad80fb250837\"}":{}}},"f:spec":{"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} } {kube-controller-manager Update apps/v1 2022-09-07 07:53:30 +0000 UTC FieldsV1 {"f:status":{"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:replicas":{}}} status}]},Spec:ReplicaSetSpec{Replicas:*13,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: 57ccb67bb8,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd pod-template-hash:57ccb67bb8] map[] [] [] []} {[] [] [{httpd webserver:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc0044236e8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] 
nil}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:13,FullyLabeledReplicas:13,ObservedGeneration:3,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} +Sep 7 07:53:34.957: INFO: All old ReplicaSets of Deployment "webserver-deployment": +Sep 7 07:53:34.957: INFO: &ReplicaSet{ObjectMeta:{webserver-deployment-55df494869 deployment-7333 74515a19-e14d-40bc-98e8-9a68fa29bbea 8630 3 2022-09-07 07:53:23 +0000 UTC map[name:httpd pod-template-hash:55df494869] map[deployment.kubernetes.io/desired-replicas:30 deployment.kubernetes.io/max-replicas:33 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment webserver-deployment e84a11ab-ef95-4f1f-800b-ad80fb250837 0xc004423557 0xc004423558}] [] [{kube-controller-manager Update apps/v1 2022-09-07 07:53:23 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e84a11ab-ef95-4f1f-800b-ad80fb250837\"}":{}}},"f:spec":{"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} } {kube-controller-manager Update apps/v1 2022-09-07 07:53:26 +0000 UTC FieldsV1 {"f:status":{"f:availableReplicas":{},"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{}}} status}]},Spec:ReplicaSetSpec{Replicas:*20,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: 55df494869,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 
0 0001-01-01 00:00:00 +0000 UTC map[name:httpd pod-template-hash:55df494869] map[] [] [] []} {[] [] [{httpd k8s.gcr.io/e2e-test-images/httpd:2.4.38-2 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc0044235e8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] nil}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:20,FullyLabeledReplicas:20,ObservedGeneration:3,ReadyReplicas:8,AvailableReplicas:8,Conditions:[]ReplicaSetCondition{},},} +Sep 7 07:53:35.000: INFO: Pod "webserver-deployment-55df494869-5njgc" is available: +&Pod{ObjectMeta:{webserver-deployment-55df494869-5njgc webserver-deployment-55df494869- deployment-7333 3e7ae935-cf28-46bd-b254-5c8a63ec0d1f 8435 0 2022-09-07 07:53:23 +0000 UTC map[name:httpd pod-template-hash:55df494869] map[] [{apps/v1 ReplicaSet webserver-deployment-55df494869 74515a19-e14d-40bc-98e8-9a68fa29bbea 0xc0043f1db0 0xc0043f1db1}] [] [{kube-controller-manager Update v1 2022-09-07 07:53:23 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"74515a19-e14d-40bc-98e8-9a68fa29bbea\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2022-09-07 07:53:26 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"172.20.97.89\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} 
status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-j5dr8,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-2,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-j5dr8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:
nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:172.31.51.97,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-09-07 07:53:23 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-09-07 07:53:26 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-09-07 07:53:26 +0000 
UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-09-07 07:53:23 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.31.51.97,PodIP:172.20.97.89,StartTime:2022-09-07 07:53:23 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2022-09-07 07:53:25 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-2,ImageID:k8s.gcr.io/e2e-test-images/httpd@sha256:1b9d1b2f36cb2dbee1960e82a9344aeb11bd4c4c03abf5e1853e0559c23855e3,ContainerID:containerd://da5275f888a1365f030c3cdfb26fd0b98c85c7908194ac73410508a83770b351,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:172.20.97.89,},},EphemeralContainerStatuses:[]ContainerStatus{},},} +Sep 7 07:53:35.009: INFO: Pod "webserver-deployment-55df494869-5rwr7" is not available: +&Pod{ObjectMeta:{webserver-deployment-55df494869-5rwr7 webserver-deployment-55df494869- deployment-7333 8fde09a0-4027-4a98-964f-7099ecd9e814 8656 0 2022-09-07 07:53:32 +0000 UTC map[name:httpd pod-template-hash:55df494869] map[] [{apps/v1 ReplicaSet webserver-deployment-55df494869 74515a19-e14d-40bc-98e8-9a68fa29bbea 0xc0043f1fe0 0xc0043f1fe1}] [] [{kube-controller-manager Update v1 2022-09-07 07:53:32 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"74515a19-e14d-40bc-98e8-9a68fa29bbea\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2022-09-07 07:53:34 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}} 
status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-95jqf,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-2,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-95jqf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:
nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:172.31.51.96,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-09-07 07:53:32 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-09-07 07:53:32 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 
UTC,LastTransitionTime:2022-09-07 07:53:32 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-09-07 07:53:32 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.31.51.96,PodIP:,StartTime:2022-09-07 07:53:32 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-2,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} +Sep 7 07:53:35.009: INFO: Pod "webserver-deployment-55df494869-62crz" is not available: +&Pod{ObjectMeta:{webserver-deployment-55df494869-62crz webserver-deployment-55df494869- deployment-7333 7c5f4821-f5b4-4e64-99fe-05185334c04f 8672 0 2022-09-07 07:53:32 +0000 UTC map[name:httpd pod-template-hash:55df494869] map[] [{apps/v1 ReplicaSet webserver-deployment-55df494869 74515a19-e14d-40bc-98e8-9a68fa29bbea 0xc00443e247 0xc00443e248}] [] [{kube-controller-manager Update v1 2022-09-07 07:53:32 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"74515a19-e14d-40bc-98e8-9a68fa29bbea\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2022-09-07 07:53:34 +0000 UTC 
FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}} status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-btxbj,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-2,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{}
,},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-btxbj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:172.31.51.96,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,},Status:PodStatus{Phase:Pending,Con
ditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-09-07 07:53:33 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-09-07 07:53:33 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-09-07 07:53:33 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-09-07 07:53:32 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.31.51.96,PodIP:,StartTime:2022-09-07 07:53:33 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-2,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} +Sep 7 07:53:35.009: INFO: Pod "webserver-deployment-55df494869-6mlh8" is available: +&Pod{ObjectMeta:{webserver-deployment-55df494869-6mlh8 webserver-deployment-55df494869- deployment-7333 460f6ddb-8ae1-4a1d-a6f2-3856b9984d0b 8425 0 2022-09-07 07:53:23 +0000 UTC map[name:httpd pod-template-hash:55df494869] map[] [{apps/v1 ReplicaSet webserver-deployment-55df494869 74515a19-e14d-40bc-98e8-9a68fa29bbea 0xc00443e4d7 0xc00443e4d8}] [] [{kube-controller-manager Update v1 2022-09-07 07:53:23 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"74515a19-e14d-40bc-98e8-9a68fa29bbea\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2022-09-07 07:53:26 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"172.20.97.91\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} 
status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-kbrdb,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-2,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-kbrdb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:
nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:172.31.51.97,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-09-07 07:53:23 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-09-07 07:53:26 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-09-07 07:53:26 +0000 
UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-09-07 07:53:23 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.31.51.97,PodIP:172.20.97.91,StartTime:2022-09-07 07:53:23 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2022-09-07 07:53:25 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-2,ImageID:k8s.gcr.io/e2e-test-images/httpd@sha256:1b9d1b2f36cb2dbee1960e82a9344aeb11bd4c4c03abf5e1853e0559c23855e3,ContainerID:containerd://0099f430221357b42d2d46fc53416235f0bde04aa1643448a5b8e58784f3c298,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:172.20.97.91,},},EphemeralContainerStatuses:[]ContainerStatus{},},} +Sep 7 07:53:35.010: INFO: Pod "webserver-deployment-55df494869-7889v" is not available: +&Pod{ObjectMeta:{webserver-deployment-55df494869-7889v webserver-deployment-55df494869- deployment-7333 b53f8242-0d7a-4598-a4f4-11dc9d5dad97 8651 0 2022-09-07 07:53:32 +0000 UTC map[name:httpd pod-template-hash:55df494869] map[] [{apps/v1 ReplicaSet webserver-deployment-55df494869 74515a19-e14d-40bc-98e8-9a68fa29bbea 0xc00443e730 0xc00443e731}] [] [{kube-controller-manager Update v1 2022-09-07 07:53:32 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"74515a19-e14d-40bc-98e8-9a68fa29bbea\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2022-09-07 07:53:34 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}} 
status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-g89p2,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-2,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-g89p2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:
nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:172.31.51.97,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-09-07 07:53:32 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-09-07 07:53:32 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 
UTC,LastTransitionTime:2022-09-07 07:53:32 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-09-07 07:53:32 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.31.51.97,PodIP:,StartTime:2022-09-07 07:53:32 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-2,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} +Sep 7 07:53:35.016: INFO: Pod "webserver-deployment-55df494869-8z9qr" is available: +&Pod{ObjectMeta:{webserver-deployment-55df494869-8z9qr webserver-deployment-55df494869- deployment-7333 92dbb8c7-fd2b-4b8f-9c17-c22a1a4cc172 8429 0 2022-09-07 07:53:23 +0000 UTC map[name:httpd pod-template-hash:55df494869] map[] [{apps/v1 ReplicaSet webserver-deployment-55df494869 74515a19-e14d-40bc-98e8-9a68fa29bbea 0xc00443e8f7 0xc00443e8f8}] [] [{kube-controller-manager Update v1 2022-09-07 07:53:23 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"74515a19-e14d-40bc-98e8-9a68fa29bbea\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2022-09-07 07:53:26 +0000 UTC 
FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"172.20.97.90\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-zkft4,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-2,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:R
esourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-zkft4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:172.31.51.97,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil
,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-09-07 07:53:23 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-09-07 07:53:26 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-09-07 07:53:26 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-09-07 07:53:23 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.31.51.97,PodIP:172.20.97.90,StartTime:2022-09-07 07:53:23 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2022-09-07 07:53:25 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-2,ImageID:k8s.gcr.io/e2e-test-images/httpd@sha256:1b9d1b2f36cb2dbee1960e82a9344aeb11bd4c4c03abf5e1853e0559c23855e3,ContainerID:containerd://e6d34c8979dcd4c145849b3fc14204e8500c23166252c855bb5b1ceb62fa9996,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:172.20.97.90,},},EphemeralContainerStatuses:[]ContainerStatus{},},} +Sep 7 07:53:35.018: INFO: Pod "webserver-deployment-55df494869-bc5rx" is available: +&Pod{ObjectMeta:{webserver-deployment-55df494869-bc5rx webserver-deployment-55df494869- deployment-7333 16cf60e6-e48d-4a37-9050-0ebb9937fcf4 8465 0 2022-09-07 07:53:23 +0000 UTC map[name:httpd pod-template-hash:55df494869] map[] [{apps/v1 ReplicaSet webserver-deployment-55df494869 74515a19-e14d-40bc-98e8-9a68fa29bbea 0xc00443eae0 0xc00443eae1}] [] [{kube-controller-manager Update v1 2022-09-07 07:53:23 
+0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"74515a19-e14d-40bc-98e8-9a68fa29bbea\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2022-09-07 07:53:28 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"172.20.75.12\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} 
status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-c7srn,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-2,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-c7srn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:
nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:172.31.51.96,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-09-07 07:53:23 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-09-07 07:53:28 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-09-07 07:53:28 +0000 
UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-09-07 07:53:23 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.31.51.96,PodIP:172.20.75.12,StartTime:2022-09-07 07:53:23 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2022-09-07 07:53:27 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-2,ImageID:k8s.gcr.io/e2e-test-images/httpd@sha256:1b9d1b2f36cb2dbee1960e82a9344aeb11bd4c4c03abf5e1853e0559c23855e3,ContainerID:containerd://c10112a29b89a7d53f0cb2a60063baa3e5312640fa718d7408ad821b235a9f28,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:172.20.75.12,},},EphemeralContainerStatuses:[]ContainerStatus{},},} +Sep 7 07:53:35.018: INFO: Pod "webserver-deployment-55df494869-bhvfs" is not available: +&Pod{ObjectMeta:{webserver-deployment-55df494869-bhvfs webserver-deployment-55df494869- deployment-7333 77cb521f-3237-4ca9-ab2b-bf10768deb45 8641 0 2022-09-07 07:53:32 +0000 UTC map[name:httpd pod-template-hash:55df494869] map[] [{apps/v1 ReplicaSet webserver-deployment-55df494869 74515a19-e14d-40bc-98e8-9a68fa29bbea 0xc00443ed70 0xc00443ed71}] [] [{kube-controller-manager Update v1 2022-09-07 07:53:32 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"74515a19-e14d-40bc-98e8-9a68fa29bbea\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2022-09-07 07:53:33 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}} 
status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-57wvz,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-2,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-57wvz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:
nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:172.31.51.96,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-09-07 07:53:32 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-09-07 07:53:32 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 
UTC,LastTransitionTime:2022-09-07 07:53:32 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-09-07 07:53:32 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.31.51.96,PodIP:,StartTime:2022-09-07 07:53:32 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-2,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} +Sep 7 07:53:35.024: INFO: Pod "webserver-deployment-55df494869-czqkq" is not available: +&Pod{ObjectMeta:{webserver-deployment-55df494869-czqkq webserver-deployment-55df494869- deployment-7333 975a8be5-7843-4967-a43d-1df20f16d7ec 8632 0 2022-09-07 07:53:32 +0000 UTC map[name:httpd pod-template-hash:55df494869] map[] [{apps/v1 ReplicaSet webserver-deployment-55df494869 74515a19-e14d-40bc-98e8-9a68fa29bbea 0xc00443ef67 0xc00443ef68}] [] [{kube-controller-manager Update v1 2022-09-07 07:53:32 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"74515a19-e14d-40bc-98e8-9a68fa29bbea\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2022-09-07 07:53:33 +0000 UTC 
FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}} status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-27dn6,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-2,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{}
,},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-27dn6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:172.31.51.96,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,},Status:PodStatus{Phase:Pending,Con
ditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-09-07 07:53:32 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-09-07 07:53:32 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-09-07 07:53:32 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-09-07 07:53:32 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.31.51.96,PodIP:,StartTime:2022-09-07 07:53:32 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-2,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} +Sep 7 07:53:35.025: INFO: Pod "webserver-deployment-55df494869-dtk2t" is not available: +&Pod{ObjectMeta:{webserver-deployment-55df494869-dtk2t webserver-deployment-55df494869- deployment-7333 ca5c0b59-fbbe-4376-8db8-20672b9ef7c6 8665 0 2022-09-07 07:53:32 +0000 UTC map[name:httpd pod-template-hash:55df494869] map[] [{apps/v1 ReplicaSet webserver-deployment-55df494869 74515a19-e14d-40bc-98e8-9a68fa29bbea 0xc00443f1c7 0xc00443f1c8}] [] [{kube-controller-manager Update v1 2022-09-07 07:53:32 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"74515a19-e14d-40bc-98e8-9a68fa29bbea\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2022-09-07 07:53:34 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}} 
status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-s29fq,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-2,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-s29fq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:
nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:172.31.51.97,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-09-07 07:53:32 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-09-07 07:53:32 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 
UTC,LastTransitionTime:2022-09-07 07:53:32 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-09-07 07:53:32 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.31.51.97,PodIP:,StartTime:2022-09-07 07:53:32 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-2,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} +Sep 7 07:53:35.025: INFO: Pod "webserver-deployment-55df494869-dtw8n" is not available: +&Pod{ObjectMeta:{webserver-deployment-55df494869-dtw8n webserver-deployment-55df494869- deployment-7333 7035b88c-1cab-4acc-aa33-bce9d2bbb671 8668 0 2022-09-07 07:53:32 +0000 UTC map[name:httpd pod-template-hash:55df494869] map[] [{apps/v1 ReplicaSet webserver-deployment-55df494869 74515a19-e14d-40bc-98e8-9a68fa29bbea 0xc00443f427 0xc00443f428}] [] [{kube-controller-manager Update v1 2022-09-07 07:53:32 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"74515a19-e14d-40bc-98e8-9a68fa29bbea\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2022-09-07 07:53:34 +0000 UTC 
FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}} status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-kdpql,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-2,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{}
,},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-kdpql,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:172.31.51.97,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,},Status:PodStatus{Phase:Pending,Con
ditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-09-07 07:53:32 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-09-07 07:53:32 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-09-07 07:53:32 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-09-07 07:53:32 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.31.51.97,PodIP:,StartTime:2022-09-07 07:53:32 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-2,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} +Sep 7 07:53:35.025: INFO: Pod "webserver-deployment-55df494869-h4x8h" is available: +&Pod{ObjectMeta:{webserver-deployment-55df494869-h4x8h webserver-deployment-55df494869- deployment-7333 92b2f40d-d01a-432b-86fb-f4a5a2f7e7b0 8431 0 2022-09-07 07:53:23 +0000 UTC map[name:httpd pod-template-hash:55df494869] map[] [{apps/v1 ReplicaSet webserver-deployment-55df494869 74515a19-e14d-40bc-98e8-9a68fa29bbea 0xc00443f6e7 0xc00443f6e8}] [] [{kube-controller-manager Update v1 2022-09-07 07:53:23 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"74515a19-e14d-40bc-98e8-9a68fa29bbea\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2022-09-07 07:53:26 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"172.20.97.92\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} 
status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-968hj,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-2,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-968hj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:
nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:172.31.51.97,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-09-07 07:53:23 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-09-07 07:53:26 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-09-07 07:53:26 +0000 
UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-09-07 07:53:23 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.31.51.97,PodIP:172.20.97.92,StartTime:2022-09-07 07:53:23 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2022-09-07 07:53:25 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-2,ImageID:k8s.gcr.io/e2e-test-images/httpd@sha256:1b9d1b2f36cb2dbee1960e82a9344aeb11bd4c4c03abf5e1853e0559c23855e3,ContainerID:containerd://43526450e0338184af85085d7b3e759353d24d8055209107747305e87637aa55,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:172.20.97.92,},},EphemeralContainerStatuses:[]ContainerStatus{},},} +Sep 7 07:53:35.042: INFO: Pod "webserver-deployment-55df494869-hthhl" is not available: +&Pod{ObjectMeta:{webserver-deployment-55df494869-hthhl webserver-deployment-55df494869- deployment-7333 6f54ce94-3285-474d-9fa2-a9814d07c1c3 8593 0 2022-09-07 07:53:32 +0000 UTC map[name:httpd pod-template-hash:55df494869] map[] [{apps/v1 ReplicaSet webserver-deployment-55df494869 74515a19-e14d-40bc-98e8-9a68fa29bbea 0xc00443f990 0xc00443f991}] [] [{kube-controller-manager Update v1 2022-09-07 07:53:32 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"74515a19-e14d-40bc-98e8-9a68fa29bbea\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2022-09-07 07:53:32 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}} 
status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-q6vjn,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-2,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-q6vjn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:
nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:172.31.51.96,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-09-07 07:53:32 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-09-07 07:53:32 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 
UTC,LastTransitionTime:2022-09-07 07:53:32 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-09-07 07:53:32 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.31.51.96,PodIP:,StartTime:2022-09-07 07:53:32 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-2,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} +Sep 7 07:53:35.043: INFO: Pod "webserver-deployment-55df494869-ldkmx" is not available: +&Pod{ObjectMeta:{webserver-deployment-55df494869-ldkmx webserver-deployment-55df494869- deployment-7333 ec202de8-6d96-4470-a580-3e8bb4d87d7b 8571 0 2022-09-07 07:53:32 +0000 UTC map[name:httpd pod-template-hash:55df494869] map[] [{apps/v1 ReplicaSet webserver-deployment-55df494869 74515a19-e14d-40bc-98e8-9a68fa29bbea 0xc00443fc07 0xc00443fc08}] [] [{kube-controller-manager Update v1 2022-09-07 07:53:32 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"74515a19-e14d-40bc-98e8-9a68fa29bbea\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2022-09-07 07:53:32 +0000 UTC 
FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}} status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-tz7p4,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-2,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{}
,},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-tz7p4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:172.31.51.96,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,},Status:PodStatus{Phase:Pending,Con
ditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-09-07 07:53:32 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-09-07 07:53:32 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-09-07 07:53:32 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-09-07 07:53:32 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.31.51.96,PodIP:,StartTime:2022-09-07 07:53:32 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-2,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} +Sep 7 07:53:35.043: INFO: Pod "webserver-deployment-55df494869-lzxj7" is available: +&Pod{ObjectMeta:{webserver-deployment-55df494869-lzxj7 webserver-deployment-55df494869- deployment-7333 069c1ef5-536c-4721-8521-816653c74d63 8423 0 2022-09-07 07:53:23 +0000 UTC map[name:httpd pod-template-hash:55df494869] map[] [{apps/v1 ReplicaSet webserver-deployment-55df494869 74515a19-e14d-40bc-98e8-9a68fa29bbea 0xc00443fea7 0xc00443fea8}] [] [{kube-controller-manager Update v1 2022-09-07 07:53:23 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"74515a19-e14d-40bc-98e8-9a68fa29bbea\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2022-09-07 07:53:26 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"172.20.97.88\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} 
status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-xw6d9,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-2,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-xw6d9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:
nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:172.31.51.97,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-09-07 07:53:23 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-09-07 07:53:26 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-09-07 07:53:26 +0000 
UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-09-07 07:53:23 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.31.51.97,PodIP:172.20.97.88,StartTime:2022-09-07 07:53:23 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2022-09-07 07:53:25 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-2,ImageID:k8s.gcr.io/e2e-test-images/httpd@sha256:1b9d1b2f36cb2dbee1960e82a9344aeb11bd4c4c03abf5e1853e0559c23855e3,ContainerID:containerd://bcba381704f460a3397ed9c83c8ccfd3a18b388918d9432002bb91b83a672b0c,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:172.20.97.88,},},EphemeralContainerStatuses:[]ContainerStatus{},},} +Sep 7 07:53:35.043: INFO: Pod "webserver-deployment-55df494869-pq6qt" is not available: +&Pod{ObjectMeta:{webserver-deployment-55df494869-pq6qt webserver-deployment-55df494869- deployment-7333 f5f5b750-054f-4b97-a9ea-9bd7999dedde 8660 0 2022-09-07 07:53:32 +0000 UTC map[name:httpd pod-template-hash:55df494869] map[] [{apps/v1 ReplicaSet webserver-deployment-55df494869 74515a19-e14d-40bc-98e8-9a68fa29bbea 0xc00446a100 0xc00446a101}] [] [{kube-controller-manager Update v1 2022-09-07 07:53:32 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"74515a19-e14d-40bc-98e8-9a68fa29bbea\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2022-09-07 07:53:34 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}} 
status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-49cgk,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-2,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-49cgk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:
nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:172.31.51.97,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-09-07 07:53:32 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-09-07 07:53:32 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 
UTC,LastTransitionTime:2022-09-07 07:53:32 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-09-07 07:53:32 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.31.51.97,PodIP:,StartTime:2022-09-07 07:53:32 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-2,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} +Sep 7 07:53:35.043: INFO: Pod "webserver-deployment-55df494869-rg2nj" is not available: +&Pod{ObjectMeta:{webserver-deployment-55df494869-rg2nj webserver-deployment-55df494869- deployment-7333 ddc826cc-69f8-4c40-9603-b90d4848737a 8678 0 2022-09-07 07:53:32 +0000 UTC map[name:httpd pod-template-hash:55df494869] map[] [{apps/v1 ReplicaSet webserver-deployment-55df494869 74515a19-e14d-40bc-98e8-9a68fa29bbea 0xc00446a317 0xc00446a318}] [] [{kube-controller-manager Update v1 2022-09-07 07:53:32 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"74515a19-e14d-40bc-98e8-9a68fa29bbea\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2022-09-07 07:53:34 +0000 UTC 
FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}} status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-lmrs9,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-2,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{}
,},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-lmrs9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:172.31.51.96,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,},Status:PodStatus{Phase:Pending,Con
ditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-09-07 07:53:33 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-09-07 07:53:33 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-09-07 07:53:33 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-09-07 07:53:32 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.31.51.96,PodIP:,StartTime:2022-09-07 07:53:33 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-2,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} +Sep 7 07:53:35.043: INFO: Pod "webserver-deployment-55df494869-vswj4" is not available: +&Pod{ObjectMeta:{webserver-deployment-55df494869-vswj4 webserver-deployment-55df494869- deployment-7333 88f25423-23f0-481b-a58e-2ca1d5ddd177 8626 0 2022-09-07 07:53:32 +0000 UTC map[name:httpd pod-template-hash:55df494869] map[] [{apps/v1 ReplicaSet webserver-deployment-55df494869 74515a19-e14d-40bc-98e8-9a68fa29bbea 0xc00446a517 0xc00446a518}] [] [{kube-controller-manager Update v1 2022-09-07 07:53:32 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"74515a19-e14d-40bc-98e8-9a68fa29bbea\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2022-09-07 07:53:33 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}} 
status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-78t9s,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-2,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-78t9s,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:
nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:172.31.51.97,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-09-07 07:53:32 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-09-07 07:53:32 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 
UTC,LastTransitionTime:2022-09-07 07:53:32 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-09-07 07:53:32 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.31.51.97,PodIP:,StartTime:2022-09-07 07:53:32 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-2,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} +Sep 7 07:53:35.044: INFO: Pod "webserver-deployment-55df494869-x2k8p" is available: +&Pod{ObjectMeta:{webserver-deployment-55df494869-x2k8p webserver-deployment-55df494869- deployment-7333 1dfaff49-1618-40f6-b2e9-fb27551d2893 8468 0 2022-09-07 07:53:23 +0000 UTC map[name:httpd pod-template-hash:55df494869] map[] [{apps/v1 ReplicaSet webserver-deployment-55df494869 74515a19-e14d-40bc-98e8-9a68fa29bbea 0xc00446a6f7 0xc00446a6f8}] [] [{kube-controller-manager Update v1 2022-09-07 07:53:23 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"74515a19-e14d-40bc-98e8-9a68fa29bbea\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2022-09-07 07:53:28 +0000 UTC 
FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"172.20.75.13\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-x9v5t,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-2,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:R
esourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-x9v5t,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:172.31.51.96,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil
,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-09-07 07:53:23 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-09-07 07:53:28 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-09-07 07:53:28 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-09-07 07:53:23 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.31.51.96,PodIP:172.20.75.13,StartTime:2022-09-07 07:53:23 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2022-09-07 07:53:27 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-2,ImageID:k8s.gcr.io/e2e-test-images/httpd@sha256:1b9d1b2f36cb2dbee1960e82a9344aeb11bd4c4c03abf5e1853e0559c23855e3,ContainerID:containerd://8d677c314e183a0d633ef1d2b706ebeb17f284cab7e1383da024c2f1d1d00655,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:172.20.75.13,},},EphemeralContainerStatuses:[]ContainerStatus{},},} +Sep 7 07:53:35.044: INFO: Pod "webserver-deployment-55df494869-xgs2c" is available: +&Pod{ObjectMeta:{webserver-deployment-55df494869-xgs2c webserver-deployment-55df494869- deployment-7333 c1524352-2677-4ae8-a8ce-13eede34da82 8459 0 2022-09-07 07:53:23 +0000 UTC map[name:httpd pod-template-hash:55df494869] map[] [{apps/v1 ReplicaSet webserver-deployment-55df494869 74515a19-e14d-40bc-98e8-9a68fa29bbea 0xc00446a930 0xc00446a931}] [] [{kube-controller-manager Update v1 2022-09-07 07:53:23 
+0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"74515a19-e14d-40bc-98e8-9a68fa29bbea\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2022-09-07 07:53:28 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"172.20.75.9\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} 
status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-sb4fj,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-2,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-sb4fj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:
nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:172.31.51.96,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-09-07 07:53:23 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-09-07 07:53:28 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-09-07 07:53:28 +0000 
UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-09-07 07:53:23 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.31.51.96,PodIP:172.20.75.9,StartTime:2022-09-07 07:53:23 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2022-09-07 07:53:27 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-2,ImageID:k8s.gcr.io/e2e-test-images/httpd@sha256:1b9d1b2f36cb2dbee1960e82a9344aeb11bd4c4c03abf5e1853e0559c23855e3,ContainerID:containerd://f6b274ef5001eb15a4893f4ae694bcdebbdf804805422ab08fddd6423c73660a,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:172.20.75.9,},},EphemeralContainerStatuses:[]ContainerStatus{},},} +Sep 7 07:53:35.044: INFO: Pod "webserver-deployment-57ccb67bb8-5bb4t" is not available: +&Pod{ObjectMeta:{webserver-deployment-57ccb67bb8-5bb4t webserver-deployment-57ccb67bb8- deployment-7333 a977fa9e-0a5d-4ce8-b1dd-5c66bf0c121c 8586 0 2022-09-07 07:53:32 +0000 UTC map[name:httpd pod-template-hash:57ccb67bb8] map[] [{apps/v1 ReplicaSet webserver-deployment-57ccb67bb8 8f5d2da5-a1e0-49d5-9958-471008cd9a27 0xc00446ab60 0xc00446ab61}] [] [{kube-controller-manager Update v1 2022-09-07 07:53:32 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8f5d2da5-a1e0-49d5-9958-471008cd9a27\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2022-09-07 07:53:32 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}} 
status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-w9b98,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-w9b98,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOpt
ions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:172.31.51.97,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-09-07 07:53:32 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-09-07 07:53:32 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-09-07 07:53:32 +0000 
UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-09-07 07:53:32 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.31.51.97,PodIP:,StartTime:2022-09-07 07:53:32 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} +Sep 7 07:53:35.051: INFO: Pod "webserver-deployment-57ccb67bb8-7t2nh" is not available: +&Pod{ObjectMeta:{webserver-deployment-57ccb67bb8-7t2nh webserver-deployment-57ccb67bb8- deployment-7333 d50db4c8-9418-46db-bb1a-ef51b3cd1fab 8659 0 2022-09-07 07:53:32 +0000 UTC map[name:httpd pod-template-hash:57ccb67bb8] map[] [{apps/v1 ReplicaSet webserver-deployment-57ccb67bb8 8f5d2da5-a1e0-49d5-9958-471008cd9a27 0xc00446ae47 0xc00446ae48}] [] [{kube-controller-manager Update v1 2022-09-07 07:53:32 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8f5d2da5-a1e0-49d5-9958-471008cd9a27\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2022-09-07 07:53:34 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}} status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-t6ln7,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeM
ount{Name:kube-api-access-t6ln7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:172.31.51.96,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{T
ype:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-09-07 07:53:32 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-09-07 07:53:32 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-09-07 07:53:32 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-09-07 07:53:32 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.31.51.96,PodIP:,StartTime:2022-09-07 07:53:32 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} +Sep 7 07:53:35.052: INFO: Pod "webserver-deployment-57ccb67bb8-b6ptv" is not available: +&Pod{ObjectMeta:{webserver-deployment-57ccb67bb8-b6ptv webserver-deployment-57ccb67bb8- deployment-7333 2c86a713-1f0d-4f65-b117-c3cc200c3b33 8505 0 2022-09-07 07:53:30 +0000 UTC map[name:httpd pod-template-hash:57ccb67bb8] map[] [{apps/v1 ReplicaSet webserver-deployment-57ccb67bb8 8f5d2da5-a1e0-49d5-9958-471008cd9a27 0xc00446b0a7 0xc00446b0a8}] [] [{kube-controller-manager Update v1 2022-09-07 07:53:30 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8f5d2da5-a1e0-49d5-9958-471008cd9a27\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2022-09-07 07:53:30 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}} 
status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-p9jg9,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-p9jg9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOpt
ions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:172.31.51.96,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-09-07 07:53:30 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-09-07 07:53:30 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-09-07 07:53:30 +0000 
UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-09-07 07:53:30 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.31.51.96,PodIP:,StartTime:2022-09-07 07:53:30 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} +Sep 7 07:53:35.052: INFO: Pod "webserver-deployment-57ccb67bb8-grhxj" is not available: +&Pod{ObjectMeta:{webserver-deployment-57ccb67bb8-grhxj webserver-deployment-57ccb67bb8- deployment-7333 4b75b724-a782-4964-a2f2-561f4b294267 8531 0 2022-09-07 07:53:30 +0000 UTC map[name:httpd pod-template-hash:57ccb67bb8] map[] [{apps/v1 ReplicaSet webserver-deployment-57ccb67bb8 8f5d2da5-a1e0-49d5-9958-471008cd9a27 0xc00446b2f7 0xc00446b2f8}] [] [{kube-controller-manager Update v1 2022-09-07 07:53:30 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8f5d2da5-a1e0-49d5-9958-471008cd9a27\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2022-09-07 07:53:30 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}} status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-gsr6v,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeM
ount{Name:kube-api-access-gsr6v,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:172.31.51.97,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{T
ype:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-09-07 07:53:30 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-09-07 07:53:30 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-09-07 07:53:30 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-09-07 07:53:30 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.31.51.97,PodIP:,StartTime:2022-09-07 07:53:30 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} +Sep 7 07:53:35.052: INFO: Pod "webserver-deployment-57ccb67bb8-hcv9z" is not available: +&Pod{ObjectMeta:{webserver-deployment-57ccb67bb8-hcv9z webserver-deployment-57ccb67bb8- deployment-7333 920a6183-0021-4f86-9137-f27b1c5fd46c 8657 0 2022-09-07 07:53:32 +0000 UTC map[name:httpd pod-template-hash:57ccb67bb8] map[] [{apps/v1 ReplicaSet webserver-deployment-57ccb67bb8 8f5d2da5-a1e0-49d5-9958-471008cd9a27 0xc00446b547 0xc00446b548}] [] [{kube-controller-manager Update v1 2022-09-07 07:53:32 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8f5d2da5-a1e0-49d5-9958-471008cd9a27\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2022-09-07 07:53:34 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}} 
status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-zxndh,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-zxndh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOpt
ions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:172.31.51.97,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-09-07 07:53:32 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-09-07 07:53:32 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-09-07 07:53:32 +0000 
UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-09-07 07:53:32 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.31.51.97,PodIP:,StartTime:2022-09-07 07:53:32 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} +Sep 7 07:53:35.052: INFO: Pod "webserver-deployment-57ccb67bb8-hwgj5" is not available: +&Pod{ObjectMeta:{webserver-deployment-57ccb67bb8-hwgj5 webserver-deployment-57ccb67bb8- deployment-7333 cd444574-a551-4056-9d7f-462874c7e84b 8639 0 2022-09-07 07:53:32 +0000 UTC map[name:httpd pod-template-hash:57ccb67bb8] map[] [{apps/v1 ReplicaSet webserver-deployment-57ccb67bb8 8f5d2da5-a1e0-49d5-9958-471008cd9a27 0xc00446b737 0xc00446b738}] [] [{kube-controller-manager Update v1 2022-09-07 07:53:32 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8f5d2da5-a1e0-49d5-9958-471008cd9a27\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2022-09-07 07:53:33 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}} status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-swhp9,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeM
ount{Name:kube-api-access-swhp9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:172.31.51.97,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{T
ype:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-09-07 07:53:32 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-09-07 07:53:32 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-09-07 07:53:32 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-09-07 07:53:32 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.31.51.97,PodIP:,StartTime:2022-09-07 07:53:32 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} +Sep 7 07:53:35.052: INFO: Pod "webserver-deployment-57ccb67bb8-kpq9q" is not available: +&Pod{ObjectMeta:{webserver-deployment-57ccb67bb8-kpq9q webserver-deployment-57ccb67bb8- deployment-7333 35eda67a-d27e-4906-8f1a-27eaa521ecb8 8654 0 2022-09-07 07:53:32 +0000 UTC map[name:httpd pod-template-hash:57ccb67bb8] map[] [{apps/v1 ReplicaSet webserver-deployment-57ccb67bb8 8f5d2da5-a1e0-49d5-9958-471008cd9a27 0xc00446b927 0xc00446b928}] [] [{kube-controller-manager Update v1 2022-09-07 07:53:32 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8f5d2da5-a1e0-49d5-9958-471008cd9a27\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2022-09-07 07:53:34 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}} 
status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-zdkv9,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-zdkv9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOpt
ions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:172.31.51.97,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-09-07 07:53:32 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-09-07 07:53:32 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-09-07 07:53:32 +0000 
UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-09-07 07:53:32 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.31.51.97,PodIP:,StartTime:2022-09-07 07:53:32 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} +Sep 7 07:53:35.053: INFO: Pod "webserver-deployment-57ccb67bb8-kzfj8" is not available: +&Pod{ObjectMeta:{webserver-deployment-57ccb67bb8-kzfj8 webserver-deployment-57ccb67bb8- deployment-7333 3b31dd5d-f725-4dab-a0be-901882bc2461 8663 0 2022-09-07 07:53:32 +0000 UTC map[name:httpd pod-template-hash:57ccb67bb8] map[] [{apps/v1 ReplicaSet webserver-deployment-57ccb67bb8 8f5d2da5-a1e0-49d5-9958-471008cd9a27 0xc00446bb57 0xc00446bb58}] [] [{kube-controller-manager Update v1 2022-09-07 07:53:32 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8f5d2da5-a1e0-49d5-9958-471008cd9a27\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2022-09-07 07:53:34 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}} status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-2bkz9,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeM
ount{Name:kube-api-access-2bkz9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:172.31.51.96,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{T
ype:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-09-07 07:53:32 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-09-07 07:53:32 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-09-07 07:53:32 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-09-07 07:53:32 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.31.51.96,PodIP:,StartTime:2022-09-07 07:53:32 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} +Sep 7 07:53:35.056: INFO: Pod "webserver-deployment-57ccb67bb8-lfmnz" is not available: +&Pod{ObjectMeta:{webserver-deployment-57ccb67bb8-lfmnz webserver-deployment-57ccb67bb8- deployment-7333 3f787ffa-1b3b-4709-af9b-cd1015f4fc99 8498 0 2022-09-07 07:53:30 +0000 UTC map[name:httpd pod-template-hash:57ccb67bb8] map[] [{apps/v1 ReplicaSet webserver-deployment-57ccb67bb8 8f5d2da5-a1e0-49d5-9958-471008cd9a27 0xc00446bd87 0xc00446bd88}] [] [{kube-controller-manager Update v1 2022-09-07 07:53:30 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8f5d2da5-a1e0-49d5-9958-471008cd9a27\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2022-09-07 07:53:30 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}} 
status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-6vzhv,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-6vzhv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOpt
ions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:172.31.51.96,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-09-07 07:53:30 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-09-07 07:53:30 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-09-07 07:53:30 +0000 
UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-09-07 07:53:30 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.31.51.96,PodIP:,StartTime:2022-09-07 07:53:30 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} +Sep 7 07:53:35.057: INFO: Pod "webserver-deployment-57ccb67bb8-r2xqj" is not available: +&Pod{ObjectMeta:{webserver-deployment-57ccb67bb8-r2xqj webserver-deployment-57ccb67bb8- deployment-7333 d68efd51-2717-4bc0-b9a8-20070be2b04d 8677 0 2022-09-07 07:53:30 +0000 UTC map[name:httpd pod-template-hash:57ccb67bb8] map[] [{apps/v1 ReplicaSet webserver-deployment-57ccb67bb8 8f5d2da5-a1e0-49d5-9958-471008cd9a27 0xc004488007 0xc004488008}] [] [{kube-controller-manager Update v1 2022-09-07 07:53:30 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8f5d2da5-a1e0-49d5-9958-471008cd9a27\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2022-09-07 07:53:34 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"172.20.97.93\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-hwvmw,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{
Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-hwvmw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:172.31.51.97,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:ni
l,OS:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-09-07 07:53:30 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-09-07 07:53:30 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-09-07 07:53:30 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-09-07 07:53:30 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.31.51.97,PodIP:172.20.97.93,StartTime:2022-09-07 07:53:30 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ErrImagePull,Message:rpc error: code = Unknown desc = failed to pull and unpack image "docker.io/library/webserver:404": failed to resolve reference "docker.io/library/webserver:404": pull access denied, repository does not exist or may require authorization: server message: insufficient_scope: authorization failed,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:172.20.97.93,},},EphemeralContainerStatuses:[]ContainerStatus{},},} +Sep 7 07:53:35.057: INFO: Pod "webserver-deployment-57ccb67bb8-rcgbz" is not available: +&Pod{ObjectMeta:{webserver-deployment-57ccb67bb8-rcgbz webserver-deployment-57ccb67bb8- deployment-7333 c23d2a09-bf3b-45e9-b686-6aa9532ab033 8529 0 2022-09-07 07:53:30 +0000 UTC map[name:httpd 
pod-template-hash:57ccb67bb8] map[] [{apps/v1 ReplicaSet webserver-deployment-57ccb67bb8 8f5d2da5-a1e0-49d5-9958-471008cd9a27 0xc004488300 0xc004488301}] [] [{kube-controller-manager Update v1 2022-09-07 07:53:30 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8f5d2da5-a1e0-49d5-9958-471008cd9a27\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2022-09-07 07:53:30 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}} 
status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-tp9m9,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-tp9m9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOpt
ions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:172.31.51.96,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-09-07 07:53:30 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-09-07 07:53:30 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-09-07 07:53:30 +0000 
UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-09-07 07:53:30 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.31.51.96,PodIP:,StartTime:2022-09-07 07:53:30 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} +Sep 7 07:53:35.057: INFO: Pod "webserver-deployment-57ccb67bb8-wr5pq" is not available: +&Pod{ObjectMeta:{webserver-deployment-57ccb67bb8-wr5pq webserver-deployment-57ccb67bb8- deployment-7333 1af4de8a-3312-4e6f-804c-ed0ab0931c52 8650 0 2022-09-07 07:53:32 +0000 UTC map[name:httpd pod-template-hash:57ccb67bb8] map[] [{apps/v1 ReplicaSet webserver-deployment-57ccb67bb8 8f5d2da5-a1e0-49d5-9958-471008cd9a27 0xc0044885a7 0xc0044885a8}] [] [{kube-controller-manager Update v1 2022-09-07 07:53:32 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8f5d2da5-a1e0-49d5-9958-471008cd9a27\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2022-09-07 07:53:34 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}} status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-7sjjn,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeM
ount{Name:kube-api-access-7sjjn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:172.31.51.96,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{T
ype:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-09-07 07:53:32 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-09-07 07:53:32 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-09-07 07:53:32 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-09-07 07:53:32 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.31.51.96,PodIP:,StartTime:2022-09-07 07:53:32 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} +Sep 7 07:53:35.057: INFO: Pod "webserver-deployment-57ccb67bb8-ww6zm" is not available: +&Pod{ObjectMeta:{webserver-deployment-57ccb67bb8-ww6zm webserver-deployment-57ccb67bb8- deployment-7333 4ec5c5ef-8123-42a4-93fb-f83e8f0a8e13 8667 0 2022-09-07 07:53:32 +0000 UTC map[name:httpd pod-template-hash:57ccb67bb8] map[] [{apps/v1 ReplicaSet webserver-deployment-57ccb67bb8 8f5d2da5-a1e0-49d5-9958-471008cd9a27 0xc004488847 0xc004488848}] [] [{kube-controller-manager Update v1 2022-09-07 07:53:32 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8f5d2da5-a1e0-49d5-9958-471008cd9a27\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2022-09-07 07:53:34 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}} 
status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-69pvb,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-69pvb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOpt
ions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:172.31.51.96,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-09-07 07:53:33 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-09-07 07:53:33 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-09-07 07:53:33 +0000 
UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-09-07 07:53:32 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.31.51.96,PodIP:,StartTime:2022-09-07 07:53:33 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} +[AfterEach] [sig-apps] Deployment + test/e2e/framework/framework.go:188 +Sep 7 07:53:35.057: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "deployment-7333" for this suite. 
+ +• [SLOW TEST:11.454 seconds] +[sig-apps] Deployment +test/e2e/apps/framework.go:23 + deployment should support proportional scaling [Conformance] + test/e2e/framework/framework.go:652 +------------------------------ +{"msg":"PASSED [sig-apps] Deployment deployment should support proportional scaling [Conformance]","total":356,"completed":69,"skipped":1139,"failed":0} +SSS +------------------------------ +[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + updates the published spec when one version gets renamed [Conformance] + test/e2e/framework/framework.go:652 +[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + test/e2e/framework/framework.go:187 +STEP: Creating a kubernetes client +Sep 7 07:53:35.115: INFO: >>> kubeConfig: /tmp/kubeconfig-1156948534 +STEP: Building a namespace api object, basename crd-publish-openapi +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[It] updates the published spec when one version gets renamed [Conformance] + test/e2e/framework/framework.go:652 +STEP: set up a multi version CRD +Sep 7 07:53:35.213: INFO: >>> kubeConfig: /tmp/kubeconfig-1156948534 +STEP: rename a version +STEP: check the new version name is served +STEP: check the old version name is removed +STEP: check the other version is not changed +[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + test/e2e/framework/framework.go:188 +Sep 7 07:54:10.844: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "crd-publish-openapi-183" for this suite. 
+ +• [SLOW TEST:35.751 seconds] +[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] +test/e2e/apimachinery/framework.go:23 + updates the published spec when one version gets renamed [Conformance] + test/e2e/framework/framework.go:652 +------------------------------ +{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] updates the published spec when one version gets renamed [Conformance]","total":356,"completed":70,"skipped":1142,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-network] Networking Granular Checks: Pods + should function for intra-pod communication: udp [NodeConformance] [Conformance] + test/e2e/framework/framework.go:652 +[BeforeEach] [sig-network] Networking + test/e2e/framework/framework.go:187 +STEP: Creating a kubernetes client +Sep 7 07:54:10.866: INFO: >>> kubeConfig: /tmp/kubeconfig-1156948534 +STEP: Building a namespace api object, basename pod-network-test +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[It] should function for intra-pod communication: udp [NodeConformance] [Conformance] + test/e2e/framework/framework.go:652 +STEP: Performing setup for networking test in namespace pod-network-test-6535 +STEP: creating a selector +STEP: Creating the service pods in kubernetes +Sep 7 07:54:10.931: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable +Sep 7 07:54:10.991: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) +Sep 7 07:54:13.011: INFO: The status of Pod netserver-0 is Running (Ready = false) +Sep 7 07:54:15.005: INFO: The status of Pod netserver-0 is Running (Ready = false) +Sep 7 07:54:17.003: INFO: The status of Pod netserver-0 is Running (Ready = false) +Sep 7 07:54:19.019: INFO: The status of Pod netserver-0 is Running (Ready = false) +Sep 7 07:54:21.003: INFO: The status of Pod 
netserver-0 is Running (Ready = false) +Sep 7 07:54:22.999: INFO: The status of Pod netserver-0 is Running (Ready = false) +Sep 7 07:54:24.996: INFO: The status of Pod netserver-0 is Running (Ready = false) +Sep 7 07:54:27.010: INFO: The status of Pod netserver-0 is Running (Ready = false) +Sep 7 07:54:29.008: INFO: The status of Pod netserver-0 is Running (Ready = false) +Sep 7 07:54:31.001: INFO: The status of Pod netserver-0 is Running (Ready = false) +Sep 7 07:54:33.002: INFO: The status of Pod netserver-0 is Running (Ready = true) +Sep 7 07:54:33.008: INFO: The status of Pod netserver-1 is Running (Ready = true) +STEP: Creating test pods +Sep 7 07:54:35.082: INFO: Setting MaxTries for pod polling to 34 for networking test based on endpoint count 2 +Sep 7 07:54:35.082: INFO: Breadth first check of 172.20.75.28 on host 172.31.51.96... +Sep 7 07:54:35.100: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://172.20.75.31:9080/dial?request=hostname&protocol=udp&host=172.20.75.28&port=8081&tries=1'] Namespace:pod-network-test-6535 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} +Sep 7 07:54:35.100: INFO: >>> kubeConfig: /tmp/kubeconfig-1156948534 +Sep 7 07:54:35.101: INFO: ExecWithOptions: Clientset creation +Sep 7 07:54:35.108: INFO: ExecWithOptions: execute(POST https://10.68.0.1:443/api/v1/namespaces/pod-network-test-6535/pods/test-container-pod/exec?command=%2Fbin%2Fsh&command=-c&command=curl+-g+-q+-s+%27http%3A%2F%2F172.20.75.31%3A9080%2Fdial%3Frequest%3Dhostname%26protocol%3Dudp%26host%3D172.20.75.28%26port%3D8081%26tries%3D1%27&container=webserver&container=webserver&stderr=true&stdout=true) +Sep 7 07:54:35.190: INFO: Waiting for responses: map[] +Sep 7 07:54:35.190: INFO: reached 172.20.75.28 after 0/1 tries +Sep 7 07:54:35.190: INFO: Breadth first check of 172.20.97.104 on host 172.31.51.97... 
+Sep 7 07:54:35.196: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://172.20.75.31:9080/dial?request=hostname&protocol=udp&host=172.20.97.104&port=8081&tries=1'] Namespace:pod-network-test-6535 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} +Sep 7 07:54:35.196: INFO: >>> kubeConfig: /tmp/kubeconfig-1156948534 +Sep 7 07:54:35.197: INFO: ExecWithOptions: Clientset creation +Sep 7 07:54:35.197: INFO: ExecWithOptions: execute(POST https://10.68.0.1:443/api/v1/namespaces/pod-network-test-6535/pods/test-container-pod/exec?command=%2Fbin%2Fsh&command=-c&command=curl+-g+-q+-s+%27http%3A%2F%2F172.20.75.31%3A9080%2Fdial%3Frequest%3Dhostname%26protocol%3Dudp%26host%3D172.20.97.104%26port%3D8081%26tries%3D1%27&container=webserver&container=webserver&stderr=true&stdout=true) +Sep 7 07:54:35.274: INFO: Waiting for responses: map[] +Sep 7 07:54:35.274: INFO: reached 172.20.97.104 after 0/1 tries +Sep 7 07:54:35.274: INFO: Going to retry 0 out of 2 pods.... +[AfterEach] [sig-network] Networking + test/e2e/framework/framework.go:188 +Sep 7 07:54:35.274: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "pod-network-test-6535" for this suite. 
+ +• [SLOW TEST:24.420 seconds] +[sig-network] Networking +test/e2e/common/network/framework.go:23 + Granular Checks: Pods + test/e2e/common/network/networking.go:32 + should function for intra-pod communication: udp [NodeConformance] [Conformance] + test/e2e/framework/framework.go:652 +------------------------------ +{"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [NodeConformance] [Conformance]","total":356,"completed":71,"skipped":1169,"failed":0} +SS +------------------------------ +[sig-node] RuntimeClass + should reject a Pod requesting a non-existent RuntimeClass [NodeConformance] [Conformance] + test/e2e/framework/framework.go:652 +[BeforeEach] [sig-node] RuntimeClass + test/e2e/framework/framework.go:187 +STEP: Creating a kubernetes client +Sep 7 07:54:35.287: INFO: >>> kubeConfig: /tmp/kubeconfig-1156948534 +STEP: Building a namespace api object, basename runtimeclass +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[It] should reject a Pod requesting a non-existent RuntimeClass [NodeConformance] [Conformance] + test/e2e/framework/framework.go:652 +[AfterEach] [sig-node] RuntimeClass + test/e2e/framework/framework.go:188 +Sep 7 07:54:35.337: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "runtimeclass-3932" for this suite. 
+•{"msg":"PASSED [sig-node] RuntimeClass should reject a Pod requesting a non-existent RuntimeClass [NodeConformance] [Conformance]","total":356,"completed":72,"skipped":1171,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-storage] EmptyDir wrapper volumes + should not conflict [Conformance] + test/e2e/framework/framework.go:652 +[BeforeEach] [sig-storage] EmptyDir wrapper volumes + test/e2e/framework/framework.go:187 +STEP: Creating a kubernetes client +Sep 7 07:54:35.348: INFO: >>> kubeConfig: /tmp/kubeconfig-1156948534 +STEP: Building a namespace api object, basename emptydir-wrapper +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[It] should not conflict [Conformance] + test/e2e/framework/framework.go:652 +Sep 7 07:54:35.407: INFO: The status of Pod pod-secrets-ef44cb46-dd62-4d1c-b43c-87bffe6223a5 is Pending, waiting for it to be Running (with Ready = true) +Sep 7 07:54:37.418: INFO: The status of Pod pod-secrets-ef44cb46-dd62-4d1c-b43c-87bffe6223a5 is Running (Ready = true) +STEP: Cleaning up the secret +STEP: Cleaning up the configmap +STEP: Cleaning up the pod +[AfterEach] [sig-storage] EmptyDir wrapper volumes + test/e2e/framework/framework.go:188 +Sep 7 07:54:37.448: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "emptydir-wrapper-5341" for this suite. 
+•{"msg":"PASSED [sig-storage] EmptyDir wrapper volumes should not conflict [Conformance]","total":356,"completed":73,"skipped":1286,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + should mutate configmap [Conformance] + test/e2e/framework/framework.go:652 +[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/framework/framework.go:187 +STEP: Creating a kubernetes client +Sep 7 07:54:37.457: INFO: >>> kubeConfig: /tmp/kubeconfig-1156948534 +STEP: Building a namespace api object, basename webhook +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/apimachinery/webhook.go:89 +STEP: Setting up server cert +STEP: Create role binding to let webhook read extension-apiserver-authentication +STEP: Deploying the webhook pod +STEP: Wait for the deployment to be ready +Sep 7 07:54:37.994: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set +STEP: Deploying the webhook service +STEP: Verifying the service has paired with the endpoint +Sep 7 07:54:41.024: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 +[It] should mutate configmap [Conformance] + test/e2e/framework/framework.go:652 +STEP: Registering the mutating configmap webhook via the AdmissionRegistration API +STEP: create a configmap that should be updated by the webhook +[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/framework/framework.go:188 +Sep 7 07:54:41.100: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "webhook-9505" for this suite. +STEP: Destroying namespace "webhook-9505-markers" for this suite. 
+[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/apimachinery/webhook.go:104 +•{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate configmap [Conformance]","total":356,"completed":74,"skipped":1330,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-auth] Certificates API [Privileged:ClusterAdmin] + should support CSR API operations [Conformance] + test/e2e/framework/framework.go:652 +[BeforeEach] [sig-auth] Certificates API [Privileged:ClusterAdmin] + test/e2e/framework/framework.go:187 +STEP: Creating a kubernetes client +Sep 7 07:54:41.359: INFO: >>> kubeConfig: /tmp/kubeconfig-1156948534 +STEP: Building a namespace api object, basename certificates +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[It] should support CSR API operations [Conformance] + test/e2e/framework/framework.go:652 +STEP: getting /apis +STEP: getting /apis/certificates.k8s.io +STEP: getting /apis/certificates.k8s.io/v1 +STEP: creating +STEP: getting +STEP: listing +STEP: watching +Sep 7 07:54:44.245: INFO: starting watch +STEP: patching +STEP: updating +Sep 7 07:54:44.295: INFO: waiting for watch events with expected annotations +Sep 7 07:54:44.295: INFO: saw patched and updated annotations +STEP: getting /approval +STEP: patching /approval +STEP: updating /approval +STEP: getting /status +STEP: patching /status +STEP: updating /status +STEP: deleting +STEP: deleting a collection +[AfterEach] [sig-auth] Certificates API [Privileged:ClusterAdmin] + test/e2e/framework/framework.go:188 +Sep 7 07:54:44.361: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "certificates-3787" for this suite. 
+•{"msg":"PASSED [sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","total":356,"completed":75,"skipped":1399,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-api-machinery] Watchers + should observe an object deletion if it stops meeting the requirements of the selector [Conformance] + test/e2e/framework/framework.go:652 +[BeforeEach] [sig-api-machinery] Watchers + test/e2e/framework/framework.go:187 +STEP: Creating a kubernetes client +Sep 7 07:54:44.371: INFO: >>> kubeConfig: /tmp/kubeconfig-1156948534 +STEP: Building a namespace api object, basename watch +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[It] should observe an object deletion if it stops meeting the requirements of the selector [Conformance] + test/e2e/framework/framework.go:652 +STEP: creating a watch on configmaps with a certain label +STEP: creating a new configmap +STEP: modifying the configmap once +STEP: changing the label value of the configmap +STEP: Expecting to observe a delete notification for the watched object +Sep 7 07:54:44.422: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-6072 a98435df-70a6-43eb-af0b-40168c5e58e0 9390 0 2022-09-07 07:54:44 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2022-09-07 07:54:44 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} +Sep 7 07:54:44.422: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-6072 a98435df-70a6-43eb-af0b-40168c5e58e0 9391 0 2022-09-07 07:54:44 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2022-09-07 07:54:44 +0000 UTC FieldsV1 
{"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} +Sep 7 07:54:44.422: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-6072 a98435df-70a6-43eb-af0b-40168c5e58e0 9392 0 2022-09-07 07:54:44 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2022-09-07 07:54:44 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} +STEP: modifying the configmap a second time +STEP: Expecting not to observe a notification because the object no longer meets the selector's requirements +STEP: changing the label value of the configmap back +STEP: modifying the configmap a third time +STEP: deleting the configmap +STEP: Expecting to observe an add notification for the watched object when the label value was restored +Sep 7 07:54:54.462: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-6072 a98435df-70a6-43eb-af0b-40168c5e58e0 9436 0 2022-09-07 07:54:44 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2022-09-07 07:54:44 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} +Sep 7 07:54:54.462: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-6072 a98435df-70a6-43eb-af0b-40168c5e58e0 9437 0 2022-09-07 07:54:44 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2022-09-07 07:54:44 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{mutation: 
3,},BinaryData:map[string][]byte{},Immutable:nil,} +Sep 7 07:54:54.462: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-6072 a98435df-70a6-43eb-af0b-40168c5e58e0 9438 0 2022-09-07 07:54:44 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2022-09-07 07:54:44 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},Immutable:nil,} +[AfterEach] [sig-api-machinery] Watchers + test/e2e/framework/framework.go:188 +Sep 7 07:54:54.462: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "watch-6072" for this suite. + +• [SLOW TEST:10.100 seconds] +[sig-api-machinery] Watchers +test/e2e/apimachinery/framework.go:23 + should observe an object deletion if it stops meeting the requirements of the selector [Conformance] + test/e2e/framework/framework.go:652 +------------------------------ +{"msg":"PASSED [sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance]","total":356,"completed":76,"skipped":1463,"failed":0} +SSSS +------------------------------ +[sig-api-machinery] Garbage collector + should not be blocked by dependency circle [Conformance] + test/e2e/framework/framework.go:652 +[BeforeEach] [sig-api-machinery] Garbage collector + test/e2e/framework/framework.go:187 +STEP: Creating a kubernetes client +Sep 7 07:54:54.472: INFO: >>> kubeConfig: /tmp/kubeconfig-1156948534 +STEP: Building a namespace api object, basename gc +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[It] should not be blocked by dependency circle [Conformance] + test/e2e/framework/framework.go:652 +Sep 7 07:54:54.578: INFO: 
pod1.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod3", UID:"83169aa1-c276-44c6-92da-4f4bc27e7f37", Controller:(*bool)(0xc000f57026), BlockOwnerDeletion:(*bool)(0xc000f57027)}} +Sep 7 07:54:54.585: INFO: pod2.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod1", UID:"2b8e7474-cef4-44eb-a647-f04ff245d5d5", Controller:(*bool)(0xc003fb3fae), BlockOwnerDeletion:(*bool)(0xc003fb3faf)}} +Sep 7 07:54:54.597: INFO: pod3.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod2", UID:"c550e1ba-e3de-4480-812c-54b509b512a6", Controller:(*bool)(0xc0036301e6), BlockOwnerDeletion:(*bool)(0xc0036301e7)}} +[AfterEach] [sig-api-machinery] Garbage collector + test/e2e/framework/framework.go:188 +Sep 7 07:54:59.630: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "gc-8423" for this suite. + +• [SLOW TEST:5.188 seconds] +[sig-api-machinery] Garbage collector +test/e2e/apimachinery/framework.go:23 + should not be blocked by dependency circle [Conformance] + test/e2e/framework/framework.go:652 +------------------------------ +{"msg":"PASSED [sig-api-machinery] Garbage collector should not be blocked by dependency circle [Conformance]","total":356,"completed":77,"skipped":1467,"failed":0} +SSSSSSSS +------------------------------ +[sig-node] Probing container + should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance] + test/e2e/framework/framework.go:652 +[BeforeEach] [sig-node] Probing container + test/e2e/framework/framework.go:187 +STEP: Creating a kubernetes client +Sep 7 07:54:59.660: INFO: >>> kubeConfig: /tmp/kubeconfig-1156948534 +STEP: Building a namespace api object, basename container-probe +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[BeforeEach] 
[sig-node] Probing container + test/e2e/common/node/container_probe.go:61 +[It] should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance] + test/e2e/framework/framework.go:652 +STEP: Creating pod liveness-236ad67f-4c14-4408-8d40-aa46842c33d6 in namespace container-probe-9317 +Sep 7 07:55:01.722: INFO: Started pod liveness-236ad67f-4c14-4408-8d40-aa46842c33d6 in namespace container-probe-9317 +STEP: checking the pod's current state and verifying that restartCount is present +Sep 7 07:55:01.725: INFO: Initial restart count of pod liveness-236ad67f-4c14-4408-8d40-aa46842c33d6 is 0 +STEP: deleting the pod +[AfterEach] [sig-node] Probing container + test/e2e/framework/framework.go:188 +Sep 7 07:59:03.078: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "container-probe-9317" for this suite. + +• [SLOW TEST:243.453 seconds] +[sig-node] Probing container +test/e2e/common/node/framework.go:23 + should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance] + test/e2e/framework/framework.go:652 +------------------------------ +{"msg":"PASSED [sig-node] Probing container should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance]","total":356,"completed":78,"skipped":1475,"failed":0} +SSSSSSSS +------------------------------ +[sig-node] Security Context When creating a pod with readOnlyRootFilesystem + should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance] + test/e2e/framework/framework.go:652 +[BeforeEach] [sig-node] Security Context + test/e2e/framework/framework.go:187 +STEP: Creating a kubernetes client +Sep 7 07:59:03.113: INFO: >>> kubeConfig: /tmp/kubeconfig-1156948534 +STEP: Building a namespace api object, basename security-context-test +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 
+[BeforeEach] [sig-node] Security Context + test/e2e/common/node/security_context.go:48 +[It] should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance] + test/e2e/framework/framework.go:652 +Sep 7 07:59:03.174: INFO: Waiting up to 5m0s for pod "busybox-readonly-false-bd035465-7d12-43bf-bc17-2c58cdea2815" in namespace "security-context-test-6614" to be "Succeeded or Failed" +Sep 7 07:59:03.197: INFO: Pod "busybox-readonly-false-bd035465-7d12-43bf-bc17-2c58cdea2815": Phase="Pending", Reason="", readiness=false. Elapsed: 22.858442ms +Sep 7 07:59:05.218: INFO: Pod "busybox-readonly-false-bd035465-7d12-43bf-bc17-2c58cdea2815": Phase="Pending", Reason="", readiness=false. Elapsed: 2.043944779s +Sep 7 07:59:07.240: INFO: Pod "busybox-readonly-false-bd035465-7d12-43bf-bc17-2c58cdea2815": Phase="Pending", Reason="", readiness=false. Elapsed: 4.066031131s +Sep 7 07:59:09.256: INFO: Pod "busybox-readonly-false-bd035465-7d12-43bf-bc17-2c58cdea2815": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.081926816s +Sep 7 07:59:09.256: INFO: Pod "busybox-readonly-false-bd035465-7d12-43bf-bc17-2c58cdea2815" satisfied condition "Succeeded or Failed" +[AfterEach] [sig-node] Security Context + test/e2e/framework/framework.go:188 +Sep 7 07:59:09.256: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "security-context-test-6614" for this suite. 
+ +• [SLOW TEST:6.183 seconds] +[sig-node] Security Context +test/e2e/common/node/framework.go:23 + When creating a pod with readOnlyRootFilesystem + test/e2e/common/node/security_context.go:173 + should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance] + test/e2e/framework/framework.go:652 +------------------------------ +{"msg":"PASSED [sig-node] Security Context When creating a pod with readOnlyRootFilesystem should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]","total":356,"completed":79,"skipped":1483,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-node] Pods + should be updated [NodeConformance] [Conformance] + test/e2e/framework/framework.go:652 +[BeforeEach] [sig-node] Pods + test/e2e/framework/framework.go:187 +STEP: Creating a kubernetes client +Sep 7 07:59:09.297: INFO: >>> kubeConfig: /tmp/kubeconfig-1156948534 +STEP: Building a namespace api object, basename pods +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[BeforeEach] [sig-node] Pods + test/e2e/common/node/pods.go:191 +[It] should be updated [NodeConformance] [Conformance] + test/e2e/framework/framework.go:652 +STEP: creating the pod +STEP: submitting the pod to kubernetes +Sep 7 07:59:09.441: INFO: The status of Pod pod-update-eecf737f-9575-4995-8bf8-2aa5c57b2e31 is Pending, waiting for it to be Running (with Ready = true) +Sep 7 07:59:11.448: INFO: The status of Pod pod-update-eecf737f-9575-4995-8bf8-2aa5c57b2e31 is Running (Ready = true) +STEP: verifying the pod is in kubernetes +STEP: updating the pod +Sep 7 07:59:11.968: INFO: Successfully updated pod "pod-update-eecf737f-9575-4995-8bf8-2aa5c57b2e31" +STEP: verifying the updated pod is in kubernetes +Sep 7 07:59:11.999: INFO: Pod update OK +[AfterEach] [sig-node] Pods + 
test/e2e/framework/framework.go:188 +Sep 7 07:59:11.999: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "pods-7412" for this suite. +•{"msg":"PASSED [sig-node] Pods should be updated [NodeConformance] [Conformance]","total":356,"completed":80,"skipped":1533,"failed":0} +SSSSSSSSSS +------------------------------ +[sig-storage] Downward API volume + should provide container's memory request [NodeConformance] [Conformance] + test/e2e/framework/framework.go:652 +[BeforeEach] [sig-storage] Downward API volume + test/e2e/framework/framework.go:187 +STEP: Creating a kubernetes client +Sep 7 07:59:12.017: INFO: >>> kubeConfig: /tmp/kubeconfig-1156948534 +STEP: Building a namespace api object, basename downward-api +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[BeforeEach] [sig-storage] Downward API volume + test/e2e/common/storage/downwardapi_volume.go:43 +[It] should provide container's memory request [NodeConformance] [Conformance] + test/e2e/framework/framework.go:652 +STEP: Creating a pod to test downward API volume plugin +Sep 7 07:59:12.088: INFO: Waiting up to 5m0s for pod "downwardapi-volume-527d13cf-7be8-4047-994f-f21457986fda" in namespace "downward-api-780" to be "Succeeded or Failed" +Sep 7 07:59:12.094: INFO: Pod "downwardapi-volume-527d13cf-7be8-4047-994f-f21457986fda": Phase="Pending", Reason="", readiness=false. Elapsed: 6.002545ms +Sep 7 07:59:14.137: INFO: Pod "downwardapi-volume-527d13cf-7be8-4047-994f-f21457986fda": Phase="Pending", Reason="", readiness=false. Elapsed: 2.048965881s +Sep 7 07:59:16.141: INFO: Pod "downwardapi-volume-527d13cf-7be8-4047-994f-f21457986fda": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.052717126s +STEP: Saw pod success +Sep 7 07:59:16.141: INFO: Pod "downwardapi-volume-527d13cf-7be8-4047-994f-f21457986fda" satisfied condition "Succeeded or Failed" +Sep 7 07:59:16.145: INFO: Trying to get logs from node 172.31.51.96 pod downwardapi-volume-527d13cf-7be8-4047-994f-f21457986fda container client-container: +STEP: delete the pod +Sep 7 07:59:16.173: INFO: Waiting for pod downwardapi-volume-527d13cf-7be8-4047-994f-f21457986fda to disappear +Sep 7 07:59:16.178: INFO: Pod downwardapi-volume-527d13cf-7be8-4047-994f-f21457986fda no longer exists +[AfterEach] [sig-storage] Downward API volume + test/e2e/framework/framework.go:188 +Sep 7 07:59:16.178: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "downward-api-780" for this suite. +•{"msg":"PASSED [sig-storage] Downward API volume should provide container's memory request [NodeConformance] [Conformance]","total":356,"completed":81,"skipped":1543,"failed":0} +S +------------------------------ +[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + works for CRD without validation schema [Conformance] + test/e2e/framework/framework.go:652 +[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + test/e2e/framework/framework.go:187 +STEP: Creating a kubernetes client +Sep 7 07:59:16.186: INFO: >>> kubeConfig: /tmp/kubeconfig-1156948534 +STEP: Building a namespace api object, basename crd-publish-openapi +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[It] works for CRD without validation schema [Conformance] + test/e2e/framework/framework.go:652 +Sep 7 07:59:16.218: INFO: >>> kubeConfig: /tmp/kubeconfig-1156948534 +STEP: kubectl validation (kubectl create and apply) allows request with any unknown properties +Sep 7 07:59:20.162: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1156948534 
--namespace=crd-publish-openapi-1988 --namespace=crd-publish-openapi-1988 create -f -' +Sep 7 07:59:21.489: INFO: stderr: "" +Sep 7 07:59:21.489: INFO: stdout: "e2e-test-crd-publish-openapi-6261-crd.crd-publish-openapi-test-empty.example.com/test-cr created\n" +Sep 7 07:59:21.490: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1156948534 --namespace=crd-publish-openapi-1988 --namespace=crd-publish-openapi-1988 delete e2e-test-crd-publish-openapi-6261-crds test-cr' +Sep 7 07:59:21.599: INFO: stderr: "" +Sep 7 07:59:21.599: INFO: stdout: "e2e-test-crd-publish-openapi-6261-crd.crd-publish-openapi-test-empty.example.com \"test-cr\" deleted\n" +Sep 7 07:59:21.599: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1156948534 --namespace=crd-publish-openapi-1988 --namespace=crd-publish-openapi-1988 apply -f -' +Sep 7 07:59:21.857: INFO: stderr: "" +Sep 7 07:59:21.857: INFO: stdout: "e2e-test-crd-publish-openapi-6261-crd.crd-publish-openapi-test-empty.example.com/test-cr created\n" +Sep 7 07:59:21.857: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1156948534 --namespace=crd-publish-openapi-1988 --namespace=crd-publish-openapi-1988 delete e2e-test-crd-publish-openapi-6261-crds test-cr' +Sep 7 07:59:21.977: INFO: stderr: "" +Sep 7 07:59:21.977: INFO: stdout: "e2e-test-crd-publish-openapi-6261-crd.crd-publish-openapi-test-empty.example.com \"test-cr\" deleted\n" +STEP: kubectl explain works to explain CR without validation schema +Sep 7 07:59:21.977: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1156948534 --namespace=crd-publish-openapi-1988 explain e2e-test-crd-publish-openapi-6261-crds' +Sep 7 07:59:22.222: INFO: stderr: "" +Sep 7 07:59:22.222: INFO: stdout: "KIND: e2e-test-crd-publish-openapi-6261-crd\nVERSION: crd-publish-openapi-test-empty.example.com/v1\n\nDESCRIPTION:\n \n" +[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + 
test/e2e/framework/framework.go:188 +Sep 7 07:59:25.324: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "crd-publish-openapi-1988" for this suite. + +• [SLOW TEST:9.158 seconds] +[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] +test/e2e/apimachinery/framework.go:23 + works for CRD without validation schema [Conformance] + test/e2e/framework/framework.go:652 +------------------------------ +{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD without validation schema [Conformance]","total":356,"completed":82,"skipped":1544,"failed":0} +S +------------------------------ +[sig-node] Security Context + should support pod.Spec.SecurityContext.RunAsUser And pod.Spec.SecurityContext.RunAsGroup [LinuxOnly] [Conformance] + test/e2e/framework/framework.go:652 +[BeforeEach] [sig-node] Security Context + test/e2e/framework/framework.go:187 +STEP: Creating a kubernetes client +Sep 7 07:59:25.344: INFO: >>> kubeConfig: /tmp/kubeconfig-1156948534 +STEP: Building a namespace api object, basename security-context +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[It] should support pod.Spec.SecurityContext.RunAsUser And pod.Spec.SecurityContext.RunAsGroup [LinuxOnly] [Conformance] + test/e2e/framework/framework.go:652 +STEP: Creating a pod to test pod.Spec.SecurityContext.RunAsUser +Sep 7 07:59:25.386: INFO: Waiting up to 5m0s for pod "security-context-3a00f412-7f4a-4ee3-99ab-002c4260e9ce" in namespace "security-context-5868" to be "Succeeded or Failed" +Sep 7 07:59:25.391: INFO: Pod "security-context-3a00f412-7f4a-4ee3-99ab-002c4260e9ce": Phase="Pending", Reason="", readiness=false. Elapsed: 5.15734ms +Sep 7 07:59:27.402: INFO: Pod "security-context-3a00f412-7f4a-4ee3-99ab-002c4260e9ce": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.015834454s +Sep 7 07:59:29.408: INFO: Pod "security-context-3a00f412-7f4a-4ee3-99ab-002c4260e9ce": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.022181373s +STEP: Saw pod success +Sep 7 07:59:29.408: INFO: Pod "security-context-3a00f412-7f4a-4ee3-99ab-002c4260e9ce" satisfied condition "Succeeded or Failed" +Sep 7 07:59:29.411: INFO: Trying to get logs from node 172.31.51.96 pod security-context-3a00f412-7f4a-4ee3-99ab-002c4260e9ce container test-container: +STEP: delete the pod +Sep 7 07:59:29.431: INFO: Waiting for pod security-context-3a00f412-7f4a-4ee3-99ab-002c4260e9ce to disappear +Sep 7 07:59:29.439: INFO: Pod security-context-3a00f412-7f4a-4ee3-99ab-002c4260e9ce no longer exists +[AfterEach] [sig-node] Security Context + test/e2e/framework/framework.go:188 +Sep 7 07:59:29.439: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "security-context-5868" for this suite. +•{"msg":"PASSED [sig-node] Security Context should support pod.Spec.SecurityContext.RunAsUser And pod.Spec.SecurityContext.RunAsGroup [LinuxOnly] [Conformance]","total":356,"completed":83,"skipped":1545,"failed":0} +SSSSSS +------------------------------ +[sig-storage] Downward API volume + should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/framework/framework.go:652 +[BeforeEach] [sig-storage] Downward API volume + test/e2e/framework/framework.go:187 +STEP: Creating a kubernetes client +Sep 7 07:59:29.448: INFO: >>> kubeConfig: /tmp/kubeconfig-1156948534 +STEP: Building a namespace api object, basename downward-api +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[BeforeEach] [sig-storage] Downward API volume + test/e2e/common/storage/downwardapi_volume.go:43 +[It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/framework/framework.go:652 +STEP: Creating a pod to test 
downward API volume plugin +Sep 7 07:59:29.517: INFO: Waiting up to 5m0s for pod "downwardapi-volume-d73455ff-ffa1-4ad7-a725-4862f3620dce" in namespace "downward-api-9156" to be "Succeeded or Failed" +Sep 7 07:59:29.521: INFO: Pod "downwardapi-volume-d73455ff-ffa1-4ad7-a725-4862f3620dce": Phase="Pending", Reason="", readiness=false. Elapsed: 3.751593ms +Sep 7 07:59:31.530: INFO: Pod "downwardapi-volume-d73455ff-ffa1-4ad7-a725-4862f3620dce": Phase="Pending", Reason="", readiness=false. Elapsed: 2.012902386s +Sep 7 07:59:33.535: INFO: Pod "downwardapi-volume-d73455ff-ffa1-4ad7-a725-4862f3620dce": Phase="Pending", Reason="", readiness=false. Elapsed: 4.017421391s +Sep 7 07:59:35.549: INFO: Pod "downwardapi-volume-d73455ff-ffa1-4ad7-a725-4862f3620dce": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.031196167s +STEP: Saw pod success +Sep 7 07:59:35.549: INFO: Pod "downwardapi-volume-d73455ff-ffa1-4ad7-a725-4862f3620dce" satisfied condition "Succeeded or Failed" +Sep 7 07:59:35.551: INFO: Trying to get logs from node 172.31.51.96 pod downwardapi-volume-d73455ff-ffa1-4ad7-a725-4862f3620dce container client-container: +STEP: delete the pod +Sep 7 07:59:35.586: INFO: Waiting for pod downwardapi-volume-d73455ff-ffa1-4ad7-a725-4862f3620dce to disappear +Sep 7 07:59:35.601: INFO: Pod downwardapi-volume-d73455ff-ffa1-4ad7-a725-4862f3620dce no longer exists +[AfterEach] [sig-storage] Downward API volume + test/e2e/framework/framework.go:188 +Sep 7 07:59:35.601: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "downward-api-9156" for this suite. 
+ +• [SLOW TEST:6.159 seconds] +[sig-storage] Downward API volume +test/e2e/common/storage/framework.go:23 + should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/framework/framework.go:652 +------------------------------ +{"msg":"PASSED [sig-storage] Downward API volume should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":356,"completed":84,"skipped":1551,"failed":0} +SSSSSSSSSS +------------------------------ +[sig-storage] Projected downwardAPI + should update labels on modification [NodeConformance] [Conformance] + test/e2e/framework/framework.go:652 +[BeforeEach] [sig-storage] Projected downwardAPI + test/e2e/framework/framework.go:187 +STEP: Creating a kubernetes client +Sep 7 07:59:35.607: INFO: >>> kubeConfig: /tmp/kubeconfig-1156948534 +STEP: Building a namespace api object, basename projected +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[BeforeEach] [sig-storage] Projected downwardAPI + test/e2e/common/storage/projected_downwardapi.go:43 +[It] should update labels on modification [NodeConformance] [Conformance] + test/e2e/framework/framework.go:652 +STEP: Creating the pod +Sep 7 07:59:35.672: INFO: The status of Pod labelsupdatea1e3b5e1-6361-484f-8825-85215dc7b1de is Pending, waiting for it to be Running (with Ready = true) +Sep 7 07:59:37.697: INFO: The status of Pod labelsupdatea1e3b5e1-6361-484f-8825-85215dc7b1de is Running (Ready = true) +Sep 7 07:59:38.225: INFO: Successfully updated pod "labelsupdatea1e3b5e1-6361-484f-8825-85215dc7b1de" +[AfterEach] [sig-storage] Projected downwardAPI + test/e2e/framework/framework.go:188 +Sep 7 07:59:42.251: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "projected-347" for this suite. 
+ +• [SLOW TEST:6.656 seconds] +[sig-storage] Projected downwardAPI +test/e2e/common/storage/framework.go:23 + should update labels on modification [NodeConformance] [Conformance] + test/e2e/framework/framework.go:652 +------------------------------ +{"msg":"PASSED [sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance]","total":356,"completed":85,"skipped":1561,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-node] Probing container + should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] + test/e2e/framework/framework.go:652 +[BeforeEach] [sig-node] Probing container + test/e2e/framework/framework.go:187 +STEP: Creating a kubernetes client +Sep 7 07:59:42.264: INFO: >>> kubeConfig: /tmp/kubeconfig-1156948534 +STEP: Building a namespace api object, basename container-probe +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[BeforeEach] [sig-node] Probing container + test/e2e/common/node/container_probe.go:61 +[It] should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] + test/e2e/framework/framework.go:652 +STEP: Creating pod busybox-646a8666-ba38-43b6-bf19-909820222ce3 in namespace container-probe-2472 +Sep 7 07:59:44.339: INFO: Started pod busybox-646a8666-ba38-43b6-bf19-909820222ce3 in namespace container-probe-2472 +STEP: checking the pod's current state and verifying that restartCount is present +Sep 7 07:59:44.342: INFO: Initial restart count of pod busybox-646a8666-ba38-43b6-bf19-909820222ce3 is 0 +STEP: deleting the pod +[AfterEach] [sig-node] Probing container + test/e2e/framework/framework.go:188 +Sep 7 08:03:45.649: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "container-probe-2472" for this suite. 
+ +• [SLOW TEST:243.394 seconds] +[sig-node] Probing container +test/e2e/common/node/framework.go:23 + should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] + test/e2e/framework/framework.go:652 +------------------------------ +{"msg":"PASSED [sig-node] Probing container should *not* be restarted with a exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]","total":356,"completed":86,"skipped":1623,"failed":0} +SSSSSS +------------------------------ +[sig-node] Security Context + should support container.SecurityContext.RunAsUser And container.SecurityContext.RunAsGroup [LinuxOnly] [Conformance] + test/e2e/framework/framework.go:652 +[BeforeEach] [sig-node] Security Context + test/e2e/framework/framework.go:187 +STEP: Creating a kubernetes client +Sep 7 08:03:45.658: INFO: >>> kubeConfig: /tmp/kubeconfig-1156948534 +STEP: Building a namespace api object, basename security-context +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[It] should support container.SecurityContext.RunAsUser And container.SecurityContext.RunAsGroup [LinuxOnly] [Conformance] + test/e2e/framework/framework.go:652 +STEP: Creating a pod to test pod.Spec.SecurityContext.RunAsUser +Sep 7 08:03:45.738: INFO: Waiting up to 5m0s for pod "security-context-1b1c7d86-64de-4063-854d-ab01c21f087b" in namespace "security-context-9950" to be "Succeeded or Failed" +Sep 7 08:03:45.740: INFO: Pod "security-context-1b1c7d86-64de-4063-854d-ab01c21f087b": Phase="Pending", Reason="", readiness=false. Elapsed: 1.790838ms +Sep 7 08:03:47.750: INFO: Pod "security-context-1b1c7d86-64de-4063-854d-ab01c21f087b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.012372689s +Sep 7 08:03:49.757: INFO: Pod "security-context-1b1c7d86-64de-4063-854d-ab01c21f087b": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.019425159s +STEP: Saw pod success +Sep 7 08:03:49.757: INFO: Pod "security-context-1b1c7d86-64de-4063-854d-ab01c21f087b" satisfied condition "Succeeded or Failed" +Sep 7 08:03:49.760: INFO: Trying to get logs from node 172.31.51.96 pod security-context-1b1c7d86-64de-4063-854d-ab01c21f087b container test-container: +STEP: delete the pod +Sep 7 08:03:49.791: INFO: Waiting for pod security-context-1b1c7d86-64de-4063-854d-ab01c21f087b to disappear +Sep 7 08:03:49.794: INFO: Pod security-context-1b1c7d86-64de-4063-854d-ab01c21f087b no longer exists +[AfterEach] [sig-node] Security Context + test/e2e/framework/framework.go:188 +Sep 7 08:03:49.794: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "security-context-9950" for this suite. +•{"msg":"PASSED [sig-node] Security Context should support container.SecurityContext.RunAsUser And container.SecurityContext.RunAsGroup [LinuxOnly] [Conformance]","total":356,"completed":87,"skipped":1629,"failed":0} +SSSSSSSSSSSSSS +------------------------------ +[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + should be able to deny attaching pod [Conformance] + test/e2e/framework/framework.go:652 +[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/framework/framework.go:187 +STEP: Creating a kubernetes client +Sep 7 08:03:49.801: INFO: >>> kubeConfig: /tmp/kubeconfig-1156948534 +STEP: Building a namespace api object, basename webhook +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/apimachinery/webhook.go:89 +STEP: Setting up server cert +STEP: Create role binding to let webhook read extension-apiserver-authentication +STEP: Deploying the webhook pod +STEP: Wait for the deployment to be ready +Sep 7 08:03:50.436: INFO: deployment "sample-webhook-deployment" 
doesn't have the required revision set +Sep 7 08:03:52.450: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2022, time.September, 7, 8, 3, 50, 0, time.Local), LastTransitionTime:time.Date(2022, time.September, 7, 8, 3, 50, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2022, time.September, 7, 8, 3, 50, 0, time.Local), LastTransitionTime:time.Date(2022, time.September, 7, 8, 3, 50, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-68c7bd4684\" is progressing."}}, CollisionCount:(*int32)(nil)} +STEP: Deploying the webhook service +STEP: Verifying the service has paired with the endpoint +Sep 7 08:03:55.475: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 +[It] should be able to deny attaching pod [Conformance] + test/e2e/framework/framework.go:652 +STEP: Registering the webhook via the AdmissionRegistration API +STEP: create a pod +STEP: 'kubectl attach' the pod, should be denied by the webhook +Sep 7 08:03:57.531: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1156948534 --namespace=webhook-9048 attach --namespace=webhook-9048 to-be-attached-pod -i -c=container1' +Sep 7 08:03:57.638: INFO: rc: 1 +[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/framework/framework.go:188 +Sep 7 08:03:57.644: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "webhook-9048" for this suite. +STEP: Destroying namespace "webhook-9048-markers" for this suite. 
+[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/apimachinery/webhook.go:104 + +• [SLOW TEST:7.928 seconds] +[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] +test/e2e/apimachinery/framework.go:23 + should be able to deny attaching pod [Conformance] + test/e2e/framework/framework.go:652 +------------------------------ +{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","total":356,"completed":88,"skipped":1643,"failed":0} +S +------------------------------ +[sig-apps] Job + should create pods for an Indexed job with completion indexes and specified hostname [Conformance] + test/e2e/framework/framework.go:652 +[BeforeEach] [sig-apps] Job + test/e2e/framework/framework.go:187 +STEP: Creating a kubernetes client +Sep 7 08:03:57.729: INFO: >>> kubeConfig: /tmp/kubeconfig-1156948534 +STEP: Building a namespace api object, basename job +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[It] should create pods for an Indexed job with completion indexes and specified hostname [Conformance] + test/e2e/framework/framework.go:652 +STEP: Creating Indexed job +STEP: Ensuring job reaches completions +STEP: Ensuring pods with index for job exist +[AfterEach] [sig-apps] Job + test/e2e/framework/framework.go:188 +Sep 7 08:04:11.891: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "job-6997" for this suite. 
+ +• [SLOW TEST:14.181 seconds] +[sig-apps] Job +test/e2e/apps/framework.go:23 + should create pods for an Indexed job with completion indexes and specified hostname [Conformance] + test/e2e/framework/framework.go:652 +------------------------------ +{"msg":"PASSED [sig-apps] Job should create pods for an Indexed job with completion indexes and specified hostname [Conformance]","total":356,"completed":89,"skipped":1644,"failed":0} +[sig-storage] Downward API volume + should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] + test/e2e/framework/framework.go:652 +[BeforeEach] [sig-storage] Downward API volume + test/e2e/framework/framework.go:187 +STEP: Creating a kubernetes client +Sep 7 08:04:11.910: INFO: >>> kubeConfig: /tmp/kubeconfig-1156948534 +STEP: Building a namespace api object, basename downward-api +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[BeforeEach] [sig-storage] Downward API volume + test/e2e/common/storage/downwardapi_volume.go:43 +[It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] + test/e2e/framework/framework.go:652 +STEP: Creating a pod to test downward API volume plugin +Sep 7 08:04:11.964: INFO: Waiting up to 5m0s for pod "downwardapi-volume-1532359b-1bbe-489a-ac46-f6799e710449" in namespace "downward-api-7011" to be "Succeeded or Failed" +Sep 7 08:04:12.021: INFO: Pod "downwardapi-volume-1532359b-1bbe-489a-ac46-f6799e710449": Phase="Pending", Reason="", readiness=false. Elapsed: 57.029948ms +Sep 7 08:04:14.033: INFO: Pod "downwardapi-volume-1532359b-1bbe-489a-ac46-f6799e710449": Phase="Pending", Reason="", readiness=false. Elapsed: 2.069121346s +Sep 7 08:04:16.039: INFO: Pod "downwardapi-volume-1532359b-1bbe-489a-ac46-f6799e710449": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.074672906s +STEP: Saw pod success +Sep 7 08:04:16.039: INFO: Pod "downwardapi-volume-1532359b-1bbe-489a-ac46-f6799e710449" satisfied condition "Succeeded or Failed" +Sep 7 08:04:16.043: INFO: Trying to get logs from node 172.31.51.96 pod downwardapi-volume-1532359b-1bbe-489a-ac46-f6799e710449 container client-container: +STEP: delete the pod +Sep 7 08:04:16.065: INFO: Waiting for pod downwardapi-volume-1532359b-1bbe-489a-ac46-f6799e710449 to disappear +Sep 7 08:04:16.069: INFO: Pod downwardapi-volume-1532359b-1bbe-489a-ac46-f6799e710449 no longer exists +[AfterEach] [sig-storage] Downward API volume + test/e2e/framework/framework.go:188 +Sep 7 08:04:16.069: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "downward-api-7011" for this suite. +•{"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","total":356,"completed":90,"skipped":1644,"failed":0} +SSS +------------------------------ +[sig-storage] Projected downwardAPI + should provide podname only [NodeConformance] [Conformance] + test/e2e/framework/framework.go:652 +[BeforeEach] [sig-storage] Projected downwardAPI + test/e2e/framework/framework.go:187 +STEP: Creating a kubernetes client +Sep 7 08:04:16.078: INFO: >>> kubeConfig: /tmp/kubeconfig-1156948534 +STEP: Building a namespace api object, basename projected +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[BeforeEach] [sig-storage] Projected downwardAPI + test/e2e/common/storage/projected_downwardapi.go:43 +[It] should provide podname only [NodeConformance] [Conformance] + test/e2e/framework/framework.go:652 +STEP: Creating a pod to test downward API volume plugin +Sep 7 08:04:16.141: INFO: Waiting up to 5m0s for pod "downwardapi-volume-5a7424a1-a31c-4f2b-8778-c93ae936ca70" in namespace 
"projected-630" to be "Succeeded or Failed" +Sep 7 08:04:16.152: INFO: Pod "downwardapi-volume-5a7424a1-a31c-4f2b-8778-c93ae936ca70": Phase="Pending", Reason="", readiness=false. Elapsed: 11.531754ms +Sep 7 08:04:18.166: INFO: Pod "downwardapi-volume-5a7424a1-a31c-4f2b-8778-c93ae936ca70": Phase="Pending", Reason="", readiness=false. Elapsed: 2.025653526s +Sep 7 08:04:20.192: INFO: Pod "downwardapi-volume-5a7424a1-a31c-4f2b-8778-c93ae936ca70": Phase="Pending", Reason="", readiness=false. Elapsed: 4.051686862s +Sep 7 08:04:22.205: INFO: Pod "downwardapi-volume-5a7424a1-a31c-4f2b-8778-c93ae936ca70": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.064255626s +STEP: Saw pod success +Sep 7 08:04:22.205: INFO: Pod "downwardapi-volume-5a7424a1-a31c-4f2b-8778-c93ae936ca70" satisfied condition "Succeeded or Failed" +Sep 7 08:04:22.209: INFO: Trying to get logs from node 172.31.51.96 pod downwardapi-volume-5a7424a1-a31c-4f2b-8778-c93ae936ca70 container client-container: +STEP: delete the pod +Sep 7 08:04:22.236: INFO: Waiting for pod downwardapi-volume-5a7424a1-a31c-4f2b-8778-c93ae936ca70 to disappear +Sep 7 08:04:22.244: INFO: Pod downwardapi-volume-5a7424a1-a31c-4f2b-8778-c93ae936ca70 no longer exists +[AfterEach] [sig-storage] Projected downwardAPI + test/e2e/framework/framework.go:188 +Sep 7 08:04:22.244: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "projected-630" for this suite. 
+ +• [SLOW TEST:6.176 seconds] +[sig-storage] Projected downwardAPI +test/e2e/common/storage/framework.go:23 + should provide podname only [NodeConformance] [Conformance] + test/e2e/framework/framework.go:652 +------------------------------ +{"msg":"PASSED [sig-storage] Projected downwardAPI should provide podname only [NodeConformance] [Conformance]","total":356,"completed":91,"skipped":1647,"failed":0} +SS +------------------------------ +[sig-storage] Projected downwardAPI + should provide container's memory limit [NodeConformance] [Conformance] + test/e2e/framework/framework.go:652 +[BeforeEach] [sig-storage] Projected downwardAPI + test/e2e/framework/framework.go:187 +STEP: Creating a kubernetes client +Sep 7 08:04:22.254: INFO: >>> kubeConfig: /tmp/kubeconfig-1156948534 +STEP: Building a namespace api object, basename projected +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[BeforeEach] [sig-storage] Projected downwardAPI + test/e2e/common/storage/projected_downwardapi.go:43 +[It] should provide container's memory limit [NodeConformance] [Conformance] + test/e2e/framework/framework.go:652 +STEP: Creating a pod to test downward API volume plugin +Sep 7 08:04:22.304: INFO: Waiting up to 5m0s for pod "downwardapi-volume-be3f5125-ad7d-4601-b8f9-87b1802dcf86" in namespace "projected-5504" to be "Succeeded or Failed" +Sep 7 08:04:22.310: INFO: Pod "downwardapi-volume-be3f5125-ad7d-4601-b8f9-87b1802dcf86": Phase="Pending", Reason="", readiness=false. Elapsed: 6.322872ms +Sep 7 08:04:24.323: INFO: Pod "downwardapi-volume-be3f5125-ad7d-4601-b8f9-87b1802dcf86": Phase="Pending", Reason="", readiness=false. Elapsed: 2.018614054s +Sep 7 08:04:26.330: INFO: Pod "downwardapi-volume-be3f5125-ad7d-4601-b8f9-87b1802dcf86": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.026279281s +STEP: Saw pod success +Sep 7 08:04:26.330: INFO: Pod "downwardapi-volume-be3f5125-ad7d-4601-b8f9-87b1802dcf86" satisfied condition "Succeeded or Failed" +Sep 7 08:04:26.335: INFO: Trying to get logs from node 172.31.51.96 pod downwardapi-volume-be3f5125-ad7d-4601-b8f9-87b1802dcf86 container client-container: +STEP: delete the pod +Sep 7 08:04:26.360: INFO: Waiting for pod downwardapi-volume-be3f5125-ad7d-4601-b8f9-87b1802dcf86 to disappear +Sep 7 08:04:26.363: INFO: Pod downwardapi-volume-be3f5125-ad7d-4601-b8f9-87b1802dcf86 no longer exists +[AfterEach] [sig-storage] Projected downwardAPI + test/e2e/framework/framework.go:188 +Sep 7 08:04:26.363: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "projected-5504" for this suite. +•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance]","total":356,"completed":92,"skipped":1649,"failed":0} +SSSSSSSS +------------------------------ +[sig-node] Pods + should be submitted and removed [NodeConformance] [Conformance] + test/e2e/framework/framework.go:652 +[BeforeEach] [sig-node] Pods + test/e2e/framework/framework.go:187 +STEP: Creating a kubernetes client +Sep 7 08:04:26.374: INFO: >>> kubeConfig: /tmp/kubeconfig-1156948534 +STEP: Building a namespace api object, basename pods +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[BeforeEach] [sig-node] Pods + test/e2e/common/node/pods.go:191 +[It] should be submitted and removed [NodeConformance] [Conformance] + test/e2e/framework/framework.go:652 +STEP: creating the pod +STEP: setting up watch +STEP: submitting the pod to kubernetes +Sep 7 08:04:26.438: INFO: observed the pod list +STEP: verifying the pod is in kubernetes +STEP: verifying pod creation was observed +STEP: deleting the pod gracefully +STEP: verifying pod deletion was observed +[AfterEach] 
[sig-node] Pods + test/e2e/framework/framework.go:188 +Sep 7 08:04:30.975: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "pods-4223" for this suite. +•{"msg":"PASSED [sig-node] Pods should be submitted and removed [NodeConformance] [Conformance]","total":356,"completed":93,"skipped":1657,"failed":0} +SSSSSS +------------------------------ +[sig-node] Pods + should get a host IP [NodeConformance] [Conformance] + test/e2e/framework/framework.go:652 +[BeforeEach] [sig-node] Pods + test/e2e/framework/framework.go:187 +STEP: Creating a kubernetes client +Sep 7 08:04:30.994: INFO: >>> kubeConfig: /tmp/kubeconfig-1156948534 +STEP: Building a namespace api object, basename pods +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[BeforeEach] [sig-node] Pods + test/e2e/common/node/pods.go:191 +[It] should get a host IP [NodeConformance] [Conformance] + test/e2e/framework/framework.go:652 +STEP: creating pod +Sep 7 08:04:31.044: INFO: The status of Pod pod-hostip-7c0ff229-0f03-40cf-a2af-418bf17b4f3a is Pending, waiting for it to be Running (with Ready = true) +Sep 7 08:04:33.055: INFO: The status of Pod pod-hostip-7c0ff229-0f03-40cf-a2af-418bf17b4f3a is Running (Ready = true) +Sep 7 08:04:33.061: INFO: Pod pod-hostip-7c0ff229-0f03-40cf-a2af-418bf17b4f3a has hostIP: 172.31.51.96 +[AfterEach] [sig-node] Pods + test/e2e/framework/framework.go:188 +Sep 7 08:04:33.061: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "pods-1200" for this suite. 
+•{"msg":"PASSED [sig-node] Pods should get a host IP [NodeConformance] [Conformance]","total":356,"completed":94,"skipped":1663,"failed":0} +SSSSSSSSSSSS +------------------------------ +[sig-storage] Projected secret + should be consumable from pods in volume with mappings [NodeConformance] [Conformance] + test/e2e/framework/framework.go:652 +[BeforeEach] [sig-storage] Projected secret + test/e2e/framework/framework.go:187 +STEP: Creating a kubernetes client +Sep 7 08:04:33.070: INFO: >>> kubeConfig: /tmp/kubeconfig-1156948534 +STEP: Building a namespace api object, basename projected +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] + test/e2e/framework/framework.go:652 +STEP: Creating projection with secret that has name projected-secret-test-map-c0f9cf2b-b169-48a7-b6ac-6f3fc7782b6f +STEP: Creating a pod to test consume secrets +Sep 7 08:04:33.127: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-dee92215-7439-4ade-9348-d5d5ecdea882" in namespace "projected-1849" to be "Succeeded or Failed" +Sep 7 08:04:33.152: INFO: Pod "pod-projected-secrets-dee92215-7439-4ade-9348-d5d5ecdea882": Phase="Pending", Reason="", readiness=false. Elapsed: 24.522449ms +Sep 7 08:04:35.174: INFO: Pod "pod-projected-secrets-dee92215-7439-4ade-9348-d5d5ecdea882": Phase="Pending", Reason="", readiness=false. Elapsed: 2.046554709s +Sep 7 08:04:37.219: INFO: Pod "pod-projected-secrets-dee92215-7439-4ade-9348-d5d5ecdea882": Phase="Running", Reason="", readiness=false. Elapsed: 4.09150633s +Sep 7 08:04:39.236: INFO: Pod "pod-projected-secrets-dee92215-7439-4ade-9348-d5d5ecdea882": Phase="Running", Reason="", readiness=false. Elapsed: 6.109163215s +Sep 7 08:04:41.263: INFO: Pod "pod-projected-secrets-dee92215-7439-4ade-9348-d5d5ecdea882": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 8.136130241s +STEP: Saw pod success +Sep 7 08:04:41.263: INFO: Pod "pod-projected-secrets-dee92215-7439-4ade-9348-d5d5ecdea882" satisfied condition "Succeeded or Failed" +Sep 7 08:04:41.267: INFO: Trying to get logs from node 172.31.51.96 pod pod-projected-secrets-dee92215-7439-4ade-9348-d5d5ecdea882 container projected-secret-volume-test: +STEP: delete the pod +Sep 7 08:04:41.324: INFO: Waiting for pod pod-projected-secrets-dee92215-7439-4ade-9348-d5d5ecdea882 to disappear +Sep 7 08:04:41.332: INFO: Pod pod-projected-secrets-dee92215-7439-4ade-9348-d5d5ecdea882 no longer exists +[AfterEach] [sig-storage] Projected secret + test/e2e/framework/framework.go:188 +Sep 7 08:04:41.332: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "projected-1849" for this suite. + +• [SLOW TEST:8.281 seconds] +[sig-storage] Projected secret +test/e2e/common/storage/framework.go:23 + should be consumable from pods in volume with mappings [NodeConformance] [Conformance] + test/e2e/framework/framework.go:652 +------------------------------ +{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":356,"completed":95,"skipped":1675,"failed":0} +SSSSS +------------------------------ +[sig-network] HostPort + validates that there is no conflict between pods with same hostPort but different hostIP and protocol [LinuxOnly] [Conformance] + test/e2e/framework/framework.go:652 +[BeforeEach] [sig-network] HostPort + test/e2e/framework/framework.go:187 +STEP: Creating a kubernetes client +Sep 7 08:04:41.351: INFO: >>> kubeConfig: /tmp/kubeconfig-1156948534 +STEP: Building a namespace api object, basename hostport +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[BeforeEach] [sig-network] HostPort + test/e2e/network/hostport.go:49 +[It] validates that there is no conflict 
between pods with same hostPort but different hostIP and protocol [LinuxOnly] [Conformance] + test/e2e/framework/framework.go:652 +STEP: Trying to create a pod(pod1) with hostport 54323 and hostIP 127.0.0.1 and expect scheduled +Sep 7 08:04:41.433: INFO: The status of Pod pod1 is Pending, waiting for it to be Running (with Ready = true) +Sep 7 08:04:43.442: INFO: The status of Pod pod1 is Running (Ready = true) +STEP: Trying to create another pod(pod2) with hostport 54323 but hostIP 172.31.51.96 on the node which pod1 resides and expect scheduled +Sep 7 08:04:43.466: INFO: The status of Pod pod2 is Pending, waiting for it to be Running (with Ready = true) +Sep 7 08:04:45.479: INFO: The status of Pod pod2 is Running (Ready = true) +STEP: Trying to create a third pod(pod3) with hostport 54323, hostIP 172.31.51.96 but use UDP protocol on the node which pod2 resides +Sep 7 08:04:45.501: INFO: The status of Pod pod3 is Pending, waiting for it to be Running (with Ready = true) +Sep 7 08:04:47.511: INFO: The status of Pod pod3 is Running (Ready = true) +Sep 7 08:04:47.519: INFO: The status of Pod e2e-host-exec is Pending, waiting for it to be Running (with Ready = true) +Sep 7 08:04:49.524: INFO: The status of Pod e2e-host-exec is Running (Ready = true) +STEP: checking connectivity from pod e2e-host-exec to serverIP: 127.0.0.1, port: 54323 +Sep 7 08:04:49.527: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g --connect-timeout 5 --interface 172.31.51.96 http://127.0.0.1:54323/hostname] Namespace:hostport-3383 PodName:e2e-host-exec ContainerName:e2e-host-exec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} +Sep 7 08:04:49.527: INFO: >>> kubeConfig: /tmp/kubeconfig-1156948534 +Sep 7 08:04:49.528: INFO: ExecWithOptions: Clientset creation +Sep 7 08:04:49.528: INFO: ExecWithOptions: execute(POST 
https://10.68.0.1:443/api/v1/namespaces/hostport-3383/pods/e2e-host-exec/exec?command=%2Fbin%2Fsh&command=-c&command=curl+-g+--connect-timeout+5+--interface+172.31.51.96+http%3A%2F%2F127.0.0.1%3A54323%2Fhostname&container=e2e-host-exec&container=e2e-host-exec&stderr=true&stdout=true) +STEP: checking connectivity from pod e2e-host-exec to serverIP: 172.31.51.96, port: 54323 +Sep 7 08:04:49.623: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g --connect-timeout 5 http://172.31.51.96:54323/hostname] Namespace:hostport-3383 PodName:e2e-host-exec ContainerName:e2e-host-exec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} +Sep 7 08:04:49.623: INFO: >>> kubeConfig: /tmp/kubeconfig-1156948534 +Sep 7 08:04:49.624: INFO: ExecWithOptions: Clientset creation +Sep 7 08:04:49.624: INFO: ExecWithOptions: execute(POST https://10.68.0.1:443/api/v1/namespaces/hostport-3383/pods/e2e-host-exec/exec?command=%2Fbin%2Fsh&command=-c&command=curl+-g+--connect-timeout+5+http%3A%2F%2F172.31.51.96%3A54323%2Fhostname&container=e2e-host-exec&container=e2e-host-exec&stderr=true&stdout=true) +STEP: checking connectivity from pod e2e-host-exec to serverIP: 172.31.51.96, port: 54323 UDP +Sep 7 08:04:49.719: INFO: ExecWithOptions {Command:[/bin/sh -c nc -vuz -w 5 172.31.51.96 54323] Namespace:hostport-3383 PodName:e2e-host-exec ContainerName:e2e-host-exec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} +Sep 7 08:04:49.719: INFO: >>> kubeConfig: /tmp/kubeconfig-1156948534 +Sep 7 08:04:49.720: INFO: ExecWithOptions: Clientset creation +Sep 7 08:04:49.720: INFO: ExecWithOptions: execute(POST https://10.68.0.1:443/api/v1/namespaces/hostport-3383/pods/e2e-host-exec/exec?command=%2Fbin%2Fsh&command=-c&command=nc+-vuz+-w+5+172.31.51.96+54323&container=e2e-host-exec&container=e2e-host-exec&stderr=true&stdout=true) +[AfterEach] [sig-network] HostPort + test/e2e/framework/framework.go:188 +Sep 7 08:04:54.810: INFO: Waiting up to 3m0s for 
all (but 0) nodes to be ready +STEP: Destroying namespace "hostport-3383" for this suite. + +• [SLOW TEST:13.481 seconds] +[sig-network] HostPort +test/e2e/network/common/framework.go:23 + validates that there is no conflict between pods with same hostPort but different hostIP and protocol [LinuxOnly] [Conformance] + test/e2e/framework/framework.go:652 +------------------------------ +{"msg":"PASSED [sig-network] HostPort validates that there is no conflict between pods with same hostPort but different hostIP and protocol [LinuxOnly] [Conformance]","total":356,"completed":96,"skipped":1680,"failed":0} +[sig-node] Container Runtime blackbox test on terminated container + should report termination message from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] + test/e2e/framework/framework.go:652 +[BeforeEach] [sig-node] Container Runtime + test/e2e/framework/framework.go:187 +STEP: Creating a kubernetes client +Sep 7 08:04:54.832: INFO: >>> kubeConfig: /tmp/kubeconfig-1156948534 +STEP: Building a namespace api object, basename container-runtime +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[It] should report termination message from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] + test/e2e/framework/framework.go:652 +STEP: create the container +STEP: wait for the container to reach Succeeded +STEP: get the container status +STEP: the container should be terminated +STEP: the termination message should be set +Sep 7 08:05:00.938: INFO: Expected: &{OK} to match Container's Termination Message: OK -- +STEP: delete the container +[AfterEach] [sig-node] Container Runtime + test/e2e/framework/framework.go:188 +Sep 7 08:05:00.959: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "container-runtime-5844" for this 
suite. + +• [SLOW TEST:6.141 seconds] +[sig-node] Container Runtime +test/e2e/common/node/framework.go:23 + blackbox test + test/e2e/common/node/runtime.go:43 + on terminated container + test/e2e/common/node/runtime.go:136 + should report termination message from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] + test/e2e/framework/framework.go:652 +------------------------------ +{"msg":"PASSED [sig-node] Container Runtime blackbox test on terminated container should report termination message from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":356,"completed":97,"skipped":1680,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-node] Secrets + should be consumable via the environment [NodeConformance] [Conformance] + test/e2e/framework/framework.go:652 +[BeforeEach] [sig-node] Secrets + test/e2e/framework/framework.go:187 +STEP: Creating a kubernetes client +Sep 7 08:05:00.974: INFO: >>> kubeConfig: /tmp/kubeconfig-1156948534 +STEP: Building a namespace api object, basename secrets +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[It] should be consumable via the environment [NodeConformance] [Conformance] + test/e2e/framework/framework.go:652 +STEP: creating secret secrets-3224/secret-test-2caf2afa-0755-4607-8811-c24dec5f51ef +STEP: Creating a pod to test consume secrets +Sep 7 08:05:01.073: INFO: Waiting up to 5m0s for pod "pod-configmaps-6cf88281-a851-44b2-982b-ceada400b64b" in namespace "secrets-3224" to be "Succeeded or Failed" +Sep 7 08:05:01.083: INFO: Pod "pod-configmaps-6cf88281-a851-44b2-982b-ceada400b64b": Phase="Pending", Reason="", readiness=false. 
Elapsed: 10.063407ms +Sep 7 08:05:03.095: INFO: Pod "pod-configmaps-6cf88281-a851-44b2-982b-ceada400b64b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.02284675s +Sep 7 08:05:05.108: INFO: Pod "pod-configmaps-6cf88281-a851-44b2-982b-ceada400b64b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.035822521s +STEP: Saw pod success +Sep 7 08:05:05.108: INFO: Pod "pod-configmaps-6cf88281-a851-44b2-982b-ceada400b64b" satisfied condition "Succeeded or Failed" +Sep 7 08:05:05.111: INFO: Trying to get logs from node 172.31.51.97 pod pod-configmaps-6cf88281-a851-44b2-982b-ceada400b64b container env-test: +STEP: delete the pod +Sep 7 08:05:05.143: INFO: Waiting for pod pod-configmaps-6cf88281-a851-44b2-982b-ceada400b64b to disappear +Sep 7 08:05:05.148: INFO: Pod pod-configmaps-6cf88281-a851-44b2-982b-ceada400b64b no longer exists +[AfterEach] [sig-node] Secrets + test/e2e/framework/framework.go:188 +Sep 7 08:05:05.148: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "secrets-3224" for this suite. 
+•{"msg":"PASSED [sig-node] Secrets should be consumable via the environment [NodeConformance] [Conformance]","total":356,"completed":98,"skipped":1716,"failed":0} +SSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-node] Security Context when creating containers with AllowPrivilegeEscalation + should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/framework/framework.go:652 +[BeforeEach] [sig-node] Security Context + test/e2e/framework/framework.go:187 +STEP: Creating a kubernetes client +Sep 7 08:05:05.158: INFO: >>> kubeConfig: /tmp/kubeconfig-1156948534 +STEP: Building a namespace api object, basename security-context-test +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[BeforeEach] [sig-node] Security Context + test/e2e/common/node/security_context.go:48 +[It] should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/framework/framework.go:652 +Sep 7 08:05:05.232: INFO: Waiting up to 5m0s for pod "alpine-nnp-false-ec610663-d60f-4979-8db8-08e34c864f0b" in namespace "security-context-test-7461" to be "Succeeded or Failed" +Sep 7 08:05:05.261: INFO: Pod "alpine-nnp-false-ec610663-d60f-4979-8db8-08e34c864f0b": Phase="Pending", Reason="", readiness=false. Elapsed: 29.013941ms +Sep 7 08:05:07.268: INFO: Pod "alpine-nnp-false-ec610663-d60f-4979-8db8-08e34c864f0b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.03613962s +Sep 7 08:05:09.287: INFO: Pod "alpine-nnp-false-ec610663-d60f-4979-8db8-08e34c864f0b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.054365546s +Sep 7 08:05:11.293: INFO: Pod "alpine-nnp-false-ec610663-d60f-4979-8db8-08e34c864f0b": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.061217678s +Sep 7 08:05:11.293: INFO: Pod "alpine-nnp-false-ec610663-d60f-4979-8db8-08e34c864f0b" satisfied condition "Succeeded or Failed" +[AfterEach] [sig-node] Security Context + test/e2e/framework/framework.go:188 +Sep 7 08:05:11.301: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "security-context-test-7461" for this suite. + +• [SLOW TEST:6.154 seconds] +[sig-node] Security Context +test/e2e/common/node/framework.go:23 + when creating containers with AllowPrivilegeEscalation + test/e2e/common/node/security_context.go:298 + should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/framework/framework.go:652 +------------------------------ +{"msg":"PASSED [sig-node] Security Context when creating containers with AllowPrivilegeEscalation should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]","total":356,"completed":99,"skipped":1737,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-node] Downward API + should provide host IP as an env var [NodeConformance] [Conformance] + test/e2e/framework/framework.go:652 +[BeforeEach] [sig-node] Downward API + test/e2e/framework/framework.go:187 +STEP: Creating a kubernetes client +Sep 7 08:05:11.312: INFO: >>> kubeConfig: /tmp/kubeconfig-1156948534 +STEP: Building a namespace api object, basename downward-api +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[It] should provide host IP as an env var [NodeConformance] [Conformance] + test/e2e/framework/framework.go:652 +STEP: Creating a pod to test downward api env vars +Sep 7 08:05:11.361: INFO: Waiting up to 5m0s for pod "downward-api-146345fe-17df-4bc3-8f4b-488c20146a1d" in namespace "downward-api-1324" to be "Succeeded or Failed" +Sep 7 08:05:11.369: INFO: Pod 
"downward-api-146345fe-17df-4bc3-8f4b-488c20146a1d": Phase="Pending", Reason="", readiness=false. Elapsed: 7.847038ms +Sep 7 08:05:13.382: INFO: Pod "downward-api-146345fe-17df-4bc3-8f4b-488c20146a1d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020710862s +Sep 7 08:05:15.391: INFO: Pod "downward-api-146345fe-17df-4bc3-8f4b-488c20146a1d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.029424214s +Sep 7 08:05:17.402: INFO: Pod "downward-api-146345fe-17df-4bc3-8f4b-488c20146a1d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.040883386s +STEP: Saw pod success +Sep 7 08:05:17.402: INFO: Pod "downward-api-146345fe-17df-4bc3-8f4b-488c20146a1d" satisfied condition "Succeeded or Failed" +Sep 7 08:05:17.406: INFO: Trying to get logs from node 172.31.51.96 pod downward-api-146345fe-17df-4bc3-8f4b-488c20146a1d container dapi-container: +STEP: delete the pod +Sep 7 08:05:17.426: INFO: Waiting for pod downward-api-146345fe-17df-4bc3-8f4b-488c20146a1d to disappear +Sep 7 08:05:17.430: INFO: Pod downward-api-146345fe-17df-4bc3-8f4b-488c20146a1d no longer exists +[AfterEach] [sig-node] Downward API + test/e2e/framework/framework.go:188 +Sep 7 08:05:17.430: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "downward-api-1324" for this suite. 
+ +• [SLOW TEST:6.127 seconds] +[sig-node] Downward API +test/e2e/common/node/framework.go:23 + should provide host IP as an env var [NodeConformance] [Conformance] + test/e2e/framework/framework.go:652 +------------------------------ +{"msg":"PASSED [sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance]","total":356,"completed":100,"skipped":1774,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-storage] EmptyDir volumes + should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/framework/framework.go:652 +[BeforeEach] [sig-storage] EmptyDir volumes + test/e2e/framework/framework.go:187 +STEP: Creating a kubernetes client +Sep 7 08:05:17.440: INFO: >>> kubeConfig: /tmp/kubeconfig-1156948534 +STEP: Building a namespace api object, basename emptydir +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[It] should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/framework/framework.go:652 +STEP: Creating a pod to test emptydir 0644 on node default medium +Sep 7 08:05:17.493: INFO: Waiting up to 5m0s for pod "pod-8f97f1cd-c297-4940-8675-0b5d7f90b7a6" in namespace "emptydir-9185" to be "Succeeded or Failed" +Sep 7 08:05:17.505: INFO: Pod "pod-8f97f1cd-c297-4940-8675-0b5d7f90b7a6": Phase="Pending", Reason="", readiness=false. Elapsed: 12.033362ms +Sep 7 08:05:19.515: INFO: Pod "pod-8f97f1cd-c297-4940-8675-0b5d7f90b7a6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.02205379s +Sep 7 08:05:21.522: INFO: Pod "pod-8f97f1cd-c297-4940-8675-0b5d7f90b7a6": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.02895806s +STEP: Saw pod success +Sep 7 08:05:21.522: INFO: Pod "pod-8f97f1cd-c297-4940-8675-0b5d7f90b7a6" satisfied condition "Succeeded or Failed" +Sep 7 08:05:21.526: INFO: Trying to get logs from node 172.31.51.96 pod pod-8f97f1cd-c297-4940-8675-0b5d7f90b7a6 container test-container: +STEP: delete the pod +Sep 7 08:05:21.544: INFO: Waiting for pod pod-8f97f1cd-c297-4940-8675-0b5d7f90b7a6 to disappear +Sep 7 08:05:21.547: INFO: Pod pod-8f97f1cd-c297-4940-8675-0b5d7f90b7a6 no longer exists +[AfterEach] [sig-storage] EmptyDir volumes + test/e2e/framework/framework.go:188 +Sep 7 08:05:21.547: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "emptydir-9185" for this suite. +•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":356,"completed":101,"skipped":1839,"failed":0} +SSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-node] InitContainer [NodeConformance] + should not start app containers if init containers fail on a RestartAlways pod [Conformance] + test/e2e/framework/framework.go:652 +[BeforeEach] [sig-node] InitContainer [NodeConformance] + test/e2e/framework/framework.go:187 +STEP: Creating a kubernetes client +Sep 7 08:05:21.556: INFO: >>> kubeConfig: /tmp/kubeconfig-1156948534 +STEP: Building a namespace api object, basename init-container +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[BeforeEach] [sig-node] InitContainer [NodeConformance] + test/e2e/common/node/init_container.go:164 +[It] should not start app containers if init containers fail on a RestartAlways pod [Conformance] + test/e2e/framework/framework.go:652 +STEP: creating the pod +Sep 7 08:05:21.590: INFO: PodSpec: initContainers in spec.initContainers +Sep 7 08:06:07.714: INFO: init container has failed twice: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", 
APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-init-fbfea6f5-9a99-47c3-b133-cf58f797e0a2", GenerateName:"", Namespace:"init-container-1702", SelfLink:"", UID:"8b42a8c4-542b-4d29-b9fb-cb28a4ef0f36", ResourceVersion:"11443", Generation:0, CreationTimestamp:time.Date(2022, time.September, 7, 8, 5, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"590955067"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ZZZ_DeprecatedClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:time.Date(2022, time.September, 7, 8, 5, 21, 0, time.Local), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc00318a1e0), Subresource:""}, v1.ManagedFieldsEntry{Manager:"kubelet", Operation:"Update", APIVersion:"v1", Time:time.Date(2022, time.September, 7, 8, 5, 23, 0, time.Local), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc00318a210), Subresource:"status"}}}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"kube-api-access-n7f4p", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), 
VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(0xc0031f4420), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}}, InitContainers:[]v1.Container{v1.Container{Name:"init1", Image:"k8s.gcr.io/e2e-test-images/busybox:1.29-2", Command:[]string{"/bin/false"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"kube-api-access-n7f4p", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"init2", Image:"k8s.gcr.io/e2e-test-images/busybox:1.29-2", Command:[]string{"/bin/true"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"kube-api-access-n7f4p", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, 
VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, Containers:[]v1.Container{v1.Container{Name:"run1", Image:"k8s.gcr.io/pause:3.7", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"kube-api-access-n7f4p", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc0038f2678), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"172.31.51.96", HostNetwork:false, HostPID:false, HostIPC:false, 
ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc00259a310), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc0038f2700)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc0038f2720)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc0038f2728), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc0038f272c), PreemptionPolicy:(*v1.PreemptionPolicy)(0xc003626110), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil), SetHostnameAsFQDN:(*bool)(nil), OS:(*v1.PodOS)(nil)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"False", LastProbeTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), LastTransitionTime:time.Date(2022, time.September, 7, 8, 5, 21, 0, time.Local), Reason:"ContainersNotInitialized", Message:"containers with incomplete status: [init1 init2]"}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), LastTransitionTime:time.Date(2022, time.September, 7, 8, 5, 21, 0, time.Local), Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), LastTransitionTime:time.Date(2022, time.September, 7, 8, 5, 21, 0, time.Local), Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:time.Date(1, 
time.January, 1, 0, 0, 0, 0, time.UTC), LastTransitionTime:time.Date(2022, time.September, 7, 8, 5, 21, 0, time.Local), Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"172.31.51.96", PodIP:"172.20.75.60", PodIPs:[]v1.PodIP{v1.PodIP{IP:"172.20.75.60"}}, StartTime:time.Date(2022, time.September, 7, 8, 5, 21, 0, time.Local), InitContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"init1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(0xc00318a258), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc00259a3f0)}, Ready:false, RestartCount:3, Image:"k8s.gcr.io/e2e-test-images/busybox:1.29-2", ImageID:"k8s.gcr.io/e2e-test-images/busybox@sha256:c318242786b139d18676b1c09a0ad7f15fc17f8f16a5b2e625cd0dc8c9703daf", ContainerID:"containerd://d176161a50c32f8c9f3c3e1cd6861d5b04af293c20b09297ad0571886e734a7e", Started:(*bool)(nil)}, v1.ContainerStatus{Name:"init2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc0031f4520), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/e2e-test-images/busybox:1.29-2", ImageID:"", ContainerID:"", Started:(*bool)(nil)}}, ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"run1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc0031f44e0), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), 
Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/pause:3.7", ImageID:"", ContainerID:"", Started:(*bool)(0xc0038f27af)}}, QOSClass:"Burstable", EphemeralContainerStatuses:[]v1.ContainerStatus(nil)}} +[AfterEach] [sig-node] InitContainer [NodeConformance] + test/e2e/framework/framework.go:188 +Sep 7 08:06:07.715: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "init-container-1702" for this suite. + +• [SLOW TEST:46.177 seconds] +[sig-node] InitContainer [NodeConformance] +test/e2e/common/node/framework.go:23 + should not start app containers if init containers fail on a RestartAlways pod [Conformance] + test/e2e/framework/framework.go:652 +------------------------------ +{"msg":"PASSED [sig-node] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance]","total":356,"completed":102,"skipped":1860,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + should not be able to mutate or prevent deletion of webhook configuration objects [Conformance] + test/e2e/framework/framework.go:652 +[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/framework/framework.go:187 +STEP: Creating a kubernetes client +Sep 7 08:06:07.734: INFO: >>> kubeConfig: /tmp/kubeconfig-1156948534 +STEP: Building a namespace api object, basename webhook +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/apimachinery/webhook.go:89 +STEP: Setting up server cert +STEP: Create role binding to let webhook read extension-apiserver-authentication +STEP: Deploying the webhook pod +STEP: Wait for 
the deployment to be ready +Sep 7 08:06:08.302: INFO: new replicaset for deployment "sample-webhook-deployment" is yet to be created +STEP: Deploying the webhook service +STEP: Verifying the service has paired with the endpoint +Sep 7 08:06:11.335: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 +[It] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance] + test/e2e/framework/framework.go:652 +STEP: Registering a validating webhook on ValidatingWebhookConfiguration and MutatingWebhookConfiguration objects, via the AdmissionRegistration API +STEP: Registering a mutating webhook on ValidatingWebhookConfiguration and MutatingWebhookConfiguration objects, via the AdmissionRegistration API +STEP: Creating a dummy validating-webhook-configuration object +STEP: Deleting the validating-webhook-configuration, which should be possible to remove +STEP: Creating a dummy mutating-webhook-configuration object +STEP: Deleting the mutating-webhook-configuration, which should be possible to remove +[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/framework/framework.go:188 +Sep 7 08:06:11.401: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "webhook-832" for this suite. +STEP: Destroying namespace "webhook-832-markers" for this suite. 
+[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/apimachinery/webhook.go:104 +•{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","total":356,"completed":103,"skipped":1956,"failed":0} +SSSSSSSSSSSSSSSSSS +------------------------------ +[sig-cli] Kubectl client Kubectl run pod + should create a pod from an image when restart is Never [Conformance] + test/e2e/framework/framework.go:652 +[BeforeEach] [sig-cli] Kubectl client + test/e2e/framework/framework.go:187 +STEP: Creating a kubernetes client +Sep 7 08:06:11.477: INFO: >>> kubeConfig: /tmp/kubeconfig-1156948534 +STEP: Building a namespace api object, basename kubectl +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[BeforeEach] [sig-cli] Kubectl client + test/e2e/kubectl/kubectl.go:245 +[BeforeEach] Kubectl run pod + test/e2e/kubectl/kubectl.go:1540 +[It] should create a pod from an image when restart is Never [Conformance] + test/e2e/framework/framework.go:652 +STEP: running the image k8s.gcr.io/e2e-test-images/httpd:2.4.38-2 +Sep 7 08:06:11.563: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1156948534 --namespace=kubectl-6865 run e2e-test-httpd-pod --restart=Never --pod-running-timeout=2m0s --image=k8s.gcr.io/e2e-test-images/httpd:2.4.38-2' +Sep 7 08:06:11.794: INFO: stderr: "" +Sep 7 08:06:11.794: INFO: stdout: "pod/e2e-test-httpd-pod created\n" +STEP: verifying the pod e2e-test-httpd-pod was created +[AfterEach] Kubectl run pod + test/e2e/kubectl/kubectl.go:1544 +Sep 7 08:06:11.815: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1156948534 --namespace=kubectl-6865 delete pods e2e-test-httpd-pod' +Sep 7 08:06:14.832: INFO: stderr: "" +Sep 7 08:06:14.832: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n" 
+[AfterEach] [sig-cli] Kubectl client + test/e2e/framework/framework.go:188 +Sep 7 08:06:14.832: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "kubectl-6865" for this suite. +•{"msg":"PASSED [sig-cli] Kubectl client Kubectl run pod should create a pod from an image when restart is Never [Conformance]","total":356,"completed":104,"skipped":1974,"failed":0} +SSSSSSSSSSSSSSSSSS +------------------------------ +[sig-network] Services + should have session affinity work for NodePort service [LinuxOnly] [Conformance] + test/e2e/framework/framework.go:652 +[BeforeEach] [sig-network] Services + test/e2e/framework/framework.go:187 +STEP: Creating a kubernetes client +Sep 7 08:06:14.845: INFO: >>> kubeConfig: /tmp/kubeconfig-1156948534 +STEP: Building a namespace api object, basename services +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[BeforeEach] [sig-network] Services + test/e2e/network/service.go:758 +[It] should have session affinity work for NodePort service [LinuxOnly] [Conformance] + test/e2e/framework/framework.go:652 +STEP: creating service in namespace services-7863 +STEP: creating service affinity-nodeport in namespace services-7863 +STEP: creating replication controller affinity-nodeport in namespace services-7863 +I0907 08:06:14.988064 19 runners.go:193] Created replication controller with name: affinity-nodeport, namespace: services-7863, replica count: 3 +I0907 08:06:18.042293 19 runners.go:193] affinity-nodeport Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady +I0907 08:06:21.043120 19 runners.go:193] affinity-nodeport Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady +Sep 7 08:06:21.052: INFO: Creating new exec pod +Sep 7 08:06:24.076: INFO: Running '/usr/local/bin/kubectl 
--kubeconfig=/tmp/kubeconfig-1156948534 --namespace=services-7863 exec execpod-affinityqsn7d -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport 80' +Sep 7 08:06:24.279: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 affinity-nodeport 80\nConnection to affinity-nodeport 80 port [tcp/http] succeeded!\n" +Sep 7 08:06:24.279: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request" +Sep 7 08:06:24.279: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1156948534 --namespace=services-7863 exec execpod-affinityqsn7d -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.68.9.5 80' +Sep 7 08:06:24.445: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 10.68.9.5 80\nConnection to 10.68.9.5 80 port [tcp/http] succeeded!\n" +Sep 7 08:06:24.445: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request" +Sep 7 08:06:24.445: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1156948534 --namespace=services-7863 exec execpod-affinityqsn7d -- /bin/sh -x -c echo hostName | nc -v -t -w 2 172.31.51.96 32426' +Sep 7 08:06:24.644: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 172.31.51.96 32426\nConnection to 172.31.51.96 32426 port [tcp/*] succeeded!\n" +Sep 7 08:06:24.644: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request" +Sep 7 08:06:24.644: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1156948534 --namespace=services-7863 exec execpod-affinityqsn7d -- /bin/sh -x -c echo hostName | nc -v -t -w 2 172.31.51.97 32426' +Sep 7 08:06:24.829: INFO: stderr: "+ + echo hostName\nnc -v -t -w 2 172.31.51.97 32426\nConnection to 172.31.51.97 32426 port [tcp/*] succeeded!\n" +Sep 7 08:06:24.830: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad 
Request" +Sep 7 08:06:24.830: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1156948534 --namespace=services-7863 exec execpod-affinityqsn7d -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://172.31.51.96:32426/ ; done' +Sep 7 08:06:25.201: INFO: stderr: "+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.31.51.96:32426/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.31.51.96:32426/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.31.51.96:32426/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.31.51.96:32426/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.31.51.96:32426/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.31.51.96:32426/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.31.51.96:32426/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.31.51.96:32426/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.31.51.96:32426/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.31.51.96:32426/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.31.51.96:32426/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.31.51.96:32426/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.31.51.96:32426/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.31.51.96:32426/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.31.51.96:32426/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.31.51.96:32426/\n" +Sep 7 08:06:25.201: INFO: stdout: "\naffinity-nodeport-z54nc\naffinity-nodeport-z54nc\naffinity-nodeport-z54nc\naffinity-nodeport-z54nc\naffinity-nodeport-z54nc\naffinity-nodeport-z54nc\naffinity-nodeport-z54nc\naffinity-nodeport-z54nc\naffinity-nodeport-z54nc\naffinity-nodeport-z54nc\naffinity-nodeport-z54nc\naffinity-nodeport-z54nc\naffinity-nodeport-z54nc\naffinity-nodeport-z54nc\naffinity-nodeport-z54nc\naffinity-nodeport-z54nc" +Sep 7 08:06:25.201: INFO: Received response from host: affinity-nodeport-z54nc +Sep 7 08:06:25.201: INFO: Received response from 
host: affinity-nodeport-z54nc +Sep 7 08:06:25.201: INFO: Received response from host: affinity-nodeport-z54nc +Sep 7 08:06:25.201: INFO: Received response from host: affinity-nodeport-z54nc +Sep 7 08:06:25.201: INFO: Received response from host: affinity-nodeport-z54nc +Sep 7 08:06:25.201: INFO: Received response from host: affinity-nodeport-z54nc +Sep 7 08:06:25.201: INFO: Received response from host: affinity-nodeport-z54nc +Sep 7 08:06:25.201: INFO: Received response from host: affinity-nodeport-z54nc +Sep 7 08:06:25.201: INFO: Received response from host: affinity-nodeport-z54nc +Sep 7 08:06:25.201: INFO: Received response from host: affinity-nodeport-z54nc +Sep 7 08:06:25.201: INFO: Received response from host: affinity-nodeport-z54nc +Sep 7 08:06:25.201: INFO: Received response from host: affinity-nodeport-z54nc +Sep 7 08:06:25.201: INFO: Received response from host: affinity-nodeport-z54nc +Sep 7 08:06:25.201: INFO: Received response from host: affinity-nodeport-z54nc +Sep 7 08:06:25.201: INFO: Received response from host: affinity-nodeport-z54nc +Sep 7 08:06:25.201: INFO: Received response from host: affinity-nodeport-z54nc +Sep 7 08:06:25.201: INFO: Cleaning up the exec pod +STEP: deleting ReplicationController affinity-nodeport in namespace services-7863, will wait for the garbage collector to delete the pods +Sep 7 08:06:25.288: INFO: Deleting ReplicationController affinity-nodeport took: 5.03708ms +Sep 7 08:06:25.390: INFO: Terminating ReplicationController affinity-nodeport pods took: 101.95518ms +[AfterEach] [sig-network] Services + test/e2e/framework/framework.go:188 +Sep 7 08:06:29.067: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "services-7863" for this suite. 
+[AfterEach] [sig-network] Services + test/e2e/network/service.go:762 + +• [SLOW TEST:14.235 seconds] +[sig-network] Services +test/e2e/network/common/framework.go:23 + should have session affinity work for NodePort service [LinuxOnly] [Conformance] + test/e2e/framework/framework.go:652 +------------------------------ +{"msg":"PASSED [sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","total":356,"completed":105,"skipped":1992,"failed":0} +SSS +------------------------------ +[sig-auth] ServiceAccounts + should mount an API token into pods [Conformance] + test/e2e/framework/framework.go:652 +[BeforeEach] [sig-auth] ServiceAccounts + test/e2e/framework/framework.go:187 +STEP: Creating a kubernetes client +Sep 7 08:06:29.080: INFO: >>> kubeConfig: /tmp/kubeconfig-1156948534 +STEP: Building a namespace api object, basename svcaccounts +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[It] should mount an API token into pods [Conformance] + test/e2e/framework/framework.go:652 +STEP: reading a file in the container +Sep 7 08:06:31.149: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-9997 pod-service-account-c45726fc-5494-4cf3-8c45-777870459174 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/token' +STEP: reading a file in the container +Sep 7 08:06:31.325: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-9997 pod-service-account-c45726fc-5494-4cf3-8c45-777870459174 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/ca.crt' +STEP: reading a file in the container +Sep 7 08:06:31.503: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-9997 pod-service-account-c45726fc-5494-4cf3-8c45-777870459174 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/namespace' +Sep 7 08:06:31.668: INFO: Got root ca configmap in namespace "svcaccounts-9997" 
+[AfterEach] [sig-auth] ServiceAccounts + test/e2e/framework/framework.go:188 +Sep 7 08:06:31.674: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "svcaccounts-9997" for this suite. +•{"msg":"PASSED [sig-auth] ServiceAccounts should mount an API token into pods [Conformance]","total":356,"completed":106,"skipped":1995,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-node] Containers + should be able to override the image's default arguments (container cmd) [NodeConformance] [Conformance] + test/e2e/framework/framework.go:652 +[BeforeEach] [sig-node] Containers + test/e2e/framework/framework.go:187 +STEP: Creating a kubernetes client +Sep 7 08:06:31.690: INFO: >>> kubeConfig: /tmp/kubeconfig-1156948534 +STEP: Building a namespace api object, basename containers +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[It] should be able to override the image's default arguments (container cmd) [NodeConformance] [Conformance] + test/e2e/framework/framework.go:652 +STEP: Creating a pod to test override arguments +Sep 7 08:06:31.750: INFO: Waiting up to 5m0s for pod "client-containers-5c25dcb5-a70b-42f1-a705-7a1b45058b8c" in namespace "containers-2319" to be "Succeeded or Failed" +Sep 7 08:06:31.764: INFO: Pod "client-containers-5c25dcb5-a70b-42f1-a705-7a1b45058b8c": Phase="Pending", Reason="", readiness=false. Elapsed: 14.039839ms +Sep 7 08:06:33.779: INFO: Pod "client-containers-5c25dcb5-a70b-42f1-a705-7a1b45058b8c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.02850755s +Sep 7 08:06:35.795: INFO: Pod "client-containers-5c25dcb5-a70b-42f1-a705-7a1b45058b8c": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.044591682s +STEP: Saw pod success +Sep 7 08:06:35.795: INFO: Pod "client-containers-5c25dcb5-a70b-42f1-a705-7a1b45058b8c" satisfied condition "Succeeded or Failed" +Sep 7 08:06:35.797: INFO: Trying to get logs from node 172.31.51.96 pod client-containers-5c25dcb5-a70b-42f1-a705-7a1b45058b8c container agnhost-container: +STEP: delete the pod +Sep 7 08:06:35.848: INFO: Waiting for pod client-containers-5c25dcb5-a70b-42f1-a705-7a1b45058b8c to disappear +Sep 7 08:06:35.853: INFO: Pod client-containers-5c25dcb5-a70b-42f1-a705-7a1b45058b8c no longer exists +[AfterEach] [sig-node] Containers + test/e2e/framework/framework.go:188 +Sep 7 08:06:35.853: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "containers-2319" for this suite. +•{"msg":"PASSED [sig-node] Containers should be able to override the image's default arguments (container cmd) [NodeConformance] [Conformance]","total":356,"completed":107,"skipped":2035,"failed":0} +SSSSSSSS +------------------------------ +[sig-apps] ReplicationController + should release no longer matching pods [Conformance] + test/e2e/framework/framework.go:652 +[BeforeEach] [sig-apps] ReplicationController + test/e2e/framework/framework.go:187 +STEP: Creating a kubernetes client +Sep 7 08:06:35.860: INFO: >>> kubeConfig: /tmp/kubeconfig-1156948534 +STEP: Building a namespace api object, basename replication-controller +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[BeforeEach] [sig-apps] ReplicationController + test/e2e/apps/rc.go:56 +[It] should release no longer matching pods [Conformance] + test/e2e/framework/framework.go:652 +STEP: Given a ReplicationController is created +STEP: When the matched label of one of its pods change +Sep 7 08:06:35.907: INFO: Pod name pod-release: Found 0 pods out of 1 +Sep 7 08:06:40.922: INFO: Pod name pod-release: Found 1 pods out of 1 +STEP: Then the pod is 
released +[AfterEach] [sig-apps] ReplicationController + test/e2e/framework/framework.go:188 +Sep 7 08:06:40.972: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "replication-controller-4467" for this suite. + +• [SLOW TEST:5.167 seconds] +[sig-apps] ReplicationController +test/e2e/apps/framework.go:23 + should release no longer matching pods [Conformance] + test/e2e/framework/framework.go:652 +------------------------------ +{"msg":"PASSED [sig-apps] ReplicationController should release no longer matching pods [Conformance]","total":356,"completed":108,"skipped":2043,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-node] Lease + lease API should be available [Conformance] + test/e2e/framework/framework.go:652 +[BeforeEach] [sig-node] Lease + test/e2e/framework/framework.go:187 +STEP: Creating a kubernetes client +Sep 7 08:06:41.028: INFO: >>> kubeConfig: /tmp/kubeconfig-1156948534 +STEP: Building a namespace api object, basename lease-test +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[It] lease API should be available [Conformance] + test/e2e/framework/framework.go:652 +[AfterEach] [sig-node] Lease + test/e2e/framework/framework.go:188 +Sep 7 08:06:41.248: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "lease-test-6073" for this suite. 
+•{"msg":"PASSED [sig-node] Lease lease API should be available [Conformance]","total":356,"completed":109,"skipped":2071,"failed":0} +SSS +------------------------------ +[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + should mutate custom resource with pruning [Conformance] + test/e2e/framework/framework.go:652 +[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/framework/framework.go:187 +STEP: Creating a kubernetes client +Sep 7 08:06:41.271: INFO: >>> kubeConfig: /tmp/kubeconfig-1156948534 +STEP: Building a namespace api object, basename webhook +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/apimachinery/webhook.go:89 +STEP: Setting up server cert +STEP: Create role binding to let webhook read extension-apiserver-authentication +STEP: Deploying the webhook pod +STEP: Wait for the deployment to be ready +Sep 7 08:06:41.942: INFO: new replicaset for deployment "sample-webhook-deployment" is yet to be created +Sep 7 08:06:43.993: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2022, time.September, 7, 8, 6, 41, 0, time.Local), LastTransitionTime:time.Date(2022, time.September, 7, 8, 6, 41, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2022, time.September, 7, 8, 6, 41, 0, time.Local), LastTransitionTime:time.Date(2022, time.September, 7, 8, 6, 41, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-68c7bd4684\" is 
progressing."}}, CollisionCount:(*int32)(nil)} +STEP: Deploying the webhook service +STEP: Verifying the service has paired with the endpoint +Sep 7 08:06:47.024: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 +[It] should mutate custom resource with pruning [Conformance] + test/e2e/framework/framework.go:652 +Sep 7 08:06:47.042: INFO: >>> kubeConfig: /tmp/kubeconfig-1156948534 +STEP: Registering the mutating webhook for custom resource e2e-test-webhook-393-crds.webhook.example.com via the AdmissionRegistration API +Sep 7 08:06:47.605: INFO: Waiting for webhook configuration to be ready... +STEP: Creating a custom resource that should be mutated by the webhook +[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/framework/framework.go:188 +Sep 7 08:06:50.533: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "webhook-1554" for this suite. +STEP: Destroying namespace "webhook-1554-markers" for this suite. 
+[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/apimachinery/webhook.go:104 + +• [SLOW TEST:9.364 seconds] +[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] +test/e2e/apimachinery/framework.go:23 + should mutate custom resource with pruning [Conformance] + test/e2e/framework/framework.go:652 +------------------------------ +{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]","total":356,"completed":110,"skipped":2074,"failed":0} +SSSSSSSSSSSSSSSSSS +------------------------------ +[sig-storage] Projected configMap + should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] + test/e2e/framework/framework.go:652 +[BeforeEach] [sig-storage] Projected configMap + test/e2e/framework/framework.go:187 +STEP: Creating a kubernetes client +Sep 7 08:06:50.635: INFO: >>> kubeConfig: /tmp/kubeconfig-1156948534 +STEP: Building a namespace api object, basename projected +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] + test/e2e/framework/framework.go:652 +STEP: Creating configMap with name projected-configmap-test-volume-map-100306a0-7ed1-4aea-8553-00dcd8d08e33 +STEP: Creating a pod to test consume configMaps +Sep 7 08:06:50.775: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-62a090be-0db6-421d-b48f-dce891e9c57f" in namespace "projected-7649" to be "Succeeded or Failed" +Sep 7 08:06:50.794: INFO: Pod "pod-projected-configmaps-62a090be-0db6-421d-b48f-dce891e9c57f": Phase="Pending", Reason="", readiness=false. Elapsed: 18.959623ms +Sep 7 08:06:52.807: INFO: Pod "pod-projected-configmaps-62a090be-0db6-421d-b48f-dce891e9c57f": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.032030058s +Sep 7 08:06:54.818: INFO: Pod "pod-projected-configmaps-62a090be-0db6-421d-b48f-dce891e9c57f": Phase="Running", Reason="", readiness=false. Elapsed: 4.042305621s +Sep 7 08:06:56.824: INFO: Pod "pod-projected-configmaps-62a090be-0db6-421d-b48f-dce891e9c57f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.048593293s +STEP: Saw pod success +Sep 7 08:06:56.824: INFO: Pod "pod-projected-configmaps-62a090be-0db6-421d-b48f-dce891e9c57f" satisfied condition "Succeeded or Failed" +Sep 7 08:06:56.826: INFO: Trying to get logs from node 172.31.51.96 pod pod-projected-configmaps-62a090be-0db6-421d-b48f-dce891e9c57f container agnhost-container: +STEP: delete the pod +Sep 7 08:06:56.841: INFO: Waiting for pod pod-projected-configmaps-62a090be-0db6-421d-b48f-dce891e9c57f to disappear +Sep 7 08:06:56.843: INFO: Pod pod-projected-configmaps-62a090be-0db6-421d-b48f-dce891e9c57f no longer exists +[AfterEach] [sig-storage] Projected configMap + test/e2e/framework/framework.go:188 +Sep 7 08:06:56.843: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "projected-7649" for this suite. 
+ +• [SLOW TEST:6.215 seconds] +[sig-storage] Projected configMap +test/e2e/common/storage/framework.go:23 + should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] + test/e2e/framework/framework.go:652 +------------------------------ +{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]","total":356,"completed":111,"skipped":2092,"failed":0} +SSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-storage] Projected secret + should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/framework/framework.go:652 +[BeforeEach] [sig-storage] Projected secret + test/e2e/framework/framework.go:187 +STEP: Creating a kubernetes client +Sep 7 08:06:56.851: INFO: >>> kubeConfig: /tmp/kubeconfig-1156948534 +STEP: Building a namespace api object, basename projected +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/framework/framework.go:652 +STEP: Creating projection with secret that has name projected-secret-test-map-fc2d0522-7258-4b1f-bf79-81e224578a7c +STEP: Creating a pod to test consume secrets +Sep 7 08:06:56.893: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-f1ee7b1b-ba84-44e3-9df3-a3319c7e9bde" in namespace "projected-5457" to be "Succeeded or Failed" +Sep 7 08:06:56.895: INFO: Pod "pod-projected-secrets-f1ee7b1b-ba84-44e3-9df3-a3319c7e9bde": Phase="Pending", Reason="", readiness=false. Elapsed: 2.137573ms +Sep 7 08:06:58.908: INFO: Pod "pod-projected-secrets-f1ee7b1b-ba84-44e3-9df3-a3319c7e9bde": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.014746761s +Sep 7 08:07:00.918: INFO: Pod "pod-projected-secrets-f1ee7b1b-ba84-44e3-9df3-a3319c7e9bde": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.024702809s +STEP: Saw pod success +Sep 7 08:07:00.918: INFO: Pod "pod-projected-secrets-f1ee7b1b-ba84-44e3-9df3-a3319c7e9bde" satisfied condition "Succeeded or Failed" +Sep 7 08:07:00.920: INFO: Trying to get logs from node 172.31.51.96 pod pod-projected-secrets-f1ee7b1b-ba84-44e3-9df3-a3319c7e9bde container projected-secret-volume-test: +STEP: delete the pod +Sep 7 08:07:00.937: INFO: Waiting for pod pod-projected-secrets-f1ee7b1b-ba84-44e3-9df3-a3319c7e9bde to disappear +Sep 7 08:07:00.940: INFO: Pod pod-projected-secrets-f1ee7b1b-ba84-44e3-9df3-a3319c7e9bde no longer exists +[AfterEach] [sig-storage] Projected secret + test/e2e/framework/framework.go:188 +Sep 7 08:07:00.940: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "projected-5457" for this suite. +•{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":356,"completed":112,"skipped":2113,"failed":0} +S +------------------------------ +[sig-network] Networking Granular Checks: Pods + should function for intra-pod communication: http [NodeConformance] [Conformance] + test/e2e/framework/framework.go:652 +[BeforeEach] [sig-network] Networking + test/e2e/framework/framework.go:187 +STEP: Creating a kubernetes client +Sep 7 08:07:00.947: INFO: >>> kubeConfig: /tmp/kubeconfig-1156948534 +STEP: Building a namespace api object, basename pod-network-test +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[It] should function for intra-pod communication: http [NodeConformance] [Conformance] + test/e2e/framework/framework.go:652 +STEP: Performing setup for networking test in namespace 
pod-network-test-7375 +STEP: creating a selector +STEP: Creating the service pods in kubernetes +Sep 7 08:07:01.001: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable +Sep 7 08:07:01.046: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) +Sep 7 08:07:03.055: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) +Sep 7 08:07:05.054: INFO: The status of Pod netserver-0 is Running (Ready = false) +Sep 7 08:07:07.081: INFO: The status of Pod netserver-0 is Running (Ready = false) +Sep 7 08:07:09.058: INFO: The status of Pod netserver-0 is Running (Ready = false) +Sep 7 08:07:11.049: INFO: The status of Pod netserver-0 is Running (Ready = false) +Sep 7 08:07:13.055: INFO: The status of Pod netserver-0 is Running (Ready = false) +Sep 7 08:07:15.079: INFO: The status of Pod netserver-0 is Running (Ready = false) +Sep 7 08:07:17.063: INFO: The status of Pod netserver-0 is Running (Ready = false) +Sep 7 08:07:19.052: INFO: The status of Pod netserver-0 is Running (Ready = false) +Sep 7 08:07:21.049: INFO: The status of Pod netserver-0 is Running (Ready = false) +Sep 7 08:07:23.054: INFO: The status of Pod netserver-0 is Running (Ready = true) +Sep 7 08:07:23.059: INFO: The status of Pod netserver-1 is Running (Ready = true) +STEP: Creating test pods +Sep 7 08:07:25.088: INFO: Setting MaxTries for pod polling to 34 for networking test based on endpoint count 2 +Sep 7 08:07:25.088: INFO: Breadth first check of 172.20.75.5 on host 172.31.51.96... 
+Sep 7 08:07:25.095: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://172.20.75.21:9080/dial?request=hostname&protocol=http&host=172.20.75.5&port=8083&tries=1'] Namespace:pod-network-test-7375 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} +Sep 7 08:07:25.095: INFO: >>> kubeConfig: /tmp/kubeconfig-1156948534 +Sep 7 08:07:25.096: INFO: ExecWithOptions: Clientset creation +Sep 7 08:07:25.096: INFO: ExecWithOptions: execute(POST https://10.68.0.1:443/api/v1/namespaces/pod-network-test-7375/pods/test-container-pod/exec?command=%2Fbin%2Fsh&command=-c&command=curl+-g+-q+-s+%27http%3A%2F%2F172.20.75.21%3A9080%2Fdial%3Frequest%3Dhostname%26protocol%3Dhttp%26host%3D172.20.75.5%26port%3D8083%26tries%3D1%27&container=webserver&container=webserver&stderr=true&stdout=true) +Sep 7 08:07:25.179: INFO: Waiting for responses: map[] +Sep 7 08:07:25.179: INFO: reached 172.20.75.5 after 0/1 tries +Sep 7 08:07:25.179: INFO: Breadth first check of 172.20.97.108 on host 172.31.51.97... 
+Sep 7 08:07:25.182: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://172.20.75.21:9080/dial?request=hostname&protocol=http&host=172.20.97.108&port=8083&tries=1'] Namespace:pod-network-test-7375 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} +Sep 7 08:07:25.182: INFO: >>> kubeConfig: /tmp/kubeconfig-1156948534 +Sep 7 08:07:25.183: INFO: ExecWithOptions: Clientset creation +Sep 7 08:07:25.183: INFO: ExecWithOptions: execute(POST https://10.68.0.1:443/api/v1/namespaces/pod-network-test-7375/pods/test-container-pod/exec?command=%2Fbin%2Fsh&command=-c&command=curl+-g+-q+-s+%27http%3A%2F%2F172.20.75.21%3A9080%2Fdial%3Frequest%3Dhostname%26protocol%3Dhttp%26host%3D172.20.97.108%26port%3D8083%26tries%3D1%27&container=webserver&container=webserver&stderr=true&stdout=true) +Sep 7 08:07:25.253: INFO: Waiting for responses: map[] +Sep 7 08:07:25.253: INFO: reached 172.20.97.108 after 0/1 tries +Sep 7 08:07:25.253: INFO: Going to retry 0 out of 2 pods.... +[AfterEach] [sig-network] Networking + test/e2e/framework/framework.go:188 +Sep 7 08:07:25.253: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "pod-network-test-7375" for this suite. 
+ +• [SLOW TEST:24.334 seconds] +[sig-network] Networking +test/e2e/common/network/framework.go:23 + Granular Checks: Pods + test/e2e/common/network/networking.go:32 + should function for intra-pod communication: http [NodeConformance] [Conformance] + test/e2e/framework/framework.go:652 +------------------------------ +{"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","total":356,"completed":113,"skipped":2114,"failed":0} +SSSSSSSSSSSSSS +------------------------------ +[sig-node] Container Lifecycle Hook when create a pod with lifecycle hook + should execute poststart http hook properly [NodeConformance] [Conformance] + test/e2e/framework/framework.go:652 +[BeforeEach] [sig-node] Container Lifecycle Hook + test/e2e/framework/framework.go:187 +STEP: Creating a kubernetes client +Sep 7 08:07:25.281: INFO: >>> kubeConfig: /tmp/kubeconfig-1156948534 +STEP: Building a namespace api object, basename container-lifecycle-hook +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[BeforeEach] when create a pod with lifecycle hook + test/e2e/common/node/lifecycle_hook.go:55 +STEP: create the container to handle the HTTPGet hook request. 
+Sep 7 08:07:25.341: INFO: The status of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true) +Sep 7 08:07:27.353: INFO: The status of Pod pod-handle-http-request is Running (Ready = true) +[It] should execute poststart http hook properly [NodeConformance] [Conformance] + test/e2e/framework/framework.go:652 +STEP: create the pod with lifecycle hook +Sep 7 08:07:27.371: INFO: The status of Pod pod-with-poststart-http-hook is Pending, waiting for it to be Running (with Ready = true) +Sep 7 08:07:29.383: INFO: The status of Pod pod-with-poststart-http-hook is Running (Ready = true) +STEP: check poststart hook +STEP: delete the pod with lifecycle hook +Sep 7 08:07:29.404: INFO: Waiting for pod pod-with-poststart-http-hook to disappear +Sep 7 08:07:29.407: INFO: Pod pod-with-poststart-http-hook still exists +Sep 7 08:07:31.410: INFO: Waiting for pod pod-with-poststart-http-hook to disappear +Sep 7 08:07:31.438: INFO: Pod pod-with-poststart-http-hook still exists +Sep 7 08:07:33.411: INFO: Waiting for pod pod-with-poststart-http-hook to disappear +Sep 7 08:07:33.419: INFO: Pod pod-with-poststart-http-hook no longer exists +[AfterEach] [sig-node] Container Lifecycle Hook + test/e2e/framework/framework.go:188 +Sep 7 08:07:33.419: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "container-lifecycle-hook-7806" for this suite. 
+ +• [SLOW TEST:8.149 seconds] +[sig-node] Container Lifecycle Hook +test/e2e/common/node/framework.go:23 + when create a pod with lifecycle hook + test/e2e/common/node/lifecycle_hook.go:46 + should execute poststart http hook properly [NodeConformance] [Conformance] + test/e2e/framework/framework.go:652 +------------------------------ +{"msg":"PASSED [sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance]","total":356,"completed":114,"skipped":2128,"failed":0} +SS +------------------------------ +[sig-storage] EmptyDir volumes + should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/framework/framework.go:652 +[BeforeEach] [sig-storage] EmptyDir volumes + test/e2e/framework/framework.go:187 +STEP: Creating a kubernetes client +Sep 7 08:07:33.431: INFO: >>> kubeConfig: /tmp/kubeconfig-1156948534 +STEP: Building a namespace api object, basename emptydir +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[It] should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/framework/framework.go:652 +STEP: Creating a pod to test emptydir 0777 on tmpfs +Sep 7 08:07:33.476: INFO: Waiting up to 5m0s for pod "pod-88ebb234-a67c-4ec2-b529-4074a26b92c6" in namespace "emptydir-8473" to be "Succeeded or Failed" +Sep 7 08:07:33.485: INFO: Pod "pod-88ebb234-a67c-4ec2-b529-4074a26b92c6": Phase="Pending", Reason="", readiness=false. Elapsed: 9.500547ms +Sep 7 08:07:35.498: INFO: Pod "pod-88ebb234-a67c-4ec2-b529-4074a26b92c6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022498286s +Sep 7 08:07:37.509: INFO: Pod "pod-88ebb234-a67c-4ec2-b529-4074a26b92c6": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.033062265s +STEP: Saw pod success +Sep 7 08:07:37.509: INFO: Pod "pod-88ebb234-a67c-4ec2-b529-4074a26b92c6" satisfied condition "Succeeded or Failed" +Sep 7 08:07:37.512: INFO: Trying to get logs from node 172.31.51.96 pod pod-88ebb234-a67c-4ec2-b529-4074a26b92c6 container test-container: +STEP: delete the pod +Sep 7 08:07:37.530: INFO: Waiting for pod pod-88ebb234-a67c-4ec2-b529-4074a26b92c6 to disappear +Sep 7 08:07:37.532: INFO: Pod pod-88ebb234-a67c-4ec2-b529-4074a26b92c6 no longer exists +[AfterEach] [sig-storage] EmptyDir volumes + test/e2e/framework/framework.go:188 +Sep 7 08:07:37.532: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "emptydir-8473" for this suite. +•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":356,"completed":115,"skipped":2130,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-auth] ServiceAccounts + should allow opting out of API token automount [Conformance] + test/e2e/framework/framework.go:652 +[BeforeEach] [sig-auth] ServiceAccounts + test/e2e/framework/framework.go:187 +STEP: Creating a kubernetes client +Sep 7 08:07:37.546: INFO: >>> kubeConfig: /tmp/kubeconfig-1156948534 +STEP: Building a namespace api object, basename svcaccounts +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[It] should allow opting out of API token automount [Conformance] + test/e2e/framework/framework.go:652 +Sep 7 08:07:37.614: INFO: created pod pod-service-account-defaultsa +Sep 7 08:07:37.614: INFO: pod pod-service-account-defaultsa service account token volume mount: true +Sep 7 08:07:37.632: INFO: created pod pod-service-account-mountsa +Sep 7 08:07:37.632: INFO: pod pod-service-account-mountsa service account token volume mount: true +Sep 7 08:07:37.658: INFO: created pod 
pod-service-account-nomountsa +Sep 7 08:07:37.658: INFO: pod pod-service-account-nomountsa service account token volume mount: false +Sep 7 08:07:37.672: INFO: created pod pod-service-account-defaultsa-mountspec +Sep 7 08:07:37.672: INFO: pod pod-service-account-defaultsa-mountspec service account token volume mount: true +Sep 7 08:07:37.725: INFO: created pod pod-service-account-mountsa-mountspec +Sep 7 08:07:37.725: INFO: pod pod-service-account-mountsa-mountspec service account token volume mount: true +Sep 7 08:07:37.762: INFO: created pod pod-service-account-nomountsa-mountspec +Sep 7 08:07:37.762: INFO: pod pod-service-account-nomountsa-mountspec service account token volume mount: true +Sep 7 08:07:37.803: INFO: created pod pod-service-account-defaultsa-nomountspec +Sep 7 08:07:37.803: INFO: pod pod-service-account-defaultsa-nomountspec service account token volume mount: false +Sep 7 08:07:37.836: INFO: created pod pod-service-account-mountsa-nomountspec +Sep 7 08:07:37.836: INFO: pod pod-service-account-mountsa-nomountspec service account token volume mount: false +Sep 7 08:07:37.924: INFO: created pod pod-service-account-nomountsa-nomountspec +Sep 7 08:07:37.924: INFO: pod pod-service-account-nomountsa-nomountspec service account token volume mount: false +[AfterEach] [sig-auth] ServiceAccounts + test/e2e/framework/framework.go:188 +Sep 7 08:07:37.924: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "svcaccounts-6810" for this suite. 
+•{"msg":"PASSED [sig-auth] ServiceAccounts should allow opting out of API token automount [Conformance]","total":356,"completed":116,"skipped":2175,"failed":0} +S +------------------------------ +[sig-api-machinery] Garbage collector + should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] + test/e2e/framework/framework.go:652 +[BeforeEach] [sig-api-machinery] Garbage collector + test/e2e/framework/framework.go:187 +STEP: Creating a kubernetes client +Sep 7 08:07:38.079: INFO: >>> kubeConfig: /tmp/kubeconfig-1156948534 +STEP: Building a namespace api object, basename gc +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[It] should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] + test/e2e/framework/framework.go:652 +STEP: create the deployment +STEP: Wait for the Deployment to create new ReplicaSet +STEP: delete the deployment +STEP: wait for deployment deletion to see if the garbage collector mistakenly deletes the rs +STEP: Gathering metrics +Sep 7 08:07:39.255: INFO: For apiserver_request_total: +For apiserver_request_latency_seconds: +For apiserver_init_events_total: +For garbage_collector_attempt_to_delete_queue_latency: +For garbage_collector_attempt_to_delete_work_duration: +For garbage_collector_attempt_to_orphan_queue_latency: +For garbage_collector_attempt_to_orphan_work_duration: +For garbage_collector_dirty_processing_latency_microseconds: +For garbage_collector_event_processing_latency_microseconds: +For garbage_collector_graph_changes_queue_latency: +For garbage_collector_graph_changes_work_duration: +For garbage_collector_orphan_processing_latency_microseconds: +For namespace_queue_latency: +For namespace_queue_latency_sum: +For namespace_queue_latency_count: +For namespace_retries: +For namespace_work_duration: +For namespace_work_duration_sum: +For 
namespace_work_duration_count: +For function_duration_seconds: +For errors_total: +For evicted_pods_total: + +[AfterEach] [sig-api-machinery] Garbage collector + test/e2e/framework/framework.go:188 +Sep 7 08:07:39.255: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +W0907 08:07:39.255746 19 metrics_grabber.go:151] Can't find kube-controller-manager pod. Grabbing metrics from kube-controller-manager is disabled. +STEP: Destroying namespace "gc-766" for this suite. +•{"msg":"PASSED [sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]","total":356,"completed":117,"skipped":2176,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-node] Kubelet when scheduling a busybox command in a pod + should print the output to logs [NodeConformance] [Conformance] + test/e2e/framework/framework.go:652 +[BeforeEach] [sig-node] Kubelet + test/e2e/framework/framework.go:187 +STEP: Creating a kubernetes client +Sep 7 08:07:39.272: INFO: >>> kubeConfig: /tmp/kubeconfig-1156948534 +STEP: Building a namespace api object, basename kubelet-test +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[BeforeEach] [sig-node] Kubelet + test/e2e/common/node/kubelet.go:40 +[It] should print the output to logs [NodeConformance] [Conformance] + test/e2e/framework/framework.go:652 +Sep 7 08:07:39.401: INFO: The status of Pod busybox-scheduling-d7169980-9c83-4e8e-80a0-ab34fe713a2f is Pending, waiting for it to be Running (with Ready = true) +Sep 7 08:07:41.431: INFO: The status of Pod busybox-scheduling-d7169980-9c83-4e8e-80a0-ab34fe713a2f is Running (Ready = true) +[AfterEach] [sig-node] Kubelet + test/e2e/framework/framework.go:188 +Sep 7 08:07:41.458: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "kubelet-test-1077" for this suite. 
+•{"msg":"PASSED [sig-node] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance]","total":356,"completed":118,"skipped":2216,"failed":0} +S +------------------------------ +[sig-storage] Subpath Atomic writer volumes + should support subpaths with configmap pod [Conformance] + test/e2e/framework/framework.go:652 +[BeforeEach] [sig-storage] Subpath + test/e2e/framework/framework.go:187 +STEP: Creating a kubernetes client +Sep 7 08:07:41.477: INFO: >>> kubeConfig: /tmp/kubeconfig-1156948534 +STEP: Building a namespace api object, basename subpath +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[BeforeEach] Atomic writer volumes + test/e2e/storage/subpath.go:40 +STEP: Setting up data +[It] should support subpaths with configmap pod [Conformance] + test/e2e/framework/framework.go:652 +STEP: Creating pod pod-subpath-test-configmap-rn6t +STEP: Creating a pod to test atomic-volume-subpath +Sep 7 08:07:41.771: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-rn6t" in namespace "subpath-3162" to be "Succeeded or Failed" +Sep 7 08:07:41.778: INFO: Pod "pod-subpath-test-configmap-rn6t": Phase="Pending", Reason="", readiness=false. Elapsed: 7.056429ms +Sep 7 08:07:43.823: INFO: Pod "pod-subpath-test-configmap-rn6t": Phase="Pending", Reason="", readiness=false. Elapsed: 2.052515192s +Sep 7 08:07:45.845: INFO: Pod "pod-subpath-test-configmap-rn6t": Phase="Running", Reason="", readiness=true. Elapsed: 4.074528067s +Sep 7 08:07:47.856: INFO: Pod "pod-subpath-test-configmap-rn6t": Phase="Running", Reason="", readiness=true. Elapsed: 6.085537258s +Sep 7 08:07:49.861: INFO: Pod "pod-subpath-test-configmap-rn6t": Phase="Running", Reason="", readiness=true. Elapsed: 8.089800883s +Sep 7 08:07:51.869: INFO: Pod "pod-subpath-test-configmap-rn6t": Phase="Running", Reason="", readiness=true. 
Elapsed: 10.097869007s +Sep 7 08:07:53.876: INFO: Pod "pod-subpath-test-configmap-rn6t": Phase="Running", Reason="", readiness=true. Elapsed: 12.105376315s +Sep 7 08:07:55.910: INFO: Pod "pod-subpath-test-configmap-rn6t": Phase="Running", Reason="", readiness=true. Elapsed: 14.139529664s +Sep 7 08:07:57.925: INFO: Pod "pod-subpath-test-configmap-rn6t": Phase="Running", Reason="", readiness=true. Elapsed: 16.154021311s +Sep 7 08:07:59.932: INFO: Pod "pod-subpath-test-configmap-rn6t": Phase="Running", Reason="", readiness=true. Elapsed: 18.161468609s +Sep 7 08:08:01.942: INFO: Pod "pod-subpath-test-configmap-rn6t": Phase="Running", Reason="", readiness=true. Elapsed: 20.170876916s +Sep 7 08:08:03.955: INFO: Pod "pod-subpath-test-configmap-rn6t": Phase="Running", Reason="", readiness=true. Elapsed: 22.184339344s +Sep 7 08:08:05.969: INFO: Pod "pod-subpath-test-configmap-rn6t": Phase="Running", Reason="", readiness=true. Elapsed: 24.197964736s +Sep 7 08:08:07.981: INFO: Pod "pod-subpath-test-configmap-rn6t": Phase="Running", Reason="", readiness=false. Elapsed: 26.21020109s +Sep 7 08:08:09.989: INFO: Pod "pod-subpath-test-configmap-rn6t": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 28.218464011s +STEP: Saw pod success +Sep 7 08:08:09.989: INFO: Pod "pod-subpath-test-configmap-rn6t" satisfied condition "Succeeded or Failed" +Sep 7 08:08:09.993: INFO: Trying to get logs from node 172.31.51.96 pod pod-subpath-test-configmap-rn6t container test-container-subpath-configmap-rn6t: +STEP: delete the pod +Sep 7 08:08:10.012: INFO: Waiting for pod pod-subpath-test-configmap-rn6t to disappear +Sep 7 08:08:10.016: INFO: Pod pod-subpath-test-configmap-rn6t no longer exists +STEP: Deleting pod pod-subpath-test-configmap-rn6t +Sep 7 08:08:10.016: INFO: Deleting pod "pod-subpath-test-configmap-rn6t" in namespace "subpath-3162" +[AfterEach] [sig-storage] Subpath + test/e2e/framework/framework.go:188 +Sep 7 08:08:10.019: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "subpath-3162" for this suite. + +• [SLOW TEST:28.552 seconds] +[sig-storage] Subpath +test/e2e/storage/utils/framework.go:23 + Atomic writer volumes + test/e2e/storage/subpath.go:36 + should support subpaths with configmap pod [Conformance] + test/e2e/framework/framework.go:652 +------------------------------ +{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod [Conformance]","total":356,"completed":119,"skipped":2217,"failed":0} +SSSSSSSSSSSS +------------------------------ +[sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] + should perform rolling updates and roll backs of template modifications [Conformance] + test/e2e/framework/framework.go:652 +[BeforeEach] [sig-apps] StatefulSet + test/e2e/framework/framework.go:187 +STEP: Creating a kubernetes client +Sep 7 08:08:10.029: INFO: >>> kubeConfig: /tmp/kubeconfig-1156948534 +STEP: Building a namespace api object, basename statefulset +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[BeforeEach] [sig-apps] StatefulSet + 
test/e2e/apps/statefulset.go:96 +[BeforeEach] Basic StatefulSet functionality [StatefulSetBasic] + test/e2e/apps/statefulset.go:111 +STEP: Creating service test in namespace statefulset-1408 +[It] should perform rolling updates and roll backs of template modifications [Conformance] + test/e2e/framework/framework.go:652 +STEP: Creating a new StatefulSet +Sep 7 08:08:10.090: INFO: Found 0 stateful pods, waiting for 3 +Sep 7 08:08:20.097: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true +Sep 7 08:08:20.097: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true +Sep 7 08:08:20.097: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true +Sep 7 08:08:20.106: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1156948534 --namespace=statefulset-1408 exec ss2-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' +Sep 7 08:08:20.284: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n" +Sep 7 08:08:20.284: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" +Sep 7 08:08:20.284: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss2-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' + +STEP: Updating StatefulSet template: update image from k8s.gcr.io/e2e-test-images/httpd:2.4.38-2 to k8s.gcr.io/e2e-test-images/httpd:2.4.39-2 +Sep 7 08:08:30.346: INFO: Updating stateful set ss2 +STEP: Creating a new revision +STEP: Updating Pods in reverse ordinal order +Sep 7 08:08:40.389: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1156948534 --namespace=statefulset-1408 exec ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' +Sep 7 08:08:40.549: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\n" +Sep 7 08:08:40.549: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" 
+Sep 7 08:08:40.549: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss2-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' + +Sep 7 08:08:50.607: INFO: Waiting for StatefulSet statefulset-1408/ss2 to complete update +Sep 7 08:08:50.607: INFO: Waiting for Pod statefulset-1408/ss2-0 to have revision ss2-5f8764d585 update revision ss2-57bbdd95cb +Sep 7 08:08:50.607: INFO: Waiting for Pod statefulset-1408/ss2-1 to have revision ss2-5f8764d585 update revision ss2-57bbdd95cb +Sep 7 08:09:00.620: INFO: Waiting for StatefulSet statefulset-1408/ss2 to complete update +Sep 7 08:09:00.620: INFO: Waiting for Pod statefulset-1408/ss2-0 to have revision ss2-5f8764d585 update revision ss2-57bbdd95cb +STEP: Rolling back to a previous revision +Sep 7 08:09:10.620: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1156948534 --namespace=statefulset-1408 exec ss2-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' +Sep 7 08:09:10.788: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n" +Sep 7 08:09:10.788: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" +Sep 7 08:09:10.788: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss2-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' + +Sep 7 08:09:20.830: INFO: Updating stateful set ss2 +STEP: Rolling back update in reverse ordinal order +Sep 7 08:09:30.858: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1156948534 --namespace=statefulset-1408 exec ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' +Sep 7 08:09:31.048: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\n" +Sep 7 08:09:31.048: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" +Sep 7 08:09:31.048: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss2-1: '/tmp/index.html' -> 
'/usr/local/apache2/htdocs/index.html' + +[AfterEach] Basic StatefulSet functionality [StatefulSetBasic] + test/e2e/apps/statefulset.go:122 +Sep 7 08:09:41.066: INFO: Deleting all statefulset in ns statefulset-1408 +Sep 7 08:09:41.069: INFO: Scaling statefulset ss2 to 0 +Sep 7 08:09:51.099: INFO: Waiting for statefulset status.replicas updated to 0 +Sep 7 08:09:51.103: INFO: Deleting statefulset ss2 +[AfterEach] [sig-apps] StatefulSet + test/e2e/framework/framework.go:188 +Sep 7 08:09:51.141: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "statefulset-1408" for this suite. + +• [SLOW TEST:101.139 seconds] +[sig-apps] StatefulSet +test/e2e/apps/framework.go:23 + Basic StatefulSet functionality [StatefulSetBasic] + test/e2e/apps/statefulset.go:101 + should perform rolling updates and roll backs of template modifications [Conformance] + test/e2e/framework/framework.go:652 +------------------------------ +{"msg":"PASSED [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance]","total":356,"completed":120,"skipped":2229,"failed":0} +SSSSSSSSSSS +------------------------------ +[sig-api-machinery] ResourceQuota + should be able to update and delete ResourceQuota. [Conformance] + test/e2e/framework/framework.go:652 +[BeforeEach] [sig-api-machinery] ResourceQuota + test/e2e/framework/framework.go:187 +STEP: Creating a kubernetes client +Sep 7 08:09:51.168: INFO: >>> kubeConfig: /tmp/kubeconfig-1156948534 +STEP: Building a namespace api object, basename resourcequota +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[It] should be able to update and delete ResourceQuota. 
[Conformance] + test/e2e/framework/framework.go:652 +STEP: Creating a ResourceQuota +STEP: Getting a ResourceQuota +STEP: Updating a ResourceQuota +STEP: Verifying a ResourceQuota was modified +STEP: Deleting a ResourceQuota +STEP: Verifying the deleted ResourceQuota +[AfterEach] [sig-api-machinery] ResourceQuota + test/e2e/framework/framework.go:188 +Sep 7 08:09:51.225: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "resourcequota-4697" for this suite. +•{"msg":"PASSED [sig-api-machinery] ResourceQuota should be able to update and delete ResourceQuota. [Conformance]","total":356,"completed":121,"skipped":2240,"failed":0} +SSSSSSSSSSS +------------------------------ +[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + removes definition from spec when one version gets changed to not be served [Conformance] + test/e2e/framework/framework.go:652 +[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + test/e2e/framework/framework.go:187 +STEP: Creating a kubernetes client +Sep 7 08:09:51.236: INFO: >>> kubeConfig: /tmp/kubeconfig-1156948534 +STEP: Building a namespace api object, basename crd-publish-openapi +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[It] removes definition from spec when one version gets changed to not be served [Conformance] + test/e2e/framework/framework.go:652 +STEP: set up a multi version CRD +Sep 7 08:09:51.269: INFO: >>> kubeConfig: /tmp/kubeconfig-1156948534 +STEP: mark a version not serverd +STEP: check the unserved version gets removed +STEP: check the other version is not changed +[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + test/e2e/framework/framework.go:188 +Sep 7 08:10:14.108: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "crd-publish-openapi-2525" for 
this suite. + +• [SLOW TEST:22.888 seconds] +[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] +test/e2e/apimachinery/framework.go:23 + removes definition from spec when one version gets changed to not be served [Conformance] + test/e2e/framework/framework.go:652 +------------------------------ +{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] removes definition from spec when one version gets changed to not be served [Conformance]","total":356,"completed":122,"skipped":2251,"failed":0} +[sig-node] RuntimeClass + should schedule a Pod requesting a RuntimeClass and initialize its Overhead [NodeConformance] [Conformance] + test/e2e/framework/framework.go:652 +[BeforeEach] [sig-node] RuntimeClass + test/e2e/framework/framework.go:187 +STEP: Creating a kubernetes client +Sep 7 08:10:14.124: INFO: >>> kubeConfig: /tmp/kubeconfig-1156948534 +STEP: Building a namespace api object, basename runtimeclass +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[It] should schedule a Pod requesting a RuntimeClass and initialize its Overhead [NodeConformance] [Conformance] + test/e2e/framework/framework.go:652 +[AfterEach] [sig-node] RuntimeClass + test/e2e/framework/framework.go:188 +Sep 7 08:10:14.227: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "runtimeclass-6447" for this suite. 
+•{"msg":"PASSED [sig-node] RuntimeClass should schedule a Pod requesting a RuntimeClass and initialize its Overhead [NodeConformance] [Conformance]","total":356,"completed":123,"skipped":2251,"failed":0} +SSSSSSSSS +------------------------------ +[sig-cli] Kubectl client Kubectl replace + should update a single-container pod's image [Conformance] + test/e2e/framework/framework.go:652 +[BeforeEach] [sig-cli] Kubectl client + test/e2e/framework/framework.go:187 +STEP: Creating a kubernetes client +Sep 7 08:10:14.243: INFO: >>> kubeConfig: /tmp/kubeconfig-1156948534 +STEP: Building a namespace api object, basename kubectl +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[BeforeEach] [sig-cli] Kubectl client + test/e2e/kubectl/kubectl.go:245 +[BeforeEach] Kubectl replace + test/e2e/kubectl/kubectl.go:1574 +[It] should update a single-container pod's image [Conformance] + test/e2e/framework/framework.go:652 +STEP: running the image k8s.gcr.io/e2e-test-images/httpd:2.4.38-2 +Sep 7 08:10:14.283: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1156948534 --namespace=kubectl-340 run e2e-test-httpd-pod --image=k8s.gcr.io/e2e-test-images/httpd:2.4.38-2 --pod-running-timeout=2m0s --labels=run=e2e-test-httpd-pod' +Sep 7 08:10:14.418: INFO: stderr: "" +Sep 7 08:10:14.418: INFO: stdout: "pod/e2e-test-httpd-pod created\n" +STEP: verifying the pod e2e-test-httpd-pod is running +STEP: verifying the pod e2e-test-httpd-pod was created +Sep 7 08:10:19.470: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1156948534 --namespace=kubectl-340 get pod e2e-test-httpd-pod -o json' +Sep 7 08:10:19.630: INFO: stderr: "" +Sep 7 08:10:19.630: INFO: stdout: "{\n \"apiVersion\": \"v1\",\n \"kind\": \"Pod\",\n \"metadata\": {\n \"creationTimestamp\": \"2022-09-07T08:10:14Z\",\n \"labels\": {\n \"run\": \"e2e-test-httpd-pod\"\n },\n \"name\": \"e2e-test-httpd-pod\",\n 
\"namespace\": \"kubectl-340\",\n \"resourceVersion\": \"13360\",\n \"uid\": \"e40bea25-4848-47b8-b8e2-ecb16b1df9ab\"\n },\n \"spec\": {\n \"containers\": [\n {\n \"image\": \"k8s.gcr.io/e2e-test-images/httpd:2.4.38-2\",\n \"imagePullPolicy\": \"IfNotPresent\",\n \"name\": \"e2e-test-httpd-pod\",\n \"resources\": {},\n \"terminationMessagePath\": \"/dev/termination-log\",\n \"terminationMessagePolicy\": \"File\",\n \"volumeMounts\": [\n {\n \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\",\n \"name\": \"kube-api-access-pht7v\",\n \"readOnly\": true\n }\n ]\n }\n ],\n \"dnsPolicy\": \"ClusterFirst\",\n \"enableServiceLinks\": true,\n \"nodeName\": \"172.31.51.96\",\n \"preemptionPolicy\": \"PreemptLowerPriority\",\n \"priority\": 0,\n \"restartPolicy\": \"Always\",\n \"schedulerName\": \"default-scheduler\",\n \"securityContext\": {},\n \"serviceAccount\": \"default\",\n \"serviceAccountName\": \"default\",\n \"terminationGracePeriodSeconds\": 30,\n \"tolerations\": [\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/not-ready\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n },\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/unreachable\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n }\n ],\n \"volumes\": [\n {\n \"name\": \"kube-api-access-pht7v\",\n \"projected\": {\n \"defaultMode\": 420,\n \"sources\": [\n {\n \"serviceAccountToken\": {\n \"expirationSeconds\": 3607,\n \"path\": \"token\"\n }\n },\n {\n \"configMap\": {\n \"items\": [\n {\n \"key\": \"ca.crt\",\n \"path\": \"ca.crt\"\n }\n ],\n \"name\": \"kube-root-ca.crt\"\n }\n },\n {\n \"downwardAPI\": {\n \"items\": [\n {\n \"fieldRef\": {\n \"apiVersion\": \"v1\",\n \"fieldPath\": \"metadata.namespace\"\n },\n \"path\": \"namespace\"\n }\n ]\n }\n }\n ]\n }\n }\n ]\n },\n \"status\": {\n \"conditions\": [\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2022-09-07T08:10:14Z\",\n \"status\": \"True\",\n \"type\": 
\"Initialized\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2022-09-07T08:10:15Z\",\n \"status\": \"True\",\n \"type\": \"Ready\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2022-09-07T08:10:15Z\",\n \"status\": \"True\",\n \"type\": \"ContainersReady\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2022-09-07T08:10:14Z\",\n \"status\": \"True\",\n \"type\": \"PodScheduled\"\n }\n ],\n \"containerStatuses\": [\n {\n \"containerID\": \"containerd://5764ce84bc181cc745f1d94b0bb86fa373050c65f5ef305a109717df2977f66a\",\n \"image\": \"k8s.gcr.io/e2e-test-images/httpd:2.4.38-2\",\n \"imageID\": \"k8s.gcr.io/e2e-test-images/httpd@sha256:1b9d1b2f36cb2dbee1960e82a9344aeb11bd4c4c03abf5e1853e0559c23855e3\",\n \"lastState\": {},\n \"name\": \"e2e-test-httpd-pod\",\n \"ready\": true,\n \"restartCount\": 0,\n \"started\": true,\n \"state\": {\n \"running\": {\n \"startedAt\": \"2022-09-07T08:10:15Z\"\n }\n }\n }\n ],\n \"hostIP\": \"172.31.51.96\",\n \"phase\": \"Running\",\n \"podIP\": \"172.20.75.27\",\n \"podIPs\": [\n {\n \"ip\": \"172.20.75.27\"\n }\n ],\n \"qosClass\": \"BestEffort\",\n \"startTime\": \"2022-09-07T08:10:14Z\"\n }\n}\n" +STEP: replace the image in the pod +Sep 7 08:10:19.630: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1156948534 --namespace=kubectl-340 replace -f -' +Sep 7 08:10:20.956: INFO: stderr: "" +Sep 7 08:10:20.956: INFO: stdout: "pod/e2e-test-httpd-pod replaced\n" +STEP: verifying the pod e2e-test-httpd-pod has the right image k8s.gcr.io/e2e-test-images/busybox:1.29-2 +[AfterEach] Kubectl replace + test/e2e/kubectl/kubectl.go:1578 +Sep 7 08:10:20.967: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1156948534 --namespace=kubectl-340 delete pods e2e-test-httpd-pod' +Sep 7 08:10:22.844: INFO: stderr: "" +Sep 7 08:10:22.844: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n" +[AfterEach] [sig-cli] Kubectl client + test/e2e/framework/framework.go:188 
+Sep 7 08:10:22.844: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "kubectl-340" for this suite. + +• [SLOW TEST:8.617 seconds] +[sig-cli] Kubectl client +test/e2e/kubectl/framework.go:23 + Kubectl replace + test/e2e/kubectl/kubectl.go:1571 + should update a single-container pod's image [Conformance] + test/e2e/framework/framework.go:652 +------------------------------ +{"msg":"PASSED [sig-cli] Kubectl client Kubectl replace should update a single-container pod's image [Conformance]","total":356,"completed":124,"skipped":2260,"failed":0} +S +------------------------------ +[sig-storage] EmptyDir volumes + should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/framework/framework.go:652 +[BeforeEach] [sig-storage] EmptyDir volumes + test/e2e/framework/framework.go:187 +STEP: Creating a kubernetes client +Sep 7 08:10:22.860: INFO: >>> kubeConfig: /tmp/kubeconfig-1156948534 +STEP: Building a namespace api object, basename emptydir +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[It] should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/framework/framework.go:652 +STEP: Creating a pod to test emptydir 0777 on tmpfs +Sep 7 08:10:22.924: INFO: Waiting up to 5m0s for pod "pod-0ffa05f6-e02c-4fc4-8ac2-fbb36a912e0f" in namespace "emptydir-7434" to be "Succeeded or Failed" +Sep 7 08:10:22.950: INFO: Pod "pod-0ffa05f6-e02c-4fc4-8ac2-fbb36a912e0f": Phase="Pending", Reason="", readiness=false. Elapsed: 26.237121ms +Sep 7 08:10:24.974: INFO: Pod "pod-0ffa05f6-e02c-4fc4-8ac2-fbb36a912e0f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.050294813s +Sep 7 08:10:26.986: INFO: Pod "pod-0ffa05f6-e02c-4fc4-8ac2-fbb36a912e0f": Phase="Pending", Reason="", readiness=false. 
Elapsed: 4.062300179s +Sep 7 08:10:28.993: INFO: Pod "pod-0ffa05f6-e02c-4fc4-8ac2-fbb36a912e0f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.068855416s +STEP: Saw pod success +Sep 7 08:10:28.993: INFO: Pod "pod-0ffa05f6-e02c-4fc4-8ac2-fbb36a912e0f" satisfied condition "Succeeded or Failed" +Sep 7 08:10:28.996: INFO: Trying to get logs from node 172.31.51.96 pod pod-0ffa05f6-e02c-4fc4-8ac2-fbb36a912e0f container test-container: +STEP: delete the pod +Sep 7 08:10:29.029: INFO: Waiting for pod pod-0ffa05f6-e02c-4fc4-8ac2-fbb36a912e0f to disappear +Sep 7 08:10:29.033: INFO: Pod pod-0ffa05f6-e02c-4fc4-8ac2-fbb36a912e0f no longer exists +[AfterEach] [sig-storage] EmptyDir volumes + test/e2e/framework/framework.go:188 +Sep 7 08:10:29.033: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "emptydir-7434" for this suite. + +• [SLOW TEST:6.182 seconds] +[sig-storage] EmptyDir volumes +test/e2e/common/storage/framework.go:23 + should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/framework/framework.go:652 +------------------------------ +{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":356,"completed":125,"skipped":2261,"failed":0} +S +------------------------------ +[sig-node] Pods + should run through the lifecycle of Pods and PodStatus [Conformance] + test/e2e/framework/framework.go:652 +[BeforeEach] [sig-node] Pods + test/e2e/framework/framework.go:187 +STEP: Creating a kubernetes client +Sep 7 08:10:29.042: INFO: >>> kubeConfig: /tmp/kubeconfig-1156948534 +STEP: Building a namespace api object, basename pods +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[BeforeEach] [sig-node] Pods + test/e2e/common/node/pods.go:191 +[It] should run through the lifecycle of Pods and PodStatus [Conformance] + 
test/e2e/framework/framework.go:652 +STEP: creating a Pod with a static label +STEP: watching for Pod to be ready +Sep 7 08:10:29.095: INFO: observed Pod pod-test in namespace pods-7012 in phase Pending with labels: map[test-pod-static:true] & conditions [] +Sep 7 08:10:29.096: INFO: observed Pod pod-test in namespace pods-7012 in phase Pending with labels: map[test-pod-static:true] & conditions [{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-09-07 08:10:29 +0000 UTC }] +Sep 7 08:10:29.116: INFO: observed Pod pod-test in namespace pods-7012 in phase Pending with labels: map[test-pod-static:true] & conditions [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-09-07 08:10:29 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-09-07 08:10:29 +0000 UTC ContainersNotReady containers with unready status: [pod-test]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-09-07 08:10:29 +0000 UTC ContainersNotReady containers with unready status: [pod-test]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-09-07 08:10:29 +0000 UTC }] +Sep 7 08:10:30.841: INFO: Found Pod pod-test in namespace pods-7012 in phase Running with labels: map[test-pod-static:true] & conditions [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-09-07 08:10:29 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2022-09-07 08:10:30 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-09-07 08:10:30 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-09-07 08:10:29 +0000 UTC }] +STEP: patching the Pod with a new Label and updated data +STEP: getting the Pod and ensuring that it's patched +STEP: replacing the Pod's status Ready condition to False +STEP: check the Pod again to ensure its Ready conditions are False +STEP: deleting the Pod via a Collection with a LabelSelector +STEP: watching for the Pod to be deleted +Sep 7 08:10:30.903: INFO: observed event type MODIFIED +Sep 7 08:10:32.870: INFO: observed event type MODIFIED +Sep 7 
08:10:33.855: INFO: observed event type MODIFIED +Sep 7 08:10:33.865: INFO: observed event type MODIFIED +[AfterEach] [sig-node] Pods + test/e2e/framework/framework.go:188 +Sep 7 08:10:33.886: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "pods-7012" for this suite. +•{"msg":"PASSED [sig-node] Pods should run through the lifecycle of Pods and PodStatus [Conformance]","total":356,"completed":126,"skipped":2262,"failed":0} +SSSSSSSSS +------------------------------ +[sig-network] Services + should complete a service status lifecycle [Conformance] + test/e2e/framework/framework.go:652 +[BeforeEach] [sig-network] Services + test/e2e/framework/framework.go:187 +STEP: Creating a kubernetes client +Sep 7 08:10:33.920: INFO: >>> kubeConfig: /tmp/kubeconfig-1156948534 +STEP: Building a namespace api object, basename services +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[BeforeEach] [sig-network] Services + test/e2e/network/service.go:758 +[It] should complete a service status lifecycle [Conformance] + test/e2e/framework/framework.go:652 +STEP: creating a Service +STEP: watching for the Service to be added +Sep 7 08:10:33.996: INFO: Found Service test-service-m94w4 in namespace services-2212 with labels: map[test-service-static:true] & ports [{http TCP 80 {0 80 } 0}] +Sep 7 08:10:33.996: INFO: Service test-service-m94w4 created +STEP: Getting /status +Sep 7 08:10:34.000: INFO: Service test-service-m94w4 has LoadBalancer: {[]} +STEP: patching the ServiceStatus +STEP: watching for the Service to be patched +Sep 7 08:10:34.009: INFO: observed Service test-service-m94w4 in namespace services-2212 with annotations: map[] & LoadBalancer: {[]} +Sep 7 08:10:34.009: INFO: Found Service test-service-m94w4 in namespace services-2212 with annotations: map[patchedstatus:true] & LoadBalancer: {[{203.0.113.1 []}]} +Sep 7 08:10:34.009: INFO: Service 
test-service-m94w4 has service status patched +STEP: updating the ServiceStatus +Sep 7 08:10:34.017: INFO: updatedStatus.Conditions: []v1.Condition{v1.Condition{Type:"StatusUpdate", Status:"True", ObservedGeneration:0, LastTransitionTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Reason:"E2E", Message:"Set from e2e test"}} +STEP: watching for the Service to be updated +Sep 7 08:10:34.020: INFO: Observed Service test-service-m94w4 in namespace services-2212 with annotations: map[] & Conditions: {[]} +Sep 7 08:10:34.020: INFO: Observed event: &Service{ObjectMeta:{test-service-m94w4 services-2212 54e97974-636c-4486-b47a-31f4f69a5481 13489 0 2022-09-07 08:10:33 +0000 UTC map[test-service-static:true] map[patchedstatus:true] [] [] [{e2e.test Update v1 2022-09-07 08:10:33 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:test-service-static":{}}},"f:spec":{"f:internalTrafficPolicy":{},"f:ports":{".":{},"k:{\"port\":80,\"protocol\":\"TCP\"}":{".":{},"f:name":{},"f:port":{},"f:protocol":{},"f:targetPort":{}}},"f:sessionAffinity":{},"f:type":{}}} } {e2e.test Update v1 2022-09-07 08:10:34 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:patchedstatus":{}}},"f:status":{"f:loadBalancer":{"f:ingress":{}}}} status}]},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:http,Protocol:TCP,Port:80,TargetPort:{0 80 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{},ClusterIP:10.68.110.150,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.68.110.150],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{LoadBalancerIngress{IP:203.0.113.1,Hostname:,Ports:[]PortStatus{},},},},Conditions:[]Condition{},},} +Sep 7 
08:10:34.020: INFO: Found Service test-service-m94w4 in namespace services-2212 with annotations: map[patchedstatus:true] & Conditions: [{StatusUpdate True 0 0001-01-01 00:00:00 +0000 UTC E2E Set from e2e test}] +Sep 7 08:10:34.020: INFO: Service test-service-m94w4 has service status updated +STEP: patching the service +STEP: watching for the Service to be patched +Sep 7 08:10:34.057: INFO: observed Service test-service-m94w4 in namespace services-2212 with labels: map[test-service-static:true] +Sep 7 08:10:34.057: INFO: observed Service test-service-m94w4 in namespace services-2212 with labels: map[test-service-static:true] +Sep 7 08:10:34.057: INFO: observed Service test-service-m94w4 in namespace services-2212 with labels: map[test-service-static:true] +Sep 7 08:10:34.057: INFO: Found Service test-service-m94w4 in namespace services-2212 with labels: map[test-service:patched test-service-static:true] +Sep 7 08:10:34.057: INFO: Service test-service-m94w4 patched +STEP: deleting the service +STEP: watching for the Service to be deleted +Sep 7 08:10:34.117: INFO: Observed event: ADDED +Sep 7 08:10:34.117: INFO: Observed event: MODIFIED +Sep 7 08:10:34.117: INFO: Observed event: MODIFIED +Sep 7 08:10:34.117: INFO: Observed event: MODIFIED +Sep 7 08:10:34.117: INFO: Found Service test-service-m94w4 in namespace services-2212 with labels: map[test-service:patched test-service-static:true] & annotations: map[patchedstatus:true] +Sep 7 08:10:34.117: INFO: Service test-service-m94w4 deleted +[AfterEach] [sig-network] Services + test/e2e/framework/framework.go:188 +Sep 7 08:10:34.117: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "services-2212" for this suite. 
+[AfterEach] [sig-network] Services + test/e2e/network/service.go:762 +•{"msg":"PASSED [sig-network] Services should complete a service status lifecycle [Conformance]","total":356,"completed":127,"skipped":2271,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-node] Variable Expansion + should fail substituting values in a volume subpath with backticks [Slow] [Conformance] + test/e2e/framework/framework.go:652 +[BeforeEach] [sig-node] Variable Expansion + test/e2e/framework/framework.go:187 +STEP: Creating a kubernetes client +Sep 7 08:10:34.147: INFO: >>> kubeConfig: /tmp/kubeconfig-1156948534 +STEP: Building a namespace api object, basename var-expansion +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[It] should fail substituting values in a volume subpath with backticks [Slow] [Conformance] + test/e2e/framework/framework.go:652 +Sep 7 08:10:36.243: INFO: Deleting pod "var-expansion-fda389ae-ec73-4082-947c-bc67fe1a7d65" in namespace "var-expansion-4527" +Sep 7 08:10:36.250: INFO: Wait up to 5m0s for pod "var-expansion-fda389ae-ec73-4082-947c-bc67fe1a7d65" to be fully deleted +[AfterEach] [sig-node] Variable Expansion + test/e2e/framework/framework.go:188 +Sep 7 08:10:38.268: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "var-expansion-4527" for this suite. 
+•{"msg":"PASSED [sig-node] Variable Expansion should fail substituting values in a volume subpath with backticks [Slow] [Conformance]","total":356,"completed":128,"skipped":2300,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-apps] ReplicationController + should surface a failure condition on a common issue like exceeded quota [Conformance] + test/e2e/framework/framework.go:652 +[BeforeEach] [sig-apps] ReplicationController + test/e2e/framework/framework.go:187 +STEP: Creating a kubernetes client +Sep 7 08:10:38.277: INFO: >>> kubeConfig: /tmp/kubeconfig-1156948534 +STEP: Building a namespace api object, basename replication-controller +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[BeforeEach] [sig-apps] ReplicationController + test/e2e/apps/rc.go:56 +[It] should surface a failure condition on a common issue like exceeded quota [Conformance] + test/e2e/framework/framework.go:652 +Sep 7 08:10:38.324: INFO: Creating quota "condition-test" that allows only two pods to run in the current namespace +STEP: Creating rc "condition-test" that asks for more than the allowed pod quota +STEP: Checking rc "condition-test" has the desired failure condition set +STEP: Scaling down rc "condition-test" to satisfy pod quota +Sep 7 08:10:40.397: INFO: Updating replication controller "condition-test" +STEP: Checking rc "condition-test" has no failure condition set +[AfterEach] [sig-apps] ReplicationController + test/e2e/framework/framework.go:188 +Sep 7 08:10:41.411: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "replication-controller-1638" for this suite. 
+•{"msg":"PASSED [sig-apps] ReplicationController should surface a failure condition on a common issue like exceeded quota [Conformance]","total":356,"completed":129,"skipped":2341,"failed":0} +SSS +------------------------------ +[sig-api-machinery] ResourceQuota + should create a ResourceQuota and capture the life of a service. [Conformance] + test/e2e/framework/framework.go:652 +[BeforeEach] [sig-api-machinery] ResourceQuota + test/e2e/framework/framework.go:187 +STEP: Creating a kubernetes client +Sep 7 08:10:41.420: INFO: >>> kubeConfig: /tmp/kubeconfig-1156948534 +STEP: Building a namespace api object, basename resourcequota +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[It] should create a ResourceQuota and capture the life of a service. [Conformance] + test/e2e/framework/framework.go:652 +STEP: Counting existing ResourceQuota +STEP: Creating a ResourceQuota +STEP: Ensuring resource quota status is calculated +STEP: Creating a Service +STEP: Creating a NodePort Service +STEP: Not allowing a LoadBalancer Service with NodePort to be created that exceeds remaining quota +STEP: Ensuring resource quota status captures service creation +STEP: Deleting Services +STEP: Ensuring resource quota status released usage +[AfterEach] [sig-api-machinery] ResourceQuota + test/e2e/framework/framework.go:188 +Sep 7 08:10:52.707: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "resourcequota-4562" for this suite. + +• [SLOW TEST:11.295 seconds] +[sig-api-machinery] ResourceQuota +test/e2e/apimachinery/framework.go:23 + should create a ResourceQuota and capture the life of a service. [Conformance] + test/e2e/framework/framework.go:652 +------------------------------ +{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a service. 
[Conformance]","total":356,"completed":130,"skipped":2344,"failed":0} +SSS +------------------------------ +[sig-node] Secrets + should fail to create secret due to empty secret key [Conformance] + test/e2e/framework/framework.go:652 +[BeforeEach] [sig-node] Secrets + test/e2e/framework/framework.go:187 +STEP: Creating a kubernetes client +Sep 7 08:10:52.715: INFO: >>> kubeConfig: /tmp/kubeconfig-1156948534 +STEP: Building a namespace api object, basename secrets +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[It] should fail to create secret due to empty secret key [Conformance] + test/e2e/framework/framework.go:652 +STEP: Creating projection with secret that has name secret-emptykey-test-ef9e5922-701b-4933-9b5b-b11226be4a6d +[AfterEach] [sig-node] Secrets + test/e2e/framework/framework.go:188 +Sep 7 08:10:52.750: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "secrets-6522" for this suite. 
+•{"msg":"PASSED [sig-node] Secrets should fail to create secret due to empty secret key [Conformance]","total":356,"completed":131,"skipped":2347,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-auth] ServiceAccounts + should mount projected service account token [Conformance] + test/e2e/framework/framework.go:652 +[BeforeEach] [sig-auth] ServiceAccounts + test/e2e/framework/framework.go:187 +STEP: Creating a kubernetes client +Sep 7 08:10:52.757: INFO: >>> kubeConfig: /tmp/kubeconfig-1156948534 +STEP: Building a namespace api object, basename svcaccounts +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[It] should mount projected service account token [Conformance] + test/e2e/framework/framework.go:652 +STEP: Creating a pod to test service account token: +Sep 7 08:10:52.795: INFO: Waiting up to 5m0s for pod "test-pod-56245e66-615c-4c00-802a-c94b7c7beb24" in namespace "svcaccounts-8794" to be "Succeeded or Failed" +Sep 7 08:10:52.803: INFO: Pod "test-pod-56245e66-615c-4c00-802a-c94b7c7beb24": Phase="Pending", Reason="", readiness=false. Elapsed: 7.995719ms +Sep 7 08:10:54.818: INFO: Pod "test-pod-56245e66-615c-4c00-802a-c94b7c7beb24": Phase="Running", Reason="", readiness=true. Elapsed: 2.02236774s +Sep 7 08:10:56.827: INFO: Pod "test-pod-56245e66-615c-4c00-802a-c94b7c7beb24": Phase="Running", Reason="", readiness=false. Elapsed: 4.031675066s +Sep 7 08:10:58.840: INFO: Pod "test-pod-56245e66-615c-4c00-802a-c94b7c7beb24": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.044882105s +STEP: Saw pod success +Sep 7 08:10:58.840: INFO: Pod "test-pod-56245e66-615c-4c00-802a-c94b7c7beb24" satisfied condition "Succeeded or Failed" +Sep 7 08:10:58.845: INFO: Trying to get logs from node 172.31.51.96 pod test-pod-56245e66-615c-4c00-802a-c94b7c7beb24 container agnhost-container: +STEP: delete the pod +Sep 7 08:10:58.868: INFO: Waiting for pod test-pod-56245e66-615c-4c00-802a-c94b7c7beb24 to disappear +Sep 7 08:10:58.871: INFO: Pod test-pod-56245e66-615c-4c00-802a-c94b7c7beb24 no longer exists +[AfterEach] [sig-auth] ServiceAccounts + test/e2e/framework/framework.go:188 +Sep 7 08:10:58.871: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "svcaccounts-8794" for this suite. + +• [SLOW TEST:6.124 seconds] +[sig-auth] ServiceAccounts +test/e2e/auth/framework.go:23 + should mount projected service account token [Conformance] + test/e2e/framework/framework.go:652 +------------------------------ +{"msg":"PASSED [sig-auth] ServiceAccounts should mount projected service account token [Conformance]","total":356,"completed":132,"skipped":2390,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-network] Services + should serve multiport endpoints from pods [Conformance] + test/e2e/framework/framework.go:652 +[BeforeEach] [sig-network] Services + test/e2e/framework/framework.go:187 +STEP: Creating a kubernetes client +Sep 7 08:10:58.882: INFO: >>> kubeConfig: /tmp/kubeconfig-1156948534 +STEP: Building a namespace api object, basename services +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[BeforeEach] [sig-network] Services + test/e2e/network/service.go:758 +[It] should serve multiport endpoints from pods [Conformance] + test/e2e/framework/framework.go:652 +STEP: creating service multi-endpoint-test in namespace services-17 +STEP: waiting up 
to 3m0s for service multi-endpoint-test in namespace services-17 to expose endpoints map[] +Sep 7 08:10:58.940: INFO: successfully validated that service multi-endpoint-test in namespace services-17 exposes endpoints map[] +STEP: Creating pod pod1 in namespace services-17 +Sep 7 08:10:58.965: INFO: The status of Pod pod1 is Pending, waiting for it to be Running (with Ready = true) +Sep 7 08:11:01.000: INFO: The status of Pod pod1 is Pending, waiting for it to be Running (with Ready = true) +Sep 7 08:11:02.979: INFO: The status of Pod pod1 is Running (Ready = true) +STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-17 to expose endpoints map[pod1:[100]] +Sep 7 08:11:02.988: INFO: successfully validated that service multi-endpoint-test in namespace services-17 exposes endpoints map[pod1:[100]] +STEP: Creating pod pod2 in namespace services-17 +Sep 7 08:11:03.018: INFO: The status of Pod pod2 is Pending, waiting for it to be Running (with Ready = true) +Sep 7 08:11:05.031: INFO: The status of Pod pod2 is Running (Ready = true) +STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-17 to expose endpoints map[pod1:[100] pod2:[101]] +Sep 7 08:11:05.049: INFO: successfully validated that service multi-endpoint-test in namespace services-17 exposes endpoints map[pod1:[100] pod2:[101]] +STEP: Checking if the Service forwards traffic to pods +Sep 7 08:11:05.049: INFO: Creating new exec pod +Sep 7 08:11:08.109: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1156948534 --namespace=services-17 exec execpodj9z6p -- /bin/sh -x -c echo hostName | nc -v -t -w 2 multi-endpoint-test 80' +Sep 7 08:11:08.304: INFO: stderr: "+ nc -v -t -w 2 multi-endpoint-test 80\n+ echo hostName\nConnection to multi-endpoint-test 80 port [tcp/http] succeeded!\n" +Sep 7 08:11:08.304: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request" +Sep 7 08:11:08.304: 
INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1156948534 --namespace=services-17 exec execpodj9z6p -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.68.249.154 80' +Sep 7 08:11:08.508: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 10.68.249.154 80\nConnection to 10.68.249.154 80 port [tcp/http] succeeded!\n" +Sep 7 08:11:08.508: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request" +Sep 7 08:11:08.508: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1156948534 --namespace=services-17 exec execpodj9z6p -- /bin/sh -x -c echo hostName | nc -v -t -w 2 multi-endpoint-test 81' +Sep 7 08:11:08.690: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 multi-endpoint-test 81\nConnection to multi-endpoint-test 81 port [tcp/*] succeeded!\n" +Sep 7 08:11:08.690: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request" +Sep 7 08:11:08.690: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1156948534 --namespace=services-17 exec execpodj9z6p -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.68.249.154 81' +Sep 7 08:11:08.886: INFO: stderr: "+ nc -v -t -w 2 10.68.249.154 81\nConnection to 10.68.249.154 81 port [tcp/*] succeeded!\n+ echo hostName\n" +Sep 7 08:11:08.886: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request" +STEP: Deleting pod pod1 in namespace services-17 +STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-17 to expose endpoints map[pod2:[101]] +Sep 7 08:11:08.968: INFO: successfully validated that service multi-endpoint-test in namespace services-17 exposes endpoints map[pod2:[101]] +STEP: Deleting pod pod2 in namespace services-17 +STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-17 to expose endpoints map[] +Sep 7 08:11:09.046: INFO: 
successfully validated that service multi-endpoint-test in namespace services-17 exposes endpoints map[] +[AfterEach] [sig-network] Services + test/e2e/framework/framework.go:188 +Sep 7 08:11:09.093: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "services-17" for this suite. +[AfterEach] [sig-network] Services + test/e2e/network/service.go:762 + +• [SLOW TEST:10.282 seconds] +[sig-network] Services +test/e2e/network/common/framework.go:23 + should serve multiport endpoints from pods [Conformance] + test/e2e/framework/framework.go:652 +------------------------------ +{"msg":"PASSED [sig-network] Services should serve multiport endpoints from pods [Conformance]","total":356,"completed":133,"skipped":2462,"failed":0} +SSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + should deny crd creation [Conformance] + test/e2e/framework/framework.go:652 +[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/framework/framework.go:187 +STEP: Creating a kubernetes client +Sep 7 08:11:09.165: INFO: >>> kubeConfig: /tmp/kubeconfig-1156948534 +STEP: Building a namespace api object, basename webhook +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/apimachinery/webhook.go:89 +STEP: Setting up server cert +STEP: Create role binding to let webhook read extension-apiserver-authentication +STEP: Deploying the webhook pod +STEP: Wait for the deployment to be ready +Sep 7 08:11:10.298: INFO: new replicaset for deployment "sample-webhook-deployment" is yet to be created +Sep 7 08:11:12.328: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, 
Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2022, time.September, 7, 8, 11, 10, 0, time.Local), LastTransitionTime:time.Date(2022, time.September, 7, 8, 11, 10, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2022, time.September, 7, 8, 11, 10, 0, time.Local), LastTransitionTime:time.Date(2022, time.September, 7, 8, 11, 10, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-68c7bd4684\" is progressing."}}, CollisionCount:(*int32)(nil)} +STEP: Deploying the webhook service +STEP: Verifying the service has paired with the endpoint +Sep 7 08:11:15.398: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 +[It] should deny crd creation [Conformance] + test/e2e/framework/framework.go:652 +STEP: Registering the crd webhook via the AdmissionRegistration API +STEP: Creating a custom resource definition that should be denied by the webhook +Sep 7 08:11:15.465: INFO: >>> kubeConfig: /tmp/kubeconfig-1156948534 +[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/framework/framework.go:188 +Sep 7 08:11:15.509: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "webhook-1639" for this suite. +STEP: Destroying namespace "webhook-1639-markers" for this suite. 
+[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/apimachinery/webhook.go:104 + +• [SLOW TEST:6.599 seconds] +[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] +test/e2e/apimachinery/framework.go:23 + should deny crd creation [Conformance] + test/e2e/framework/framework.go:652 +------------------------------ +{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","total":356,"completed":134,"skipped":2484,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + works for CRD preserving unknown fields at the schema root [Conformance] + test/e2e/framework/framework.go:652 +[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + test/e2e/framework/framework.go:187 +STEP: Creating a kubernetes client +Sep 7 08:11:15.764: INFO: >>> kubeConfig: /tmp/kubeconfig-1156948534 +STEP: Building a namespace api object, basename crd-publish-openapi +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[It] works for CRD preserving unknown fields at the schema root [Conformance] + test/e2e/framework/framework.go:652 +Sep 7 08:11:16.257: INFO: >>> kubeConfig: /tmp/kubeconfig-1156948534 +STEP: kubectl validation (kubectl create and apply) allows request with any unknown properties +Sep 7 08:11:21.728: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1156948534 --namespace=crd-publish-openapi-7431 --namespace=crd-publish-openapi-7431 create -f -' +Sep 7 08:11:22.805: INFO: stderr: "" +Sep 7 08:11:22.805: INFO: stdout: "e2e-test-crd-publish-openapi-9206-crd.crd-publish-openapi-test-unknown-at-root.example.com/test-cr created\n" +Sep 7 08:11:22.805: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1156948534 
--namespace=crd-publish-openapi-7431 --namespace=crd-publish-openapi-7431 delete e2e-test-crd-publish-openapi-9206-crds test-cr' +Sep 7 08:11:22.916: INFO: stderr: "" +Sep 7 08:11:22.916: INFO: stdout: "e2e-test-crd-publish-openapi-9206-crd.crd-publish-openapi-test-unknown-at-root.example.com \"test-cr\" deleted\n" +Sep 7 08:11:22.916: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1156948534 --namespace=crd-publish-openapi-7431 --namespace=crd-publish-openapi-7431 apply -f -' +Sep 7 08:11:23.228: INFO: stderr: "" +Sep 7 08:11:23.228: INFO: stdout: "e2e-test-crd-publish-openapi-9206-crd.crd-publish-openapi-test-unknown-at-root.example.com/test-cr created\n" +Sep 7 08:11:23.228: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1156948534 --namespace=crd-publish-openapi-7431 --namespace=crd-publish-openapi-7431 delete e2e-test-crd-publish-openapi-9206-crds test-cr' +Sep 7 08:11:23.343: INFO: stderr: "" +Sep 7 08:11:23.343: INFO: stdout: "e2e-test-crd-publish-openapi-9206-crd.crd-publish-openapi-test-unknown-at-root.example.com \"test-cr\" deleted\n" +STEP: kubectl explain works to explain CR +Sep 7 08:11:23.343: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1156948534 --namespace=crd-publish-openapi-7431 explain e2e-test-crd-publish-openapi-9206-crds' +Sep 7 08:11:23.590: INFO: stderr: "" +Sep 7 08:11:23.590: INFO: stdout: "KIND: e2e-test-crd-publish-openapi-9206-crd\nVERSION: crd-publish-openapi-test-unknown-at-root.example.com/v1\n\nDESCRIPTION:\n \n" +[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + test/e2e/framework/framework.go:188 +Sep 7 08:11:26.719: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "crd-publish-openapi-7431" for this suite. 
+ +• [SLOW TEST:10.972 seconds] +[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] +test/e2e/apimachinery/framework.go:23 + works for CRD preserving unknown fields at the schema root [Conformance] + test/e2e/framework/framework.go:652 +------------------------------ +{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields at the schema root [Conformance]","total":356,"completed":135,"skipped":2512,"failed":0} +SSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-scheduling] SchedulerPreemption [Serial] + validates lower priority pod preemption by critical pod [Conformance] + test/e2e/framework/framework.go:652 +[BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] + test/e2e/framework/framework.go:187 +STEP: Creating a kubernetes client +Sep 7 08:11:26.736: INFO: >>> kubeConfig: /tmp/kubeconfig-1156948534 +STEP: Building a namespace api object, basename sched-preemption +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] + test/e2e/scheduling/preemption.go:92 +Sep 7 08:11:26.788: INFO: Waiting up to 1m0s for all nodes to be ready +Sep 7 08:12:26.813: INFO: Waiting for terminating namespaces to be deleted... +[It] validates lower priority pod preemption by critical pod [Conformance] + test/e2e/framework/framework.go:652 +STEP: Create pods that use 4/5 of node resources. +Sep 7 08:12:26.840: INFO: Created pod: pod0-0-sched-preemption-low-priority +Sep 7 08:12:26.850: INFO: Created pod: pod0-1-sched-preemption-medium-priority +Sep 7 08:12:26.899: INFO: Created pod: pod1-0-sched-preemption-medium-priority +Sep 7 08:12:26.905: INFO: Created pod: pod1-1-sched-preemption-medium-priority +STEP: Wait for pods to be scheduled. 
+STEP: Run a critical pod that use same resources as that of a lower priority pod +[AfterEach] [sig-scheduling] SchedulerPreemption [Serial] + test/e2e/framework/framework.go:188 +Sep 7 08:12:43.031: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "sched-preemption-7872" for this suite. +[AfterEach] [sig-scheduling] SchedulerPreemption [Serial] + test/e2e/scheduling/preemption.go:80 + +• [SLOW TEST:76.334 seconds] +[sig-scheduling] SchedulerPreemption [Serial] +test/e2e/scheduling/framework.go:40 + validates lower priority pod preemption by critical pod [Conformance] + test/e2e/framework/framework.go:652 +------------------------------ +{"msg":"PASSED [sig-scheduling] SchedulerPreemption [Serial] validates lower priority pod preemption by critical pod [Conformance]","total":356,"completed":136,"skipped":2533,"failed":0} +SSSSSSSSSSSS +------------------------------ +[sig-scheduling] SchedulerPredicates [Serial] + validates resource limits of pods that are allowed to run [Conformance] + test/e2e/framework/framework.go:652 +[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] + test/e2e/framework/framework.go:187 +STEP: Creating a kubernetes client +Sep 7 08:12:43.071: INFO: >>> kubeConfig: /tmp/kubeconfig-1156948534 +STEP: Building a namespace api object, basename sched-pred +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] + test/e2e/scheduling/predicates.go:92 +Sep 7 08:12:43.119: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready +Sep 7 08:12:43.126: INFO: Waiting for terminating namespaces to be deleted... 
+Sep 7 08:12:43.129: INFO: +Logging pods the apiserver thinks is on node 172.31.51.96 before test +Sep 7 08:12:43.135: INFO: calico-node-g8tpr from kube-system started at 2022-09-07 07:27:16 +0000 UTC (1 container statuses recorded) +Sep 7 08:12:43.135: INFO: Container calico-node ready: true, restart count 0 +Sep 7 08:12:43.135: INFO: node-local-dns-8rwpt from kube-system started at 2022-09-07 07:27:42 +0000 UTC (1 container statuses recorded) +Sep 7 08:12:43.135: INFO: Container node-cache ready: true, restart count 0 +Sep 7 08:12:43.135: INFO: pod0-1-sched-preemption-medium-priority from sched-preemption-7872 started at 2022-09-07 08:12:34 +0000 UTC (1 container statuses recorded) +Sep 7 08:12:43.135: INFO: Container pod0-1-sched-preemption-medium-priority ready: true, restart count 0 +Sep 7 08:12:43.135: INFO: sonobuoy from sonobuoy started at 2022-09-07 07:39:19 +0000 UTC (1 container statuses recorded) +Sep 7 08:12:43.135: INFO: Container kube-sonobuoy ready: true, restart count 0 +Sep 7 08:12:43.135: INFO: sonobuoy-e2e-job-2f855b96e04a42ee from sonobuoy started at 2022-09-07 07:39:27 +0000 UTC (2 container statuses recorded) +Sep 7 08:12:43.135: INFO: Container e2e ready: true, restart count 0 +Sep 7 08:12:43.135: INFO: Container sonobuoy-worker ready: true, restart count 0 +Sep 7 08:12:43.135: INFO: sonobuoy-systemd-logs-daemon-set-1241b5e1ea9447a9-kstch from sonobuoy started at 2022-09-07 07:39:27 +0000 UTC (2 container statuses recorded) +Sep 7 08:12:43.135: INFO: Container sonobuoy-worker ready: true, restart count 0 +Sep 7 08:12:43.135: INFO: Container systemd-logs ready: true, restart count 0 +Sep 7 08:12:43.135: INFO: +Logging pods the apiserver thinks is on node 172.31.51.97 before test +Sep 7 08:12:43.141: INFO: calico-kube-controllers-5c8bb696bb-tvl2c from kube-system started at 2022-09-07 07:27:16 +0000 UTC (1 container statuses recorded) +Sep 7 08:12:43.141: INFO: Container calico-kube-controllers ready: true, restart count 0 +Sep 7 08:12:43.141: 
INFO: calico-node-d87kb from kube-system started at 2022-09-07 07:27:16 +0000 UTC (1 container statuses recorded) +Sep 7 08:12:43.141: INFO: Container calico-node ready: true, restart count 0 +Sep 7 08:12:43.141: INFO: coredns-84b58f6b4-xcj7z from kube-system started at 2022-09-07 07:27:41 +0000 UTC (1 container statuses recorded) +Sep 7 08:12:43.141: INFO: Container coredns ready: true, restart count 0 +Sep 7 08:12:43.141: INFO: dashboard-metrics-scraper-864d79d497-bchwd from kube-system started at 2022-09-07 07:27:46 +0000 UTC (1 container statuses recorded) +Sep 7 08:12:43.141: INFO: Container dashboard-metrics-scraper ready: true, restart count 0 +Sep 7 08:12:43.141: INFO: kubernetes-dashboard-5fc74cf5c6-bsp7p from kube-system started at 2022-09-07 07:27:46 +0000 UTC (1 container statuses recorded) +Sep 7 08:12:43.141: INFO: Container kubernetes-dashboard ready: true, restart count 0 +Sep 7 08:12:43.141: INFO: metrics-server-69797698d4-hndhm from kube-system started at 2022-09-07 07:27:43 +0000 UTC (1 container statuses recorded) +Sep 7 08:12:43.141: INFO: Container metrics-server ready: true, restart count 0 +Sep 7 08:12:43.141: INFO: node-local-dns-28994 from kube-system started at 2022-09-07 07:27:42 +0000 UTC (1 container statuses recorded) +Sep 7 08:12:43.141: INFO: Container node-cache ready: true, restart count 0 +Sep 7 08:12:43.141: INFO: pod1-0-sched-preemption-medium-priority from sched-preemption-7872 started at 2022-09-07 08:12:28 +0000 UTC (1 container statuses recorded) +Sep 7 08:12:43.141: INFO: Container pod1-0-sched-preemption-medium-priority ready: true, restart count 0 +Sep 7 08:12:43.141: INFO: pod1-1-sched-preemption-medium-priority from sched-preemption-7872 started at 2022-09-07 08:12:28 +0000 UTC (1 container statuses recorded) +Sep 7 08:12:43.141: INFO: Container pod1-1-sched-preemption-medium-priority ready: true, restart count 0 +Sep 7 08:12:43.141: INFO: sonobuoy-systemd-logs-daemon-set-1241b5e1ea9447a9-svvzn from sonobuoy started at 
2022-09-07 07:39:27 +0000 UTC (2 container statuses recorded) +Sep 7 08:12:43.141: INFO: Container sonobuoy-worker ready: true, restart count 0 +Sep 7 08:12:43.141: INFO: Container systemd-logs ready: true, restart count 0 +[It] validates resource limits of pods that are allowed to run [Conformance] + test/e2e/framework/framework.go:652 +STEP: verifying the node has the label node 172.31.51.96 +STEP: verifying the node has the label node 172.31.51.97 +Sep 7 08:12:43.206: INFO: Pod calico-kube-controllers-5c8bb696bb-tvl2c requesting resource cpu=0m on Node 172.31.51.97 +Sep 7 08:12:43.207: INFO: Pod calico-node-d87kb requesting resource cpu=250m on Node 172.31.51.97 +Sep 7 08:12:43.207: INFO: Pod calico-node-g8tpr requesting resource cpu=250m on Node 172.31.51.96 +Sep 7 08:12:43.207: INFO: Pod coredns-84b58f6b4-xcj7z requesting resource cpu=100m on Node 172.31.51.97 +Sep 7 08:12:43.207: INFO: Pod dashboard-metrics-scraper-864d79d497-bchwd requesting resource cpu=0m on Node 172.31.51.97 +Sep 7 08:12:43.207: INFO: Pod kubernetes-dashboard-5fc74cf5c6-bsp7p requesting resource cpu=0m on Node 172.31.51.97 +Sep 7 08:12:43.207: INFO: Pod metrics-server-69797698d4-hndhm requesting resource cpu=100m on Node 172.31.51.97 +Sep 7 08:12:43.207: INFO: Pod node-local-dns-28994 requesting resource cpu=25m on Node 172.31.51.97 +Sep 7 08:12:43.207: INFO: Pod node-local-dns-8rwpt requesting resource cpu=25m on Node 172.31.51.96 +Sep 7 08:12:43.207: INFO: Pod pod0-1-sched-preemption-medium-priority requesting resource cpu=0m on Node 172.31.51.96 +Sep 7 08:12:43.207: INFO: Pod pod1-0-sched-preemption-medium-priority requesting resource cpu=0m on Node 172.31.51.97 +Sep 7 08:12:43.207: INFO: Pod pod1-1-sched-preemption-medium-priority requesting resource cpu=0m on Node 172.31.51.97 +Sep 7 08:12:43.207: INFO: Pod sonobuoy requesting resource cpu=0m on Node 172.31.51.96 +Sep 7 08:12:43.207: INFO: Pod sonobuoy-e2e-job-2f855b96e04a42ee requesting resource cpu=0m on Node 172.31.51.96 +Sep 7 
08:12:43.207: INFO: Pod sonobuoy-systemd-logs-daemon-set-1241b5e1ea9447a9-kstch requesting resource cpu=0m on Node 172.31.51.96 +Sep 7 08:12:43.207: INFO: Pod sonobuoy-systemd-logs-daemon-set-1241b5e1ea9447a9-svvzn requesting resource cpu=0m on Node 172.31.51.97 +STEP: Starting Pods to consume most of the cluster CPU. +Sep 7 08:12:43.207: INFO: Creating a pod which consumes cpu=1207m on Node 172.31.51.96 +Sep 7 08:12:43.221: INFO: Creating a pod which consumes cpu=1067m on Node 172.31.51.97 +STEP: Creating another pod that requires unavailable amount of CPU. +STEP: Considering event: +Type = [Normal], Name = [filler-pod-4ee0f108-29f6-43cb-a826-79a024096177.171285d53628630d], Reason = [Scheduled], Message = [Successfully assigned sched-pred-8116/filler-pod-4ee0f108-29f6-43cb-a826-79a024096177 to 172.31.51.96] +STEP: Considering event: +Type = [Normal], Name = [filler-pod-4ee0f108-29f6-43cb-a826-79a024096177.171285d56ca4db83], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.7" already present on machine] +STEP: Considering event: +Type = [Normal], Name = [filler-pod-4ee0f108-29f6-43cb-a826-79a024096177.171285d56e3da730], Reason = [Created], Message = [Created container filler-pod-4ee0f108-29f6-43cb-a826-79a024096177] +STEP: Considering event: +Type = [Normal], Name = [filler-pod-4ee0f108-29f6-43cb-a826-79a024096177.171285d5766c0454], Reason = [Started], Message = [Started container filler-pod-4ee0f108-29f6-43cb-a826-79a024096177] +STEP: Considering event: +Type = [Normal], Name = [filler-pod-ea23ec65-f1c9-4ec9-a4cc-2ff6f691f275.171285d53849281f], Reason = [Scheduled], Message = [Successfully assigned sched-pred-8116/filler-pod-ea23ec65-f1c9-4ec9-a4cc-2ff6f691f275 to 172.31.51.97] +STEP: Considering event: +Type = [Normal], Name = [filler-pod-ea23ec65-f1c9-4ec9-a4cc-2ff6f691f275.171285d560c0d0e7], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.7" already present on machine] +STEP: Considering event: +Type = [Normal], Name = 
[filler-pod-ea23ec65-f1c9-4ec9-a4cc-2ff6f691f275.171285d561e92b97], Reason = [Created], Message = [Created container filler-pod-ea23ec65-f1c9-4ec9-a4cc-2ff6f691f275] +STEP: Considering event: +Type = [Normal], Name = [filler-pod-ea23ec65-f1c9-4ec9-a4cc-2ff6f691f275.171285d567f96425], Reason = [Started], Message = [Started container filler-pod-ea23ec65-f1c9-4ec9-a4cc-2ff6f691f275] +STEP: Considering event: +Type = [Warning], Name = [additional-pod.171285d5b2103a9a], Reason = [FailedScheduling], Message = [0/2 nodes are available: 2 Insufficient cpu. preemption: 0/2 nodes are available: 2 No preemption victims found for incoming pod.] +STEP: removing the label node off the node 172.31.51.97 +STEP: verifying the node doesn't have the label node +STEP: removing the label node off the node 172.31.51.96 +STEP: verifying the node doesn't have the label node +[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] + test/e2e/framework/framework.go:188 +Sep 7 08:12:46.370: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "sched-pred-8116" for this suite. 
+[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] + test/e2e/scheduling/predicates.go:83 +•{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance]","total":356,"completed":137,"skipped":2545,"failed":0} +SSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-storage] EmptyDir volumes + should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/framework/framework.go:652 +[BeforeEach] [sig-storage] EmptyDir volumes + test/e2e/framework/framework.go:187 +STEP: Creating a kubernetes client +Sep 7 08:12:46.381: INFO: >>> kubeConfig: /tmp/kubeconfig-1156948534 +STEP: Building a namespace api object, basename emptydir +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[It] should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/framework/framework.go:652 +STEP: Creating a pod to test emptydir 0777 on node default medium +Sep 7 08:12:46.420: INFO: Waiting up to 5m0s for pod "pod-3c4c0a69-6290-4acb-937a-92b0f2248447" in namespace "emptydir-5549" to be "Succeeded or Failed" +Sep 7 08:12:46.432: INFO: Pod "pod-3c4c0a69-6290-4acb-937a-92b0f2248447": Phase="Pending", Reason="", readiness=false. Elapsed: 11.987112ms +Sep 7 08:12:48.443: INFO: Pod "pod-3c4c0a69-6290-4acb-937a-92b0f2248447": Phase="Pending", Reason="", readiness=false. Elapsed: 2.02311675s +Sep 7 08:12:50.460: INFO: Pod "pod-3c4c0a69-6290-4acb-937a-92b0f2248447": Phase="Pending", Reason="", readiness=false. Elapsed: 4.039707394s +Sep 7 08:12:52.476: INFO: Pod "pod-3c4c0a69-6290-4acb-937a-92b0f2248447": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.055953334s +STEP: Saw pod success +Sep 7 08:12:52.476: INFO: Pod "pod-3c4c0a69-6290-4acb-937a-92b0f2248447" satisfied condition "Succeeded or Failed" +Sep 7 08:12:52.481: INFO: Trying to get logs from node 172.31.51.96 pod pod-3c4c0a69-6290-4acb-937a-92b0f2248447 container test-container: +STEP: delete the pod +Sep 7 08:12:52.630: INFO: Waiting for pod pod-3c4c0a69-6290-4acb-937a-92b0f2248447 to disappear +Sep 7 08:12:52.704: INFO: Pod pod-3c4c0a69-6290-4acb-937a-92b0f2248447 no longer exists +[AfterEach] [sig-storage] EmptyDir volumes + test/e2e/framework/framework.go:188 +Sep 7 08:12:52.704: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "emptydir-5549" for this suite. + +• [SLOW TEST:6.333 seconds] +[sig-storage] EmptyDir volumes +test/e2e/common/storage/framework.go:23 + should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/framework/framework.go:652 +------------------------------ +{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":356,"completed":138,"skipped":2565,"failed":0} +SSSSSSSSS +------------------------------ +[sig-storage] EmptyDir volumes + should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/framework/framework.go:652 +[BeforeEach] [sig-storage] EmptyDir volumes + test/e2e/framework/framework.go:187 +STEP: Creating a kubernetes client +Sep 7 08:12:52.714: INFO: >>> kubeConfig: /tmp/kubeconfig-1156948534 +STEP: Building a namespace api object, basename emptydir +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[It] should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/framework/framework.go:652 +STEP: Creating a pod to test emptydir 0666 on node default medium +Sep 7 08:12:52.779: INFO: Waiting up to 5m0s for 
pod "pod-9197e3ad-1b8d-4292-9334-b2371c2bb6d6" in namespace "emptydir-6633" to be "Succeeded or Failed" +Sep 7 08:12:52.830: INFO: Pod "pod-9197e3ad-1b8d-4292-9334-b2371c2bb6d6": Phase="Pending", Reason="", readiness=false. Elapsed: 50.428482ms +Sep 7 08:12:54.852: INFO: Pod "pod-9197e3ad-1b8d-4292-9334-b2371c2bb6d6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.072443394s +Sep 7 08:12:56.862: INFO: Pod "pod-9197e3ad-1b8d-4292-9334-b2371c2bb6d6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.082892922s +STEP: Saw pod success +Sep 7 08:12:56.862: INFO: Pod "pod-9197e3ad-1b8d-4292-9334-b2371c2bb6d6" satisfied condition "Succeeded or Failed" +Sep 7 08:12:56.866: INFO: Trying to get logs from node 172.31.51.97 pod pod-9197e3ad-1b8d-4292-9334-b2371c2bb6d6 container test-container: +STEP: delete the pod +Sep 7 08:12:56.894: INFO: Waiting for pod pod-9197e3ad-1b8d-4292-9334-b2371c2bb6d6 to disappear +Sep 7 08:12:56.897: INFO: Pod pod-9197e3ad-1b8d-4292-9334-b2371c2bb6d6 no longer exists +[AfterEach] [sig-storage] EmptyDir volumes + test/e2e/framework/framework.go:188 +Sep 7 08:12:56.897: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "emptydir-6633" for this suite. 
+•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":356,"completed":139,"skipped":2574,"failed":0} +SSSSSSSSSSSSSS +------------------------------ +[sig-apps] Deployment + Deployment should have a working scale subresource [Conformance] + test/e2e/framework/framework.go:652 +[BeforeEach] [sig-apps] Deployment + test/e2e/framework/framework.go:187 +STEP: Creating a kubernetes client +Sep 7 08:12:56.905: INFO: >>> kubeConfig: /tmp/kubeconfig-1156948534 +STEP: Building a namespace api object, basename deployment +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[BeforeEach] [sig-apps] Deployment + test/e2e/apps/deployment.go:91 +[It] Deployment should have a working scale subresource [Conformance] + test/e2e/framework/framework.go:652 +Sep 7 08:12:56.950: INFO: Creating simple deployment test-new-deployment +Sep 7 08:12:56.978: INFO: deployment "test-new-deployment" doesn't have the required revision set +STEP: getting scale subresource +STEP: updating a scale subresource +STEP: verifying the deployment Spec.Replicas was modified +STEP: Patch a scale subresource +[AfterEach] [sig-apps] Deployment + test/e2e/apps/deployment.go:84 +Sep 7 08:12:59.087: INFO: Deployment "test-new-deployment": +&Deployment{ObjectMeta:{test-new-deployment deployment-1838 44d3d759-49f8-4864-b8de-bfd56e4c8b6b 14426 3 2022-09-07 08:12:56 +0000 UTC map[name:httpd] map[deployment.kubernetes.io/revision:1] [] [] [{e2e.test Update apps/v1 FieldsV1 {"f:spec":{"f:replicas":{}}} scale} {e2e.test Update apps/v1 2022-09-07 08:12:56 +0000 UTC FieldsV1 
{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:progressDeadlineSeconds":{},"f:revisionHistoryLimit":{},"f:selector":{},"f:strategy":{"f:rollingUpdate":{".":{},"f:maxSurge":{},"f:maxUnavailable":{}},"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} } {kube-controller-manager Update apps/v1 2022-09-07 08:12:58 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}}},"f:status":{"f:availableReplicas":{},"f:conditions":{".":{},"k:{\"type\":\"Available\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Progressing\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{},"f:updatedReplicas":{}}} status}]},Spec:DeploymentSpec{Replicas:*4,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd] map[] [] [] []} {[] [] [{httpd k8s.gcr.io/e2e-test-images/httpd:2.4.38-2 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc00380f418 ClusterFirst map[] false false false 
&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] nil}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%!,(MISSING)MaxSurge:25%!,(MISSING)},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2022-09-07 08:12:58 +0000 UTC,LastTransitionTime:2022-09-07 08:12:58 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-new-deployment-55df494869" has successfully progressed.,LastUpdateTime:2022-09-07 08:12:58 +0000 UTC,LastTransitionTime:2022-09-07 08:12:56 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},} + +Sep 7 08:12:59.104: INFO: New ReplicaSet "test-new-deployment-55df494869" of Deployment "test-new-deployment": +&ReplicaSet{ObjectMeta:{test-new-deployment-55df494869 deployment-1838 c4fd59b8-01c3-4f7e-9d41-4f09b2b2e35c 14434 3 2022-09-07 08:12:56 +0000 UTC map[name:httpd pod-template-hash:55df494869] map[deployment.kubernetes.io/desired-replicas:4 deployment.kubernetes.io/max-replicas:5 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-new-deployment 44d3d759-49f8-4864-b8de-bfd56e4c8b6b 0xc0037bb557 0xc0037bb558}] [] [{kube-controller-manager Update apps/v1 2022-09-07 08:12:56 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"44d3d759-49f8-4864-b8de-bfd56e4c8b6b\"}":{}}},"f:spec":{"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} } {kube-controller-manager Update apps/v1 2022-09-07 08:12:58 +0000 UTC FieldsV1 {"f:status":{"f:availableReplicas":{},"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{}}} status}]},Spec:ReplicaSetSpec{Replicas:*4,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: 55df494869,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd pod-template-hash:55df494869] map[] [] [] []} {[] [] [{httpd k8s.gcr.io/e2e-test-images/httpd:2.4.38-2 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc0037bb5f8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] 
nil}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:2,FullyLabeledReplicas:2,ObservedGeneration:2,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} +Sep 7 08:12:59.120: INFO: Pod "test-new-deployment-55df494869-m4mvp" is available: +&Pod{ObjectMeta:{test-new-deployment-55df494869-m4mvp test-new-deployment-55df494869- deployment-1838 8f121cbc-4758-484e-8f3f-7aeaf85e1747 14421 0 2022-09-07 08:12:57 +0000 UTC map[name:httpd pod-template-hash:55df494869] map[] [{apps/v1 ReplicaSet test-new-deployment-55df494869 c4fd59b8-01c3-4f7e-9d41-4f09b2b2e35c 0xc0037bba67 0xc0037bba68}] [] [{kube-controller-manager Update v1 2022-09-07 08:12:56 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c4fd59b8-01c3-4f7e-9d41-4f09b2b2e35c\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2022-09-07 08:12:58 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"172.20.75.43\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} 
status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-9xhlp,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-2,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-9xhlp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:
nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:172.31.51.96,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-09-07 08:12:57 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-09-07 08:12:58 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-09-07 08:12:58 +0000 
UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-09-07 08:12:57 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.31.51.96,PodIP:172.20.75.43,StartTime:2022-09-07 08:12:57 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2022-09-07 08:12:58 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-2,ImageID:k8s.gcr.io/e2e-test-images/httpd@sha256:1b9d1b2f36cb2dbee1960e82a9344aeb11bd4c4c03abf5e1853e0559c23855e3,ContainerID:containerd://4017d27f9dea522aa97497abd57703328f7817558907ec37ce1649696b3c498d,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:172.20.75.43,},},EphemeralContainerStatuses:[]ContainerStatus{},},} +Sep 7 08:12:59.120: INFO: Pod "test-new-deployment-55df494869-vwht4" is not available: +&Pod{ObjectMeta:{test-new-deployment-55df494869-vwht4 test-new-deployment-55df494869- deployment-1838 3bea0ec6-afc6-4488-99d6-925db97414c5 14429 0 2022-09-07 08:12:59 +0000 UTC map[name:httpd pod-template-hash:55df494869] map[] [{apps/v1 ReplicaSet test-new-deployment-55df494869 c4fd59b8-01c3-4f7e-9d41-4f09b2b2e35c 0xc0037bbcb7 0xc0037bbcb8}] [] [{kube-controller-manager Update v1 2022-09-07 08:12:59 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c4fd59b8-01c3-4f7e-9d41-4f09b2b2e35c\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} }]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-2wvkf,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-2,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limit
s:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-2wvkf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:172.31.51.97,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:
nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-09-07 08:12:59 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} +[AfterEach] [sig-apps] Deployment + test/e2e/framework/framework.go:188 +Sep 7 08:12:59.120: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "deployment-1838" for this suite. +•{"msg":"PASSED [sig-apps] Deployment Deployment should have a working scale subresource [Conformance]","total":356,"completed":140,"skipped":2588,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-api-machinery] Watchers + should observe add, update, and delete watch notifications on configmaps [Conformance] + test/e2e/framework/framework.go:652 +[BeforeEach] [sig-api-machinery] Watchers + test/e2e/framework/framework.go:187 +STEP: Creating a kubernetes client +Sep 7 08:12:59.268: INFO: >>> kubeConfig: /tmp/kubeconfig-1156948534 +STEP: Building a namespace api object, basename watch +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[It] should observe add, update, and delete watch notifications on configmaps [Conformance] + test/e2e/framework/framework.go:652 +STEP: creating a watch on configmaps with label A +STEP: creating a watch on configmaps with label B +STEP: creating a watch on configmaps with label A or B +STEP: creating a configmap with label A and ensuring the correct watchers observe the notification +Sep 7 08:12:59.328: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-4394 466d74e2-4c90-4fe2-b5e5-29966409c7a6 14457 0 2022-09-07 08:12:59 +0000 UTC 
map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2022-09-07 08:12:59 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} +Sep 7 08:12:59.328: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-4394 466d74e2-4c90-4fe2-b5e5-29966409c7a6 14457 0 2022-09-07 08:12:59 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2022-09-07 08:12:59 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} +STEP: modifying configmap A and ensuring the correct watchers observe the notification +Sep 7 08:12:59.345: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-4394 466d74e2-4c90-4fe2-b5e5-29966409c7a6 14459 0 2022-09-07 08:12:59 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2022-09-07 08:12:59 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} +Sep 7 08:12:59.346: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-4394 466d74e2-4c90-4fe2-b5e5-29966409c7a6 14459 0 2022-09-07 08:12:59 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2022-09-07 08:12:59 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} +STEP: modifying configmap A again and ensuring the correct watchers observe the notification +Sep 7 08:12:59.368: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-4394 466d74e2-4c90-4fe2-b5e5-29966409c7a6 14460 0 2022-09-07 
08:12:59 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2022-09-07 08:12:59 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} +Sep 7 08:12:59.368: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-4394 466d74e2-4c90-4fe2-b5e5-29966409c7a6 14460 0 2022-09-07 08:12:59 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2022-09-07 08:12:59 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} +STEP: deleting configmap A and ensuring the correct watchers observe the notification +Sep 7 08:12:59.373: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-4394 466d74e2-4c90-4fe2-b5e5-29966409c7a6 14461 0 2022-09-07 08:12:59 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2022-09-07 08:12:59 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} +Sep 7 08:12:59.373: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-4394 466d74e2-4c90-4fe2-b5e5-29966409c7a6 14461 0 2022-09-07 08:12:59 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2022-09-07 08:12:59 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} +STEP: creating a configmap with label B and ensuring the correct watchers observe the notification +Sep 7 08:12:59.379: INFO: Got : ADDED 
&ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-4394 d672df1b-be99-49db-a2a8-5f927802bdc9 14462 0 2022-09-07 08:12:59 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] [{e2e.test Update v1 2022-09-07 08:12:59 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} +Sep 7 08:12:59.379: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-4394 d672df1b-be99-49db-a2a8-5f927802bdc9 14462 0 2022-09-07 08:12:59 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] [{e2e.test Update v1 2022-09-07 08:12:59 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} +STEP: deleting configmap B and ensuring the correct watchers observe the notification +Sep 7 08:13:09.392: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-4394 d672df1b-be99-49db-a2a8-5f927802bdc9 14537 0 2022-09-07 08:12:59 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] [{e2e.test Update v1 2022-09-07 08:12:59 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} +Sep 7 08:13:09.392: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-4394 d672df1b-be99-49db-a2a8-5f927802bdc9 14537 0 2022-09-07 08:12:59 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] [{e2e.test Update v1 2022-09-07 08:12:59 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} +[AfterEach] [sig-api-machinery] Watchers + test/e2e/framework/framework.go:188 +Sep 7 08:13:19.393: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "watch-4394" for this 
suite. + +• [SLOW TEST:20.155 seconds] +[sig-api-machinery] Watchers +test/e2e/apimachinery/framework.go:23 + should observe add, update, and delete watch notifications on configmaps [Conformance] + test/e2e/framework/framework.go:652 +------------------------------ +{"msg":"PASSED [sig-api-machinery] Watchers should observe add, update, and delete watch notifications on configmaps [Conformance]","total":356,"completed":141,"skipped":2613,"failed":0} +SSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-cli] Kubectl client Kubectl diff + should check if kubectl diff finds a difference for Deployments [Conformance] + test/e2e/framework/framework.go:652 +[BeforeEach] [sig-cli] Kubectl client + test/e2e/framework/framework.go:187 +STEP: Creating a kubernetes client +Sep 7 08:13:19.423: INFO: >>> kubeConfig: /tmp/kubeconfig-1156948534 +STEP: Building a namespace api object, basename kubectl +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[BeforeEach] [sig-cli] Kubectl client + test/e2e/kubectl/kubectl.go:245 +[It] should check if kubectl diff finds a difference for Deployments [Conformance] + test/e2e/framework/framework.go:652 +STEP: create deployment with httpd image +Sep 7 08:13:19.513: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1156948534 --namespace=kubectl-7259 create -f -' +Sep 7 08:13:21.163: INFO: stderr: "" +Sep 7 08:13:21.163: INFO: stdout: "deployment.apps/httpd-deployment created\n" +STEP: verify diff finds difference between live and declared image +Sep 7 08:13:21.163: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1156948534 --namespace=kubectl-7259 diff -f -' +Sep 7 08:13:21.468: INFO: rc: 1 +Sep 7 08:13:21.469: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1156948534 --namespace=kubectl-7259 delete -f -' +Sep 7 08:13:21.567: INFO: stderr: "" +Sep 7 08:13:21.567: INFO: stdout: 
"deployment.apps \"httpd-deployment\" deleted\n" +[AfterEach] [sig-cli] Kubectl client + test/e2e/framework/framework.go:188 +Sep 7 08:13:21.567: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "kubectl-7259" for this suite. +•{"msg":"PASSED [sig-cli] Kubectl client Kubectl diff should check if kubectl diff finds a difference for Deployments [Conformance]","total":356,"completed":142,"skipped":2633,"failed":0} +SSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-architecture] Conformance Tests + should have at least two untainted nodes [Conformance] + test/e2e/framework/framework.go:652 +[BeforeEach] [sig-architecture] Conformance Tests + test/e2e/framework/framework.go:187 +STEP: Creating a kubernetes client +Sep 7 08:13:21.626: INFO: >>> kubeConfig: /tmp/kubeconfig-1156948534 +STEP: Building a namespace api object, basename conformance-tests +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[It] should have at least two untainted nodes [Conformance] + test/e2e/framework/framework.go:652 +STEP: Getting node addresses +Sep 7 08:13:21.761: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable +[AfterEach] [sig-architecture] Conformance Tests + test/e2e/framework/framework.go:188 +Sep 7 08:13:21.782: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "conformance-tests-6405" for this suite. 
+•{"msg":"PASSED [sig-architecture] Conformance Tests should have at least two untainted nodes [Conformance]","total":356,"completed":143,"skipped":2655,"failed":0} +SSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-storage] ConfigMap + binary data should be reflected in volume [NodeConformance] [Conformance] + test/e2e/framework/framework.go:652 +[BeforeEach] [sig-storage] ConfigMap + test/e2e/framework/framework.go:187 +STEP: Creating a kubernetes client +Sep 7 08:13:21.806: INFO: >>> kubeConfig: /tmp/kubeconfig-1156948534 +STEP: Building a namespace api object, basename configmap +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[It] binary data should be reflected in volume [NodeConformance] [Conformance] + test/e2e/framework/framework.go:652 +STEP: Creating configMap with name configmap-test-upd-79d920db-1fda-4b64-88c9-ba8ae50a11ce +STEP: Creating the pod +STEP: Waiting for pod with text data +STEP: Waiting for pod with binary data +[AfterEach] [sig-storage] ConfigMap + test/e2e/framework/framework.go:188 +Sep 7 08:13:23.952: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "configmap-2057" for this suite. 
+•{"msg":"PASSED [sig-storage] ConfigMap binary data should be reflected in volume [NodeConformance] [Conformance]","total":356,"completed":144,"skipped":2677,"failed":0} +SSSSSSSSS +------------------------------ +[sig-instrumentation] Events API + should ensure that an event can be fetched, patched, deleted, and listed [Conformance] + test/e2e/framework/framework.go:652 +[BeforeEach] [sig-instrumentation] Events API + test/e2e/framework/framework.go:187 +STEP: Creating a kubernetes client +Sep 7 08:13:23.966: INFO: >>> kubeConfig: /tmp/kubeconfig-1156948534 +STEP: Building a namespace api object, basename events +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[BeforeEach] [sig-instrumentation] Events API + test/e2e/instrumentation/events.go:84 +[It] should ensure that an event can be fetched, patched, deleted, and listed [Conformance] + test/e2e/framework/framework.go:652 +STEP: creating a test event +STEP: listing events in all namespaces +STEP: listing events in test namespace +STEP: listing events with field selection filtering on source +STEP: listing events with field selection filtering on reportingController +STEP: getting the test event +STEP: patching the test event +STEP: getting the test event +STEP: updating the test event +STEP: getting the test event +STEP: deleting the test event +STEP: listing events in all namespaces +STEP: listing events in test namespace +[AfterEach] [sig-instrumentation] Events API + test/e2e/framework/framework.go:188 +Sep 7 08:13:24.091: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "events-3430" for this suite. 
+•{"msg":"PASSED [sig-instrumentation] Events API should ensure that an event can be fetched, patched, deleted, and listed [Conformance]","total":356,"completed":145,"skipped":2686,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-storage] Projected downwardAPI + should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] + test/e2e/framework/framework.go:652 +[BeforeEach] [sig-storage] Projected downwardAPI + test/e2e/framework/framework.go:187 +STEP: Creating a kubernetes client +Sep 7 08:13:24.103: INFO: >>> kubeConfig: /tmp/kubeconfig-1156948534 +STEP: Building a namespace api object, basename projected +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[BeforeEach] [sig-storage] Projected downwardAPI + test/e2e/common/storage/projected_downwardapi.go:43 +[It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] + test/e2e/framework/framework.go:652 +STEP: Creating a pod to test downward API volume plugin +Sep 7 08:13:24.148: INFO: Waiting up to 5m0s for pod "downwardapi-volume-42629b81-38e1-4230-8f42-34bab4b8b7c5" in namespace "projected-8485" to be "Succeeded or Failed" +Sep 7 08:13:24.162: INFO: Pod "downwardapi-volume-42629b81-38e1-4230-8f42-34bab4b8b7c5": Phase="Pending", Reason="", readiness=false. Elapsed: 14.246593ms +Sep 7 08:13:26.167: INFO: Pod "downwardapi-volume-42629b81-38e1-4230-8f42-34bab4b8b7c5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019037151s +Sep 7 08:13:28.179: INFO: Pod "downwardapi-volume-42629b81-38e1-4230-8f42-34bab4b8b7c5": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.030656602s +STEP: Saw pod success +Sep 7 08:13:28.179: INFO: Pod "downwardapi-volume-42629b81-38e1-4230-8f42-34bab4b8b7c5" satisfied condition "Succeeded or Failed" +Sep 7 08:13:28.181: INFO: Trying to get logs from node 172.31.51.96 pod downwardapi-volume-42629b81-38e1-4230-8f42-34bab4b8b7c5 container client-container: +STEP: delete the pod +Sep 7 08:13:28.199: INFO: Waiting for pod downwardapi-volume-42629b81-38e1-4230-8f42-34bab4b8b7c5 to disappear +Sep 7 08:13:28.201: INFO: Pod downwardapi-volume-42629b81-38e1-4230-8f42-34bab4b8b7c5 no longer exists +[AfterEach] [sig-storage] Projected downwardAPI + test/e2e/framework/framework.go:188 +Sep 7 08:13:28.201: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "projected-8485" for this suite. +•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","total":356,"completed":146,"skipped":2735,"failed":0} +SSSS +------------------------------ +[sig-scheduling] LimitRange + should create a LimitRange with defaults and ensure pod has those defaults applied. [Conformance] + test/e2e/framework/framework.go:652 +[BeforeEach] [sig-scheduling] LimitRange + test/e2e/framework/framework.go:187 +STEP: Creating a kubernetes client +Sep 7 08:13:28.207: INFO: >>> kubeConfig: /tmp/kubeconfig-1156948534 +STEP: Building a namespace api object, basename limitrange +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[It] should create a LimitRange with defaults and ensure pod has those defaults applied. 
[Conformance] + test/e2e/framework/framework.go:652 +STEP: Creating a LimitRange +STEP: Setting up watch +STEP: Submitting a LimitRange +Sep 7 08:13:28.244: INFO: observed the limitRanges list +STEP: Verifying LimitRange creation was observed +STEP: Fetching the LimitRange to ensure it has proper values +Sep 7 08:13:28.250: INFO: Verifying requests: expected map[cpu:{{100 -3} {} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {} BinarySI} memory:{{209715200 0} {} BinarySI}] with actual map[cpu:{{100 -3} {} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {} BinarySI} memory:{{209715200 0} {} BinarySI}] +Sep 7 08:13:28.250: INFO: Verifying limits: expected map[cpu:{{500 -3} {} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] with actual map[cpu:{{500 -3} {} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] +STEP: Creating a Pod with no resource requirements +STEP: Ensuring Pod has resource requirements applied from LimitRange +Sep 7 08:13:28.270: INFO: Verifying requests: expected map[cpu:{{100 -3} {} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {} BinarySI} memory:{{209715200 0} {} BinarySI}] with actual map[cpu:{{100 -3} {} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {} BinarySI} memory:{{209715200 0} {} BinarySI}] +Sep 7 08:13:28.270: INFO: Verifying limits: expected map[cpu:{{500 -3} {} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] with actual map[cpu:{{500 -3} {} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] +STEP: Creating a Pod with partial resource requirements +STEP: Ensuring Pod has merged resource requirements applied from LimitRange +Sep 7 08:13:28.279: INFO: Verifying requests: expected map[cpu:{{300 -3} {} 300m DecimalSI} ephemeral-storage:{{161061273600 0} {} 150Gi 
BinarySI} memory:{{157286400 0} {} 150Mi BinarySI}] with actual map[cpu:{{300 -3} {} 300m DecimalSI} ephemeral-storage:{{161061273600 0} {} 150Gi BinarySI} memory:{{157286400 0} {} 150Mi BinarySI}] +Sep 7 08:13:28.279: INFO: Verifying limits: expected map[cpu:{{300 -3} {} 300m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] with actual map[cpu:{{300 -3} {} 300m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] +STEP: Failing to create a Pod with less than min resources +STEP: Failing to create a Pod with more than max resources +STEP: Updating a LimitRange +STEP: Verifying LimitRange updating is effective +STEP: Creating a Pod with less than former min resources +STEP: Failing to create a Pod with more than max resources +STEP: Deleting a LimitRange +STEP: Verifying the LimitRange was deleted +Sep 7 08:13:35.360: INFO: limitRange is already deleted +STEP: Creating a Pod with more than former max resources +[AfterEach] [sig-scheduling] LimitRange + test/e2e/framework/framework.go:188 +Sep 7 08:13:35.370: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "limitrange-6604" for this suite. + +• [SLOW TEST:7.186 seconds] +[sig-scheduling] LimitRange +test/e2e/scheduling/framework.go:40 + should create a LimitRange with defaults and ensure pod has those defaults applied. [Conformance] + test/e2e/framework/framework.go:652 +------------------------------ +{"msg":"PASSED [sig-scheduling] LimitRange should create a LimitRange with defaults and ensure pod has those defaults applied. 
[Conformance]","total":356,"completed":147,"skipped":2739,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-apps] Deployment + RecreateDeployment should delete old pods and create new ones [Conformance] + test/e2e/framework/framework.go:652 +[BeforeEach] [sig-apps] Deployment + test/e2e/framework/framework.go:187 +STEP: Creating a kubernetes client +Sep 7 08:13:35.394: INFO: >>> kubeConfig: /tmp/kubeconfig-1156948534 +STEP: Building a namespace api object, basename deployment +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[BeforeEach] [sig-apps] Deployment + test/e2e/apps/deployment.go:91 +[It] RecreateDeployment should delete old pods and create new ones [Conformance] + test/e2e/framework/framework.go:652 +Sep 7 08:13:35.427: INFO: Creating deployment "test-recreate-deployment" +Sep 7 08:13:35.434: INFO: Waiting deployment "test-recreate-deployment" to be updated to revision 1 +Sep 7 08:13:35.462: INFO: deployment "test-recreate-deployment" doesn't have the required revision set +Sep 7 08:13:37.475: INFO: Waiting deployment "test-recreate-deployment" to complete +Sep 7 08:13:37.479: INFO: Triggering a new rollout for deployment "test-recreate-deployment" +Sep 7 08:13:37.488: INFO: Updating deployment test-recreate-deployment +Sep 7 08:13:37.488: INFO: Watching deployment "test-recreate-deployment" to verify that new pods will not run with olds pods +[AfterEach] [sig-apps] Deployment + test/e2e/apps/deployment.go:84 +Sep 7 08:13:37.651: INFO: Deployment "test-recreate-deployment": +&Deployment{ObjectMeta:{test-recreate-deployment deployment-9148 3aa823e6-939b-4def-9568-004d7509453c 14796 2 2022-09-07 08:13:35 +0000 UTC map[name:sample-pod-3] map[deployment.kubernetes.io/revision:2] [] [] [{e2e.test Update apps/v1 2022-09-07 08:13:37 +0000 UTC FieldsV1 
{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:progressDeadlineSeconds":{},"f:replicas":{},"f:revisionHistoryLimit":{},"f:selector":{},"f:strategy":{"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} } {kube-controller-manager Update apps/v1 2022-09-07 08:13:37 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}}},"f:status":{"f:conditions":{".":{},"k:{\"type\":\"Available\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Progressing\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:observedGeneration":{},"f:replicas":{},"f:unavailableReplicas":{},"f:updatedReplicas":{}}} status}]},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod-3] map[] [] [] []} {[] [] [{httpd k8s.gcr.io/e2e-test-images/httpd:2.4.38-2 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc0053acaa8 ClusterFirst map[] false false false 
&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] nil}},Strategy:DeploymentStrategy{Type:Recreate,RollingUpdate:nil,},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:0,UnavailableReplicas:1,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:False,Reason:MinimumReplicasUnavailable,Message:Deployment does not have minimum availability.,LastUpdateTime:2022-09-07 08:13:37 +0000 UTC,LastTransitionTime:2022-09-07 08:13:37 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:ReplicaSetUpdated,Message:ReplicaSet "test-recreate-deployment-cd8586fc7" is progressing.,LastUpdateTime:2022-09-07 08:13:37 +0000 UTC,LastTransitionTime:2022-09-07 08:13:35 +0000 UTC,},},ReadyReplicas:0,CollisionCount:nil,},} + +Sep 7 08:13:37.659: INFO: New ReplicaSet "test-recreate-deployment-cd8586fc7" of Deployment "test-recreate-deployment": +&ReplicaSet{ObjectMeta:{test-recreate-deployment-cd8586fc7 deployment-9148 b41691c4-03b6-4301-bd8f-8be3233b4c6d 14795 1 2022-09-07 08:13:37 +0000 UTC map[name:sample-pod-3 pod-template-hash:cd8586fc7] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:1 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment test-recreate-deployment 3aa823e6-939b-4def-9568-004d7509453c 0xc004be5910 0xc004be5911}] [] [{kube-controller-manager Update apps/v1 2022-09-07 08:13:37 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"3aa823e6-939b-4def-9568-004d7509453c\"}":{}}},"f:spec":{"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} } {kube-controller-manager Update apps/v1 2022-09-07 08:13:37 +0000 UTC FieldsV1 {"f:status":{"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:replicas":{}}} status}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: cd8586fc7,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod-3 pod-template-hash:cd8586fc7] map[] [] [] []} {[] [] [{httpd k8s.gcr.io/e2e-test-images/httpd:2.4.38-2 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc004be59a8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] 
nil}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} +Sep 7 08:13:37.659: INFO: All old ReplicaSets of Deployment "test-recreate-deployment": +Sep 7 08:13:37.659: INFO: &ReplicaSet{ObjectMeta:{test-recreate-deployment-845d658455 deployment-9148 228f81f4-895d-42e3-8ffd-933c519804f1 14785 2 2022-09-07 08:13:35 +0000 UTC map[name:sample-pod-3 pod-template-hash:845d658455] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:1 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-recreate-deployment 3aa823e6-939b-4def-9568-004d7509453c 0xc004be57f7 0xc004be57f8}] [] [{kube-controller-manager Update apps/v1 2022-09-07 08:13:35 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"3aa823e6-939b-4def-9568-004d7509453c\"}":{}}},"f:spec":{"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} } {kube-controller-manager Update apps/v1 2022-09-07 08:13:37 +0000 UTC FieldsV1 {"f:status":{"f:observedGeneration":{},"f:replicas":{}}} status}]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 845d658455,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC 
map[name:sample-pod-3 pod-template-hash:845d658455] map[] [] [] []} {[] [] [{agnhost k8s.gcr.io/e2e-test-images/agnhost:2.39 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc004be58a8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] nil}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} +Sep 7 08:13:37.663: INFO: Pod "test-recreate-deployment-cd8586fc7-8rjdk" is not available: +&Pod{ObjectMeta:{test-recreate-deployment-cd8586fc7-8rjdk test-recreate-deployment-cd8586fc7- deployment-9148 78dc22c2-e94f-46ec-9e73-959da0fc161d 14797 0 2022-09-07 08:13:37 +0000 UTC map[name:sample-pod-3 pod-template-hash:cd8586fc7] map[] [{apps/v1 ReplicaSet test-recreate-deployment-cd8586fc7 b41691c4-03b6-4301-bd8f-8be3233b4c6d 0xc0053ace40 0xc0053ace41}] [] [{kube-controller-manager Update v1 2022-09-07 08:13:37 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b41691c4-03b6-4301-bd8f-8be3233b4c6d\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2022-09-07 08:13:37 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}} 
status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-rl7qf,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-2,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-rl7qf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:
nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:172.31.51.96,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-09-07 08:13:37 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-09-07 08:13:37 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 
UTC,LastTransitionTime:2022-09-07 08:13:37 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-09-07 08:13:37 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.31.51.96,PodIP:,StartTime:2022-09-07 08:13:37 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-2,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} +[AfterEach] [sig-apps] Deployment + test/e2e/framework/framework.go:188 +Sep 7 08:13:37.663: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "deployment-9148" for this suite. 
+•{"msg":"PASSED [sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance]","total":356,"completed":148,"skipped":2785,"failed":0} +SSSSSSS +------------------------------ +[sig-node] Security Context When creating a pod with privileged + should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/framework/framework.go:652 +[BeforeEach] [sig-node] Security Context + test/e2e/framework/framework.go:187 +STEP: Creating a kubernetes client +Sep 7 08:13:37.673: INFO: >>> kubeConfig: /tmp/kubeconfig-1156948534 +STEP: Building a namespace api object, basename security-context-test +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[BeforeEach] [sig-node] Security Context + test/e2e/common/node/security_context.go:48 +[It] should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/framework/framework.go:652 +Sep 7 08:13:37.728: INFO: Waiting up to 5m0s for pod "busybox-privileged-false-aaf1d1a9-d2ee-4744-b4ba-d0c341f14d09" in namespace "security-context-test-1281" to be "Succeeded or Failed" +Sep 7 08:13:37.740: INFO: Pod "busybox-privileged-false-aaf1d1a9-d2ee-4744-b4ba-d0c341f14d09": Phase="Pending", Reason="", readiness=false. Elapsed: 11.672643ms +Sep 7 08:13:39.752: INFO: Pod "busybox-privileged-false-aaf1d1a9-d2ee-4744-b4ba-d0c341f14d09": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023714646s +Sep 7 08:13:41.762: INFO: Pod "busybox-privileged-false-aaf1d1a9-d2ee-4744-b4ba-d0c341f14d09": Phase="Pending", Reason="", readiness=false. Elapsed: 4.033935573s +Sep 7 08:13:43.775: INFO: Pod "busybox-privileged-false-aaf1d1a9-d2ee-4744-b4ba-d0c341f14d09": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.047087869s +Sep 7 08:13:43.775: INFO: Pod "busybox-privileged-false-aaf1d1a9-d2ee-4744-b4ba-d0c341f14d09" satisfied condition "Succeeded or Failed" +Sep 7 08:13:43.782: INFO: Got logs for pod "busybox-privileged-false-aaf1d1a9-d2ee-4744-b4ba-d0c341f14d09": "ip: RTNETLINK answers: Operation not permitted\n" +[AfterEach] [sig-node] Security Context + test/e2e/framework/framework.go:188 +Sep 7 08:13:43.782: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "security-context-test-1281" for this suite. + +• [SLOW TEST:6.119 seconds] +[sig-node] Security Context +test/e2e/common/node/framework.go:23 + When creating a pod with privileged + test/e2e/common/node/security_context.go:234 + should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/framework/framework.go:652 +------------------------------ +{"msg":"PASSED [sig-node] Security Context When creating a pod with privileged should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]","total":356,"completed":149,"skipped":2792,"failed":0} +SSSS +------------------------------ +[sig-node] Kubelet when scheduling a busybox Pod with hostAliases + should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/framework/framework.go:652 +[BeforeEach] [sig-node] Kubelet + test/e2e/framework/framework.go:187 +STEP: Creating a kubernetes client +Sep 7 08:13:43.792: INFO: >>> kubeConfig: /tmp/kubeconfig-1156948534 +STEP: Building a namespace api object, basename kubelet-test +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[BeforeEach] [sig-node] Kubelet + test/e2e/common/node/kubelet.go:40 +[It] should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/framework/framework.go:652 +Sep 7 08:13:43.847: INFO: The status of Pod 
busybox-host-aliases1f261f83-07d9-45e8-b356-aeb0941bf263 is Pending, waiting for it to be Running (with Ready = true) +Sep 7 08:13:45.869: INFO: The status of Pod busybox-host-aliases1f261f83-07d9-45e8-b356-aeb0941bf263 is Pending, waiting for it to be Running (with Ready = true) +Sep 7 08:13:47.859: INFO: The status of Pod busybox-host-aliases1f261f83-07d9-45e8-b356-aeb0941bf263 is Running (Ready = true) +[AfterEach] [sig-node] Kubelet + test/e2e/framework/framework.go:188 +Sep 7 08:13:47.871: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "kubelet-test-2307" for this suite. +•{"msg":"PASSED [sig-node] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]","total":356,"completed":150,"skipped":2796,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-network] Services + should test the lifecycle of an Endpoint [Conformance] + test/e2e/framework/framework.go:652 +[BeforeEach] [sig-network] Services + test/e2e/framework/framework.go:187 +STEP: Creating a kubernetes client +Sep 7 08:13:47.896: INFO: >>> kubeConfig: /tmp/kubeconfig-1156948534 +STEP: Building a namespace api object, basename services +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[BeforeEach] [sig-network] Services + test/e2e/network/service.go:758 +[It] should test the lifecycle of an Endpoint [Conformance] + test/e2e/framework/framework.go:652 +STEP: creating an Endpoint +STEP: waiting for available Endpoint +STEP: listing all Endpoints +STEP: updating the Endpoint +STEP: fetching the Endpoint +STEP: patching the Endpoint +STEP: fetching the Endpoint +STEP: deleting the Endpoint by Collection +STEP: waiting for Endpoint deletion +STEP: fetching the Endpoint +[AfterEach] [sig-network] Services + 
test/e2e/framework/framework.go:188 +Sep 7 08:13:47.999: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "services-5739" for this suite. +[AfterEach] [sig-network] Services + test/e2e/network/service.go:762 +•{"msg":"PASSED [sig-network] Services should test the lifecycle of an Endpoint [Conformance]","total":356,"completed":151,"skipped":2891,"failed":0} +SSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-network] EndpointSlice + should create Endpoints and EndpointSlices for Pods matching a Service [Conformance] + test/e2e/framework/framework.go:652 +[BeforeEach] [sig-network] EndpointSlice + test/e2e/framework/framework.go:187 +STEP: Creating a kubernetes client +Sep 7 08:13:48.008: INFO: >>> kubeConfig: /tmp/kubeconfig-1156948534 +STEP: Building a namespace api object, basename endpointslice +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[BeforeEach] [sig-network] EndpointSlice + test/e2e/network/endpointslice.go:51 +[It] should create Endpoints and EndpointSlices for Pods matching a Service [Conformance] + test/e2e/framework/framework.go:652 +STEP: referencing a single matching pod +STEP: referencing matching pods with named port +STEP: creating empty Endpoints and EndpointSlices for no matching Pods +STEP: recreating EndpointSlices after they've been deleted +Sep 7 08:14:08.321: INFO: EndpointSlice for Service endpointslice-4361/example-named-port not found +[AfterEach] [sig-network] EndpointSlice + test/e2e/framework/framework.go:188 +Sep 7 08:14:18.349: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "endpointslice-4361" for this suite. 
+ +• [SLOW TEST:30.357 seconds] +[sig-network] EndpointSlice +test/e2e/network/common/framework.go:23 + should create Endpoints and EndpointSlices for Pods matching a Service [Conformance] + test/e2e/framework/framework.go:652 +------------------------------ +{"msg":"PASSED [sig-network] EndpointSlice should create Endpoints and EndpointSlices for Pods matching a Service [Conformance]","total":356,"completed":152,"skipped":2913,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-node] Downward API + should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] + test/e2e/framework/framework.go:652 +[BeforeEach] [sig-node] Downward API + test/e2e/framework/framework.go:187 +STEP: Creating a kubernetes client +Sep 7 08:14:18.365: INFO: >>> kubeConfig: /tmp/kubeconfig-1156948534 +STEP: Building a namespace api object, basename downward-api +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[It] should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] + test/e2e/framework/framework.go:652 +STEP: Creating a pod to test downward api env vars +Sep 7 08:14:18.431: INFO: Waiting up to 5m0s for pod "downward-api-4d6330dc-1960-45ed-95d3-05cec76db70f" in namespace "downward-api-6100" to be "Succeeded or Failed" +Sep 7 08:14:18.441: INFO: Pod "downward-api-4d6330dc-1960-45ed-95d3-05cec76db70f": Phase="Pending", Reason="", readiness=false. Elapsed: 9.619086ms +Sep 7 08:14:20.450: INFO: Pod "downward-api-4d6330dc-1960-45ed-95d3-05cec76db70f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.018605567s +Sep 7 08:14:22.461: INFO: Pod "downward-api-4d6330dc-1960-45ed-95d3-05cec76db70f": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.030210215s +STEP: Saw pod success +Sep 7 08:14:22.461: INFO: Pod "downward-api-4d6330dc-1960-45ed-95d3-05cec76db70f" satisfied condition "Succeeded or Failed" +Sep 7 08:14:22.465: INFO: Trying to get logs from node 172.31.51.96 pod downward-api-4d6330dc-1960-45ed-95d3-05cec76db70f container dapi-container: +STEP: delete the pod +Sep 7 08:14:22.493: INFO: Waiting for pod downward-api-4d6330dc-1960-45ed-95d3-05cec76db70f to disappear +Sep 7 08:14:22.499: INFO: Pod downward-api-4d6330dc-1960-45ed-95d3-05cec76db70f no longer exists +[AfterEach] [sig-node] Downward API + test/e2e/framework/framework.go:188 +Sep 7 08:14:22.499: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "downward-api-6100" for this suite. +•{"msg":"PASSED [sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]","total":356,"completed":153,"skipped":2950,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-storage] Projected secret + should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/framework/framework.go:652 +[BeforeEach] [sig-storage] Projected secret + test/e2e/framework/framework.go:187 +STEP: Creating a kubernetes client +Sep 7 08:14:22.509: INFO: >>> kubeConfig: /tmp/kubeconfig-1156948534 +STEP: Building a namespace api object, basename projected +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/framework/framework.go:652 +STEP: Creating projection with secret that has name projected-secret-test-67e436c6-def4-4b02-8b3b-545d7d54e254 +STEP: Creating a pod to test consume secrets +Sep 7 08:14:22.624: INFO: Waiting up to 5m0s 
for pod "pod-projected-secrets-5b8aaa0a-a559-4869-ba43-77cba30917e3" in namespace "projected-2334" to be "Succeeded or Failed" +Sep 7 08:14:22.629: INFO: Pod "pod-projected-secrets-5b8aaa0a-a559-4869-ba43-77cba30917e3": Phase="Pending", Reason="", readiness=false. Elapsed: 5.573767ms +Sep 7 08:14:24.647: INFO: Pod "pod-projected-secrets-5b8aaa0a-a559-4869-ba43-77cba30917e3": Phase="Running", Reason="", readiness=false. Elapsed: 2.023534445s +Sep 7 08:14:26.655: INFO: Pod "pod-projected-secrets-5b8aaa0a-a559-4869-ba43-77cba30917e3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.031531995s +STEP: Saw pod success +Sep 7 08:14:26.655: INFO: Pod "pod-projected-secrets-5b8aaa0a-a559-4869-ba43-77cba30917e3" satisfied condition "Succeeded or Failed" +Sep 7 08:14:26.659: INFO: Trying to get logs from node 172.31.51.97 pod pod-projected-secrets-5b8aaa0a-a559-4869-ba43-77cba30917e3 container projected-secret-volume-test: +STEP: delete the pod +Sep 7 08:14:26.673: INFO: Waiting for pod pod-projected-secrets-5b8aaa0a-a559-4869-ba43-77cba30917e3 to disappear +Sep 7 08:14:26.681: INFO: Pod pod-projected-secrets-5b8aaa0a-a559-4869-ba43-77cba30917e3 no longer exists +[AfterEach] [sig-storage] Projected secret + test/e2e/framework/framework.go:188 +Sep 7 08:14:26.681: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "projected-2334" for this suite. 
+•{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":356,"completed":154,"skipped":2989,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-storage] Secrets + should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/framework/framework.go:652 +[BeforeEach] [sig-storage] Secrets + test/e2e/framework/framework.go:187 +STEP: Creating a kubernetes client +Sep 7 08:14:26.688: INFO: >>> kubeConfig: /tmp/kubeconfig-1156948534 +STEP: Building a namespace api object, basename secrets +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/framework/framework.go:652 +STEP: Creating secret with name secret-test-a134844b-4157-4ffe-8c43-23ce40b8ad10 +STEP: Creating a pod to test consume secrets +Sep 7 08:14:26.742: INFO: Waiting up to 5m0s for pod "pod-secrets-71ad20f9-0466-4c92-9f0b-b37b27c2d5b3" in namespace "secrets-7441" to be "Succeeded or Failed" +Sep 7 08:14:26.767: INFO: Pod "pod-secrets-71ad20f9-0466-4c92-9f0b-b37b27c2d5b3": Phase="Pending", Reason="", readiness=false. Elapsed: 24.476057ms +Sep 7 08:14:28.780: INFO: Pod "pod-secrets-71ad20f9-0466-4c92-9f0b-b37b27c2d5b3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.038104005s +Sep 7 08:14:30.784: INFO: Pod "pod-secrets-71ad20f9-0466-4c92-9f0b-b37b27c2d5b3": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.042345895s +STEP: Saw pod success +Sep 7 08:14:30.785: INFO: Pod "pod-secrets-71ad20f9-0466-4c92-9f0b-b37b27c2d5b3" satisfied condition "Succeeded or Failed" +Sep 7 08:14:30.788: INFO: Trying to get logs from node 172.31.51.96 pod pod-secrets-71ad20f9-0466-4c92-9f0b-b37b27c2d5b3 container secret-volume-test: +STEP: delete the pod +Sep 7 08:14:30.805: INFO: Waiting for pod pod-secrets-71ad20f9-0466-4c92-9f0b-b37b27c2d5b3 to disappear +Sep 7 08:14:30.809: INFO: Pod pod-secrets-71ad20f9-0466-4c92-9f0b-b37b27c2d5b3 no longer exists +[AfterEach] [sig-storage] Secrets + test/e2e/framework/framework.go:188 +Sep 7 08:14:30.809: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "secrets-7441" for this suite. +•{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":356,"completed":155,"skipped":3020,"failed":0} +SSSSSSSSSSSSSSSS +------------------------------ +[sig-apps] CronJob + should not schedule new jobs when ForbidConcurrent [Slow] [Conformance] + test/e2e/framework/framework.go:652 +[BeforeEach] [sig-apps] CronJob + test/e2e/framework/framework.go:187 +STEP: Creating a kubernetes client +Sep 7 08:14:30.817: INFO: >>> kubeConfig: /tmp/kubeconfig-1156948534 +STEP: Building a namespace api object, basename cronjob +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[It] should not schedule new jobs when ForbidConcurrent [Slow] [Conformance] + test/e2e/framework/framework.go:652 +STEP: Creating a ForbidConcurrent cronjob +STEP: Ensuring a job is scheduled +STEP: Ensuring exactly one is scheduled +STEP: Ensuring exactly one running job exists by listing jobs explicitly +STEP: Ensuring no more jobs are scheduled +STEP: Removing cronjob +[AfterEach] [sig-apps] CronJob + test/e2e/framework/framework.go:188 +Sep 7 08:20:00.925: INFO: Waiting 
up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "cronjob-9436" for this suite. + +• [SLOW TEST:330.141 seconds] +[sig-apps] CronJob +test/e2e/apps/framework.go:23 + should not schedule new jobs when ForbidConcurrent [Slow] [Conformance] + test/e2e/framework/framework.go:652 +------------------------------ +{"msg":"PASSED [sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","total":356,"completed":156,"skipped":3036,"failed":0} +SSSSS +------------------------------ +[sig-storage] Projected secret + optional updates should be reflected in volume [NodeConformance] [Conformance] + test/e2e/framework/framework.go:652 +[BeforeEach] [sig-storage] Projected secret + test/e2e/framework/framework.go:187 +STEP: Creating a kubernetes client +Sep 7 08:20:00.958: INFO: >>> kubeConfig: /tmp/kubeconfig-1156948534 +STEP: Building a namespace api object, basename projected +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[It] optional updates should be reflected in volume [NodeConformance] [Conformance] + test/e2e/framework/framework.go:652 +STEP: Creating secret with name s-test-opt-del-e8edcbed-f809-4039-ba89-80f38ccdb907 +STEP: Creating secret with name s-test-opt-upd-98761403-943f-460a-bb24-54b42be43de8 +STEP: Creating the pod +Sep 7 08:20:01.068: INFO: The status of Pod pod-projected-secrets-afb52e00-b09a-439c-9046-511a9dc5aee7 is Pending, waiting for it to be Running (with Ready = true) +Sep 7 08:20:03.088: INFO: The status of Pod pod-projected-secrets-afb52e00-b09a-439c-9046-511a9dc5aee7 is Running (Ready = true) +STEP: Deleting secret s-test-opt-del-e8edcbed-f809-4039-ba89-80f38ccdb907 +STEP: Updating secret s-test-opt-upd-98761403-943f-460a-bb24-54b42be43de8 +STEP: Creating secret with name s-test-opt-create-04b5b417-9081-4f8e-89d4-3166638f3eeb +STEP: waiting to observe update in volume +[AfterEach] [sig-storage] 
Projected secret + test/e2e/framework/framework.go:188 +Sep 7 08:20:05.155: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "projected-7807" for this suite. +•{"msg":"PASSED [sig-storage] Projected secret optional updates should be reflected in volume [NodeConformance] [Conformance]","total":356,"completed":157,"skipped":3041,"failed":0} +SS +------------------------------ +[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + works for multiple CRDs of same group and version but different kinds [Conformance] + test/e2e/framework/framework.go:652 +[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + test/e2e/framework/framework.go:187 +STEP: Creating a kubernetes client +Sep 7 08:20:05.204: INFO: >>> kubeConfig: /tmp/kubeconfig-1156948534 +STEP: Building a namespace api object, basename crd-publish-openapi +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[It] works for multiple CRDs of same group and version but different kinds [Conformance] + test/e2e/framework/framework.go:652 +STEP: CRs in the same group and version but different kinds (two CRDs) show up in OpenAPI documentation +Sep 7 08:20:05.236: INFO: >>> kubeConfig: /tmp/kubeconfig-1156948534 +Sep 7 08:20:08.725: INFO: >>> kubeConfig: /tmp/kubeconfig-1156948534 +[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + test/e2e/framework/framework.go:188 +Sep 7 08:20:25.147: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "crd-publish-openapi-4860" for this suite. 
+ +• [SLOW TEST:19.964 seconds] +[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] +test/e2e/apimachinery/framework.go:23 + works for multiple CRDs of same group and version but different kinds [Conformance] + test/e2e/framework/framework.go:652 +------------------------------ +{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group and version but different kinds [Conformance]","total":356,"completed":158,"skipped":3043,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-node] NoExecuteTaintManager Single Pod [Serial] + removing taint cancels eviction [Disruptive] [Conformance] + test/e2e/framework/framework.go:652 +[BeforeEach] [sig-node] NoExecuteTaintManager Single Pod [Serial] + test/e2e/framework/framework.go:187 +STEP: Creating a kubernetes client +Sep 7 08:20:25.168: INFO: >>> kubeConfig: /tmp/kubeconfig-1156948534 +STEP: Building a namespace api object, basename taint-single-pod +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[BeforeEach] [sig-node] NoExecuteTaintManager Single Pod [Serial] + test/e2e/node/taints.go:166 +Sep 7 08:20:25.210: INFO: Waiting up to 1m0s for all nodes to be ready +Sep 7 08:21:25.230: INFO: Waiting for terminating namespaces to be deleted... +[It] removing taint cancels eviction [Disruptive] [Conformance] + test/e2e/framework/framework.go:652 +Sep 7 08:21:25.234: INFO: Starting informer... +STEP: Starting pod... +Sep 7 08:21:25.451: INFO: Pod is running on 172.31.51.96. Tainting Node +STEP: Trying to apply a taint on the Node +STEP: verifying the node has the taint kubernetes.io/e2e-evict-taint-key=evictTaintVal:NoExecute +STEP: Waiting short time to make sure Pod is queued for deletion +Sep 7 08:21:25.469: INFO: Pod wasn't evicted. 
Proceeding +Sep 7 08:21:25.469: INFO: Removing taint from Node +STEP: verifying the node doesn't have the taint kubernetes.io/e2e-evict-taint-key=evictTaintVal:NoExecute +STEP: Waiting some time to make sure that toleration time passed. +Sep 7 08:22:40.497: INFO: Pod wasn't evicted. Test successful +[AfterEach] [sig-node] NoExecuteTaintManager Single Pod [Serial] + test/e2e/framework/framework.go:188 +Sep 7 08:22:40.497: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "taint-single-pod-562" for this suite. + +• [SLOW TEST:135.361 seconds] +[sig-node] NoExecuteTaintManager Single Pod [Serial] +test/e2e/node/framework.go:23 + removing taint cancels eviction [Disruptive] [Conformance] + test/e2e/framework/framework.go:652 +------------------------------ +{"msg":"PASSED [sig-node] NoExecuteTaintManager Single Pod [Serial] removing taint cancels eviction [Disruptive] [Conformance]","total":356,"completed":159,"skipped":3085,"failed":0} +SSSSSSSS +------------------------------ +[sig-storage] EmptyDir volumes + should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/framework/framework.go:652 +[BeforeEach] [sig-storage] EmptyDir volumes + test/e2e/framework/framework.go:187 +STEP: Creating a kubernetes client +Sep 7 08:22:40.529: INFO: >>> kubeConfig: /tmp/kubeconfig-1156948534 +STEP: Building a namespace api object, basename emptydir +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[It] should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/framework/framework.go:652 +STEP: Creating a pod to test emptydir 0644 on node default medium +Sep 7 08:22:40.584: INFO: Waiting up to 5m0s for pod "pod-1f05d4cb-b665-46f7-bfdd-b96abe0f9bef" in namespace "emptydir-3651" to be "Succeeded or Failed" +Sep 7 08:22:40.594: INFO: Pod "pod-1f05d4cb-b665-46f7-bfdd-b96abe0f9bef": 
Phase="Pending", Reason="", readiness=false. Elapsed: 10.144975ms +Sep 7 08:22:42.605: INFO: Pod "pod-1f05d4cb-b665-46f7-bfdd-b96abe0f9bef": Phase="Pending", Reason="", readiness=false. Elapsed: 2.02122992s +Sep 7 08:22:44.617: INFO: Pod "pod-1f05d4cb-b665-46f7-bfdd-b96abe0f9bef": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.03326648s +STEP: Saw pod success +Sep 7 08:22:44.617: INFO: Pod "pod-1f05d4cb-b665-46f7-bfdd-b96abe0f9bef" satisfied condition "Succeeded or Failed" +Sep 7 08:22:44.619: INFO: Trying to get logs from node 172.31.51.96 pod pod-1f05d4cb-b665-46f7-bfdd-b96abe0f9bef container test-container: +STEP: delete the pod +Sep 7 08:22:44.645: INFO: Waiting for pod pod-1f05d4cb-b665-46f7-bfdd-b96abe0f9bef to disappear +Sep 7 08:22:44.652: INFO: Pod pod-1f05d4cb-b665-46f7-bfdd-b96abe0f9bef no longer exists +[AfterEach] [sig-storage] EmptyDir volumes + test/e2e/framework/framework.go:188 +Sep 7 08:22:44.652: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "emptydir-3651" for this suite. 
+•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":356,"completed":160,"skipped":3093,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-instrumentation] Events API + should delete a collection of events [Conformance] + test/e2e/framework/framework.go:652 +[BeforeEach] [sig-instrumentation] Events API + test/e2e/framework/framework.go:187 +STEP: Creating a kubernetes client +Sep 7 08:22:44.663: INFO: >>> kubeConfig: /tmp/kubeconfig-1156948534 +STEP: Building a namespace api object, basename events +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[BeforeEach] [sig-instrumentation] Events API + test/e2e/instrumentation/events.go:84 +[It] should delete a collection of events [Conformance] + test/e2e/framework/framework.go:652 +STEP: Create set of events +STEP: get a list of Events with a label in the current namespace +STEP: delete a list of events +Sep 7 08:22:44.741: INFO: requesting DeleteCollection of events +STEP: check that the list of events matches the requested quantity +[AfterEach] [sig-instrumentation] Events API + test/e2e/framework/framework.go:188 +Sep 7 08:22:44.759: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "events-7246" for this suite. 
+•{"msg":"PASSED [sig-instrumentation] Events API should delete a collection of events [Conformance]","total":356,"completed":161,"skipped":3186,"failed":0} +SSSSSS +------------------------------ +[sig-storage] Projected downwardAPI + should provide container's memory request [NodeConformance] [Conformance] + test/e2e/framework/framework.go:652 +[BeforeEach] [sig-storage] Projected downwardAPI + test/e2e/framework/framework.go:187 +STEP: Creating a kubernetes client +Sep 7 08:22:44.786: INFO: >>> kubeConfig: /tmp/kubeconfig-1156948534 +STEP: Building a namespace api object, basename projected +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[BeforeEach] [sig-storage] Projected downwardAPI + test/e2e/common/storage/projected_downwardapi.go:43 +[It] should provide container's memory request [NodeConformance] [Conformance] + test/e2e/framework/framework.go:652 +STEP: Creating a pod to test downward API volume plugin +Sep 7 08:22:44.849: INFO: Waiting up to 5m0s for pod "downwardapi-volume-f2503fd2-e855-4fc8-ba15-d4d16eec3287" in namespace "projected-1791" to be "Succeeded or Failed" +Sep 7 08:22:44.871: INFO: Pod "downwardapi-volume-f2503fd2-e855-4fc8-ba15-d4d16eec3287": Phase="Pending", Reason="", readiness=false. Elapsed: 22.239417ms +Sep 7 08:22:46.886: INFO: Pod "downwardapi-volume-f2503fd2-e855-4fc8-ba15-d4d16eec3287": Phase="Running", Reason="", readiness=true. Elapsed: 2.036862637s +Sep 7 08:22:48.901: INFO: Pod "downwardapi-volume-f2503fd2-e855-4fc8-ba15-d4d16eec3287": Phase="Running", Reason="", readiness=false. Elapsed: 4.052374304s +Sep 7 08:22:50.913: INFO: Pod "downwardapi-volume-f2503fd2-e855-4fc8-ba15-d4d16eec3287": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.063958208s +STEP: Saw pod success +Sep 7 08:22:50.913: INFO: Pod "downwardapi-volume-f2503fd2-e855-4fc8-ba15-d4d16eec3287" satisfied condition "Succeeded or Failed" +Sep 7 08:22:50.917: INFO: Trying to get logs from node 172.31.51.96 pod downwardapi-volume-f2503fd2-e855-4fc8-ba15-d4d16eec3287 container client-container: +STEP: delete the pod +Sep 7 08:22:50.962: INFO: Waiting for pod downwardapi-volume-f2503fd2-e855-4fc8-ba15-d4d16eec3287 to disappear +Sep 7 08:22:50.975: INFO: Pod downwardapi-volume-f2503fd2-e855-4fc8-ba15-d4d16eec3287 no longer exists +[AfterEach] [sig-storage] Projected downwardAPI + test/e2e/framework/framework.go:188 +Sep 7 08:22:50.975: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "projected-1791" for this suite. + +• [SLOW TEST:6.209 seconds] +[sig-storage] Projected downwardAPI +test/e2e/common/storage/framework.go:23 + should provide container's memory request [NodeConformance] [Conformance] + test/e2e/framework/framework.go:652 +------------------------------ +{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory request [NodeConformance] [Conformance]","total":356,"completed":162,"skipped":3192,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-node] ConfigMap + should be consumable via the environment [NodeConformance] [Conformance] + test/e2e/framework/framework.go:652 +[BeforeEach] [sig-node] ConfigMap + test/e2e/framework/framework.go:187 +STEP: Creating a kubernetes client +Sep 7 08:22:50.996: INFO: >>> kubeConfig: /tmp/kubeconfig-1156948534 +STEP: Building a namespace api object, basename configmap +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[It] should be consumable via the environment [NodeConformance] [Conformance] + 
test/e2e/framework/framework.go:652 +STEP: Creating configMap configmap-2275/configmap-test-3ff335c8-27f2-42f8-a371-48a3896ff39c +STEP: Creating a pod to test consume configMaps +Sep 7 08:22:51.063: INFO: Waiting up to 5m0s for pod "pod-configmaps-74d39fa7-e2b9-409f-874f-c21ca2c855c9" in namespace "configmap-2275" to be "Succeeded or Failed" +Sep 7 08:22:51.104: INFO: Pod "pod-configmaps-74d39fa7-e2b9-409f-874f-c21ca2c855c9": Phase="Pending", Reason="", readiness=false. Elapsed: 40.313015ms +Sep 7 08:22:53.117: INFO: Pod "pod-configmaps-74d39fa7-e2b9-409f-874f-c21ca2c855c9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.053385428s +Sep 7 08:22:55.130: INFO: Pod "pod-configmaps-74d39fa7-e2b9-409f-874f-c21ca2c855c9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.066310798s +STEP: Saw pod success +Sep 7 08:22:55.130: INFO: Pod "pod-configmaps-74d39fa7-e2b9-409f-874f-c21ca2c855c9" satisfied condition "Succeeded or Failed" +Sep 7 08:22:55.133: INFO: Trying to get logs from node 172.31.51.96 pod pod-configmaps-74d39fa7-e2b9-409f-874f-c21ca2c855c9 container env-test: +STEP: delete the pod +Sep 7 08:22:55.171: INFO: Waiting for pod pod-configmaps-74d39fa7-e2b9-409f-874f-c21ca2c855c9 to disappear +Sep 7 08:22:55.175: INFO: Pod pod-configmaps-74d39fa7-e2b9-409f-874f-c21ca2c855c9 no longer exists +[AfterEach] [sig-node] ConfigMap + test/e2e/framework/framework.go:188 +Sep 7 08:22:55.175: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "configmap-2275" for this suite. 
+•{"msg":"PASSED [sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance]","total":356,"completed":163,"skipped":3304,"failed":0} +SSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-cli] Kubectl client Kubectl expose + should create services for rc [Conformance] + test/e2e/framework/framework.go:652 +[BeforeEach] [sig-cli] Kubectl client + test/e2e/framework/framework.go:187 +STEP: Creating a kubernetes client +Sep 7 08:22:55.184: INFO: >>> kubeConfig: /tmp/kubeconfig-1156948534 +STEP: Building a namespace api object, basename kubectl +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[BeforeEach] [sig-cli] Kubectl client + test/e2e/kubectl/kubectl.go:245 +[It] should create services for rc [Conformance] + test/e2e/framework/framework.go:652 +STEP: creating Agnhost RC +Sep 7 08:22:55.229: INFO: namespace kubectl-9103 +Sep 7 08:22:55.229: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1156948534 --namespace=kubectl-9103 create -f -' +Sep 7 08:22:56.614: INFO: stderr: "" +Sep 7 08:22:56.614: INFO: stdout: "replicationcontroller/agnhost-primary created\n" +STEP: Waiting for Agnhost primary to start. +Sep 7 08:22:57.623: INFO: Selector matched 1 pods for map[app:agnhost] +Sep 7 08:22:57.623: INFO: Found 0 / 1 +Sep 7 08:22:58.621: INFO: Selector matched 1 pods for map[app:agnhost] +Sep 7 08:22:58.621: INFO: Found 1 / 1 +Sep 7 08:22:58.621: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 +Sep 7 08:22:58.629: INFO: Selector matched 1 pods for map[app:agnhost] +Sep 7 08:22:58.629: INFO: ForEach: Found 1 pods from the filter. Now looping through them. 
+Sep 7 08:22:58.629: INFO: wait on agnhost-primary startup in kubectl-9103 +Sep 7 08:22:58.629: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1156948534 --namespace=kubectl-9103 logs agnhost-primary-k6t82 agnhost-primary' +Sep 7 08:22:58.734: INFO: stderr: "" +Sep 7 08:22:58.734: INFO: stdout: "Paused\n" +STEP: exposing RC +Sep 7 08:22:58.734: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1156948534 --namespace=kubectl-9103 expose rc agnhost-primary --name=rm2 --port=1234 --target-port=6379' +Sep 7 08:22:58.864: INFO: stderr: "" +Sep 7 08:22:58.864: INFO: stdout: "service/rm2 exposed\n" +Sep 7 08:22:58.874: INFO: Service rm2 in namespace kubectl-9103 found. +STEP: exposing service +Sep 7 08:23:00.889: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1156948534 --namespace=kubectl-9103 expose service rm2 --name=rm3 --port=2345 --target-port=6379' +Sep 7 08:23:01.013: INFO: stderr: "" +Sep 7 08:23:01.013: INFO: stdout: "service/rm3 exposed\n" +Sep 7 08:23:01.039: INFO: Service rm3 in namespace kubectl-9103 found. +[AfterEach] [sig-cli] Kubectl client + test/e2e/framework/framework.go:188 +Sep 7 08:23:03.047: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "kubectl-9103" for this suite. 
+ +• [SLOW TEST:7.872 seconds] +[sig-cli] Kubectl client +test/e2e/kubectl/framework.go:23 + Kubectl expose + test/e2e/kubectl/kubectl.go:1249 + should create services for rc [Conformance] + test/e2e/framework/framework.go:652 +------------------------------ +{"msg":"PASSED [sig-cli] Kubectl client Kubectl expose should create services for rc [Conformance]","total":356,"completed":164,"skipped":3326,"failed":0} +SSSSSSS +------------------------------ +[sig-apps] Deployment + should run the lifecycle of a Deployment [Conformance] + test/e2e/framework/framework.go:652 +[BeforeEach] [sig-apps] Deployment + test/e2e/framework/framework.go:187 +STEP: Creating a kubernetes client +Sep 7 08:23:03.056: INFO: >>> kubeConfig: /tmp/kubeconfig-1156948534 +STEP: Building a namespace api object, basename deployment +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[BeforeEach] [sig-apps] Deployment + test/e2e/apps/deployment.go:91 +[It] should run the lifecycle of a Deployment [Conformance] + test/e2e/framework/framework.go:652 +STEP: creating a Deployment +STEP: waiting for Deployment to be created +STEP: waiting for all Replicas to be Ready +Sep 7 08:23:03.115: INFO: observed Deployment test-deployment in namespace deployment-758 with ReadyReplicas 0 and labels map[test-deployment-static:true] +Sep 7 08:23:03.115: INFO: observed Deployment test-deployment in namespace deployment-758 with ReadyReplicas 0 and labels map[test-deployment-static:true] +Sep 7 08:23:03.128: INFO: observed Deployment test-deployment in namespace deployment-758 with ReadyReplicas 0 and labels map[test-deployment-static:true] +Sep 7 08:23:03.128: INFO: observed Deployment test-deployment in namespace deployment-758 with ReadyReplicas 0 and labels map[test-deployment-static:true] +Sep 7 08:23:03.150: INFO: observed Deployment test-deployment in namespace deployment-758 with ReadyReplicas 0 and labels 
map[test-deployment-static:true] +Sep 7 08:23:03.150: INFO: observed Deployment test-deployment in namespace deployment-758 with ReadyReplicas 0 and labels map[test-deployment-static:true] +Sep 7 08:23:03.242: INFO: observed Deployment test-deployment in namespace deployment-758 with ReadyReplicas 0 and labels map[test-deployment-static:true] +Sep 7 08:23:03.242: INFO: observed Deployment test-deployment in namespace deployment-758 with ReadyReplicas 0 and labels map[test-deployment-static:true] +Sep 7 08:23:04.545: INFO: observed Deployment test-deployment in namespace deployment-758 with ReadyReplicas 1 and labels map[test-deployment-static:true] +Sep 7 08:23:04.545: INFO: observed Deployment test-deployment in namespace deployment-758 with ReadyReplicas 1 and labels map[test-deployment-static:true] +Sep 7 08:23:04.573: INFO: observed Deployment test-deployment in namespace deployment-758 with ReadyReplicas 2 and labels map[test-deployment-static:true] +STEP: patching the Deployment +Sep 7 08:23:04.586: INFO: observed event type ADDED +STEP: waiting for Replicas to scale +Sep 7 08:23:04.587: INFO: observed Deployment test-deployment in namespace deployment-758 with ReadyReplicas 0 +Sep 7 08:23:04.587: INFO: observed Deployment test-deployment in namespace deployment-758 with ReadyReplicas 0 +Sep 7 08:23:04.587: INFO: observed Deployment test-deployment in namespace deployment-758 with ReadyReplicas 0 +Sep 7 08:23:04.587: INFO: observed Deployment test-deployment in namespace deployment-758 with ReadyReplicas 0 +Sep 7 08:23:04.588: INFO: observed Deployment test-deployment in namespace deployment-758 with ReadyReplicas 0 +Sep 7 08:23:04.588: INFO: observed Deployment test-deployment in namespace deployment-758 with ReadyReplicas 0 +Sep 7 08:23:04.588: INFO: observed Deployment test-deployment in namespace deployment-758 with ReadyReplicas 0 +Sep 7 08:23:04.588: INFO: observed Deployment test-deployment in namespace deployment-758 with ReadyReplicas 0 +Sep 7 
08:23:04.588: INFO: observed Deployment test-deployment in namespace deployment-758 with ReadyReplicas 1 +Sep 7 08:23:04.588: INFO: observed Deployment test-deployment in namespace deployment-758 with ReadyReplicas 1 +Sep 7 08:23:04.588: INFO: observed Deployment test-deployment in namespace deployment-758 with ReadyReplicas 2 +Sep 7 08:23:04.588: INFO: observed Deployment test-deployment in namespace deployment-758 with ReadyReplicas 2 +Sep 7 08:23:04.588: INFO: observed Deployment test-deployment in namespace deployment-758 with ReadyReplicas 2 +Sep 7 08:23:04.588: INFO: observed Deployment test-deployment in namespace deployment-758 with ReadyReplicas 2 +Sep 7 08:23:04.608: INFO: observed Deployment test-deployment in namespace deployment-758 with ReadyReplicas 2 +Sep 7 08:23:04.608: INFO: observed Deployment test-deployment in namespace deployment-758 with ReadyReplicas 2 +Sep 7 08:23:04.657: INFO: observed Deployment test-deployment in namespace deployment-758 with ReadyReplicas 2 +Sep 7 08:23:04.657: INFO: observed Deployment test-deployment in namespace deployment-758 with ReadyReplicas 2 +Sep 7 08:23:04.679: INFO: observed Deployment test-deployment in namespace deployment-758 with ReadyReplicas 1 +Sep 7 08:23:04.679: INFO: observed Deployment test-deployment in namespace deployment-758 with ReadyReplicas 1 +Sep 7 08:23:06.638: INFO: observed Deployment test-deployment in namespace deployment-758 with ReadyReplicas 2 +Sep 7 08:23:06.638: INFO: observed Deployment test-deployment in namespace deployment-758 with ReadyReplicas 2 +Sep 7 08:23:06.668: INFO: observed Deployment test-deployment in namespace deployment-758 with ReadyReplicas 1 +STEP: listing Deployments +Sep 7 08:23:06.675: INFO: Found test-deployment with labels: map[test-deployment:patched test-deployment-static:true] +STEP: updating the Deployment +Sep 7 08:23:06.691: INFO: observed Deployment test-deployment in namespace deployment-758 with ReadyReplicas 1 +STEP: fetching the DeploymentStatus 
+Sep 7 08:23:06.707: INFO: observed Deployment test-deployment in namespace deployment-758 with ReadyReplicas 1 and labels map[test-deployment:updated test-deployment-static:true] +Sep 7 08:23:06.729: INFO: observed Deployment test-deployment in namespace deployment-758 with ReadyReplicas 1 and labels map[test-deployment:updated test-deployment-static:true] +Sep 7 08:23:06.767: INFO: observed Deployment test-deployment in namespace deployment-758 with ReadyReplicas 1 and labels map[test-deployment:updated test-deployment-static:true] +Sep 7 08:23:06.821: INFO: observed Deployment test-deployment in namespace deployment-758 with ReadyReplicas 1 and labels map[test-deployment:updated test-deployment-static:true] +Sep 7 08:23:06.836: INFO: observed Deployment test-deployment in namespace deployment-758 with ReadyReplicas 1 and labels map[test-deployment:updated test-deployment-static:true] +Sep 7 08:23:08.573: INFO: observed Deployment test-deployment in namespace deployment-758 with ReadyReplicas 2 and labels map[test-deployment:updated test-deployment-static:true] +Sep 7 08:23:09.703: INFO: observed Deployment test-deployment in namespace deployment-758 with ReadyReplicas 3 and labels map[test-deployment:updated test-deployment-static:true] +Sep 7 08:23:09.836: INFO: observed Deployment test-deployment in namespace deployment-758 with ReadyReplicas 3 and labels map[test-deployment:updated test-deployment-static:true] +Sep 7 08:23:09.845: INFO: observed Deployment test-deployment in namespace deployment-758 with ReadyReplicas 2 and labels map[test-deployment:updated test-deployment-static:true] +Sep 7 08:23:11.577: INFO: observed Deployment test-deployment in namespace deployment-758 with ReadyReplicas 3 and labels map[test-deployment:updated test-deployment-static:true] +STEP: patching the DeploymentStatus +STEP: fetching the DeploymentStatus +Sep 7 08:23:11.627: INFO: observed Deployment test-deployment in namespace deployment-758 with ReadyReplicas 1 +Sep 7 
08:23:11.627: INFO: observed Deployment test-deployment in namespace deployment-758 with ReadyReplicas 1 +Sep 7 08:23:11.627: INFO: observed Deployment test-deployment in namespace deployment-758 with ReadyReplicas 1 +Sep 7 08:23:11.627: INFO: observed Deployment test-deployment in namespace deployment-758 with ReadyReplicas 1 +Sep 7 08:23:11.627: INFO: observed Deployment test-deployment in namespace deployment-758 with ReadyReplicas 1 +Sep 7 08:23:11.627: INFO: observed Deployment test-deployment in namespace deployment-758 with ReadyReplicas 2 +Sep 7 08:23:11.627: INFO: observed Deployment test-deployment in namespace deployment-758 with ReadyReplicas 3 +Sep 7 08:23:11.627: INFO: observed Deployment test-deployment in namespace deployment-758 with ReadyReplicas 3 +Sep 7 08:23:11.627: INFO: observed Deployment test-deployment in namespace deployment-758 with ReadyReplicas 2 +Sep 7 08:23:11.627: INFO: observed Deployment test-deployment in namespace deployment-758 with ReadyReplicas 3 +STEP: deleting the Deployment +Sep 7 08:23:11.636: INFO: observed event type MODIFIED +Sep 7 08:23:11.636: INFO: observed event type MODIFIED +Sep 7 08:23:11.636: INFO: observed event type MODIFIED +Sep 7 08:23:11.642: INFO: observed event type MODIFIED +Sep 7 08:23:11.642: INFO: observed event type MODIFIED +Sep 7 08:23:11.642: INFO: observed event type MODIFIED +Sep 7 08:23:11.642: INFO: observed event type MODIFIED +Sep 7 08:23:11.642: INFO: observed event type MODIFIED +Sep 7 08:23:11.642: INFO: observed event type MODIFIED +Sep 7 08:23:11.643: INFO: observed event type MODIFIED +Sep 7 08:23:11.643: INFO: observed event type MODIFIED +[AfterEach] [sig-apps] Deployment + test/e2e/apps/deployment.go:84 +Sep 7 08:23:11.648: INFO: Log out all the ReplicaSets if there is no deployment created +Sep 7 08:23:11.654: INFO: ReplicaSet "test-deployment-6b48c869b6": +&ReplicaSet{ObjectMeta:{test-deployment-6b48c869b6 deployment-758 141d1895-9aa9-47b1-bbe3-ad734b700ba0 16283 3 2022-09-07 
08:23:03 +0000 UTC map[pod-template-hash:6b48c869b6 test-deployment-static:true] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-deployment 55c37379-45e4-4acb-9027-322638ad01cd 0xc0044da7c7 0xc0044da7c8}] [] [{kube-controller-manager Update apps/v1 2022-09-07 08:23:03 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:pod-template-hash":{},"f:test-deployment-static":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"55c37379-45e4-4acb-9027-322638ad01cd\"}":{}}},"f:spec":{"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:pod-template-hash":{},"f:test-deployment-static":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"test-deployment\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} } {kube-controller-manager Update apps/v1 2022-09-07 08:23:06 +0000 UTC FieldsV1 {"f:status":{"f:observedGeneration":{},"f:replicas":{}}} status}]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{pod-template-hash: 6b48c869b6,test-deployment-static: true,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[pod-template-hash:6b48c869b6 test-deployment-static:true] map[] [] [] []} {[] [] [{test-deployment k8s.gcr.io/e2e-test-images/agnhost:2.39 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent 
SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc0044da850 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] nil}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:3,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} + +Sep 7 08:23:11.664: INFO: ReplicaSet "test-deployment-74c6dd549b": +&ReplicaSet{ObjectMeta:{test-deployment-74c6dd549b deployment-758 e23560da-8634-4a4e-aa03-b66595c652ab 16401 2 2022-09-07 08:23:06 +0000 UTC map[pod-template-hash:74c6dd549b test-deployment-static:true] map[deployment.kubernetes.io/desired-replicas:2 deployment.kubernetes.io/max-replicas:3 deployment.kubernetes.io/revision:3] [{apps/v1 Deployment test-deployment 55c37379-45e4-4acb-9027-322638ad01cd 0xc0044da8b7 0xc0044da8b8}] [] [{kube-controller-manager Update apps/v1 2022-09-07 08:23:06 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:pod-template-hash":{},"f:test-deployment-static":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"55c37379-45e4-4acb-9027-322638ad01cd\"}":{}}},"f:spec":{"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:pod-template-hash":{},"f:test-deployment-static":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"test-deployment\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} } {kube-controller-manager Update apps/v1 2022-09-07 08:23:09 +0000 UTC FieldsV1 {"f:status":{"f:availableReplicas":{},"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{}}} status}]},Spec:ReplicaSetSpec{Replicas:*2,Selector:&v1.LabelSelector{MatchLabels:map[string]string{pod-template-hash: 74c6dd549b,test-deployment-static: true,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[pod-template-hash:74c6dd549b test-deployment-static:true] map[] [] [] []} {[] [] [{test-deployment k8s.gcr.io/e2e-test-images/httpd:2.4.38-2 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc0044da940 ClusterFirst map[] false false false 
&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] nil}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:2,FullyLabeledReplicas:2,ObservedGeneration:2,ReadyReplicas:2,AvailableReplicas:2,Conditions:[]ReplicaSetCondition{},},} + +Sep 7 08:23:11.684: INFO: pod: "test-deployment-74c6dd549b-d6qjs": +&Pod{ObjectMeta:{test-deployment-74c6dd549b-d6qjs test-deployment-74c6dd549b- deployment-758 8020a842-8314-48d1-8727-54b062b6ca04 16416 0 2022-09-07 08:23:06 +0000 UTC 2022-09-07 08:23:12 +0000 UTC 0xc0044dabd8 map[pod-template-hash:74c6dd549b test-deployment-static:true] map[] [{apps/v1 ReplicaSet test-deployment-74c6dd549b e23560da-8634-4a4e-aa03-b66595c652ab 0xc0044dac07 0xc0044dac08}] [] [{kube-controller-manager Update v1 2022-09-07 08:23:06 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:pod-template-hash":{},"f:test-deployment-static":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e23560da-8634-4a4e-aa03-b66595c652ab\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"test-deployment\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2022-09-07 08:23:09 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"172.20.75.61\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-fqsq8,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:test-deployment,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-2,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:
ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-fqsq8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*1,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:172.31.51.96,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:ni
l,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-09-07 08:23:06 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-09-07 08:23:09 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-09-07 08:23:09 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-09-07 08:23:06 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.31.51.96,PodIP:172.20.75.61,StartTime:2022-09-07 08:23:06 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:test-deployment,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2022-09-07 08:23:08 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-2,ImageID:k8s.gcr.io/e2e-test-images/httpd@sha256:1b9d1b2f36cb2dbee1960e82a9344aeb11bd4c4c03abf5e1853e0559c23855e3,ContainerID:containerd://03a22d7dba1487a538dd4a5ba95529238cfff50169e5e6575d1379d44bc0d8f1,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:172.20.75.61,},},EphemeralContainerStatuses:[]ContainerStatus{},},} + +Sep 7 08:23:11.684: INFO: pod: "test-deployment-74c6dd549b-zvjjt": +&Pod{ObjectMeta:{test-deployment-74c6dd549b-zvjjt test-deployment-74c6dd549b- deployment-758 81deede5-6b4a-45e7-89a0-032fc5c3cb56 16415 0 2022-09-07 08:23:09 +0000 UTC 2022-09-07 08:23:12 +0000 UTC 0xc0044dadd0 map[pod-template-hash:74c6dd549b test-deployment-static:true] map[] [{apps/v1 ReplicaSet test-deployment-74c6dd549b e23560da-8634-4a4e-aa03-b66595c652ab 0xc0044dae07 0xc0044dae08}] [] 
[{kube-controller-manager Update v1 2022-09-07 08:23:09 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:pod-template-hash":{},"f:test-deployment-static":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e23560da-8634-4a4e-aa03-b66595c652ab\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"test-deployment\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2022-09-07 08:23:11 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"172.20.97.70\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} 
status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-zhn4d,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:test-deployment,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-2,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-zhn4d,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,R
unAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*1,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:172.31.51.97,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-09-07 08:23:09 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-09-07 08:23:11 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-09-07 08:23:11 +0000 
UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-09-07 08:23:09 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.31.51.97,PodIP:172.20.97.70,StartTime:2022-09-07 08:23:09 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:test-deployment,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2022-09-07 08:23:10 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-2,ImageID:k8s.gcr.io/e2e-test-images/httpd@sha256:1b9d1b2f36cb2dbee1960e82a9344aeb11bd4c4c03abf5e1853e0559c23855e3,ContainerID:containerd://641b7a3f8db14d029d295a1e6fc3a77b129c9a8b02879bf82161ee907504c4fe,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:172.20.97.70,},},EphemeralContainerStatuses:[]ContainerStatus{},},} + +[AfterEach] [sig-apps] Deployment + test/e2e/framework/framework.go:188 +Sep 7 08:23:11.684: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "deployment-758" for this suite. 
+ +• [SLOW TEST:8.661 seconds] +[sig-apps] Deployment +test/e2e/apps/framework.go:23 + should run the lifecycle of a Deployment [Conformance] + test/e2e/framework/framework.go:652 +------------------------------ +{"msg":"PASSED [sig-apps] Deployment should run the lifecycle of a Deployment [Conformance]","total":356,"completed":165,"skipped":3333,"failed":0} +[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + patching/updating a validating webhook should work [Conformance] + test/e2e/framework/framework.go:652 +[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/framework/framework.go:187 +STEP: Creating a kubernetes client +Sep 7 08:23:11.718: INFO: >>> kubeConfig: /tmp/kubeconfig-1156948534 +STEP: Building a namespace api object, basename webhook +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/apimachinery/webhook.go:89 +STEP: Setting up server cert +STEP: Create role binding to let webhook read extension-apiserver-authentication +STEP: Deploying the webhook pod +STEP: Wait for the deployment to be ready +Sep 7 08:23:12.562: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set +Sep 7 08:23:14.587: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2022, time.September, 7, 8, 23, 12, 0, time.Local), LastTransitionTime:time.Date(2022, time.September, 7, 8, 23, 12, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2022, time.September, 7, 8, 
23, 12, 0, time.Local), LastTransitionTime:time.Date(2022, time.September, 7, 8, 23, 12, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-68c7bd4684\" is progressing."}}, CollisionCount:(*int32)(nil)} +STEP: Deploying the webhook service +STEP: Verifying the service has paired with the endpoint +Sep 7 08:23:17.606: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 +[It] patching/updating a validating webhook should work [Conformance] + test/e2e/framework/framework.go:652 +STEP: Creating a validating webhook configuration +STEP: Creating a configMap that does not comply to the validation webhook rules +STEP: Updating a validating webhook configuration's rules to not include the create operation +STEP: Creating a configMap that does not comply to the validation webhook rules +STEP: Patching a validating webhook configuration's rules to include the create operation +STEP: Creating a configMap that does not comply to the validation webhook rules +[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/framework/framework.go:188 +Sep 7 08:23:17.680: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "webhook-4888" for this suite. +STEP: Destroying namespace "webhook-4888-markers" for this suite. 
+[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/apimachinery/webhook.go:104 + +• [SLOW TEST:6.046 seconds] +[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] +test/e2e/apimachinery/framework.go:23 + patching/updating a validating webhook should work [Conformance] + test/e2e/framework/framework.go:652 +------------------------------ +{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance]","total":356,"completed":166,"skipped":3333,"failed":0} +SSSSSSSSSSSSS +------------------------------ +[sig-storage] EmptyDir volumes + should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/framework/framework.go:652 +[BeforeEach] [sig-storage] EmptyDir volumes + test/e2e/framework/framework.go:187 +STEP: Creating a kubernetes client +Sep 7 08:23:17.764: INFO: >>> kubeConfig: /tmp/kubeconfig-1156948534 +STEP: Building a namespace api object, basename emptydir +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[It] should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/framework/framework.go:652 +STEP: Creating a pod to test emptydir 0644 on tmpfs +Sep 7 08:23:17.855: INFO: Waiting up to 5m0s for pod "pod-15fcfe4d-2b50-4dbb-96d8-46cf717a1964" in namespace "emptydir-4456" to be "Succeeded or Failed" +Sep 7 08:23:17.882: INFO: Pod "pod-15fcfe4d-2b50-4dbb-96d8-46cf717a1964": Phase="Pending", Reason="", readiness=false. Elapsed: 26.912541ms +Sep 7 08:23:19.906: INFO: Pod "pod-15fcfe4d-2b50-4dbb-96d8-46cf717a1964": Phase="Pending", Reason="", readiness=false. Elapsed: 2.051082032s +Sep 7 08:23:21.913: INFO: Pod "pod-15fcfe4d-2b50-4dbb-96d8-46cf717a1964": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.057948651s +STEP: Saw pod success +Sep 7 08:23:21.913: INFO: Pod "pod-15fcfe4d-2b50-4dbb-96d8-46cf717a1964" satisfied condition "Succeeded or Failed" +Sep 7 08:23:21.919: INFO: Trying to get logs from node 172.31.51.96 pod pod-15fcfe4d-2b50-4dbb-96d8-46cf717a1964 container test-container: +STEP: delete the pod +Sep 7 08:23:21.947: INFO: Waiting for pod pod-15fcfe4d-2b50-4dbb-96d8-46cf717a1964 to disappear +Sep 7 08:23:21.950: INFO: Pod pod-15fcfe4d-2b50-4dbb-96d8-46cf717a1964 no longer exists +[AfterEach] [sig-storage] EmptyDir volumes + test/e2e/framework/framework.go:188 +Sep 7 08:23:21.950: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "emptydir-4456" for this suite. +•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":356,"completed":167,"skipped":3346,"failed":0} +SSS +------------------------------ +[sig-storage] Secrets + should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] + test/e2e/framework/framework.go:652 +[BeforeEach] [sig-storage] Secrets + test/e2e/framework/framework.go:187 +STEP: Creating a kubernetes client +Sep 7 08:23:21.964: INFO: >>> kubeConfig: /tmp/kubeconfig-1156948534 +STEP: Building a namespace api object, basename secrets +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[It] should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] + test/e2e/framework/framework.go:652 +STEP: Creating secret with name secret-test-7251a5d7-270e-4223-8b54-25d11dbe396d +STEP: Creating a pod to test consume secrets +Sep 7 08:23:22.058: INFO: Waiting up to 5m0s for pod "pod-secrets-37ca036b-457c-402b-a252-b436c0f55b95" in namespace "secrets-2391" to be 
"Succeeded or Failed" +Sep 7 08:23:22.078: INFO: Pod "pod-secrets-37ca036b-457c-402b-a252-b436c0f55b95": Phase="Pending", Reason="", readiness=false. Elapsed: 19.609621ms +Sep 7 08:23:24.089: INFO: Pod "pod-secrets-37ca036b-457c-402b-a252-b436c0f55b95": Phase="Pending", Reason="", readiness=false. Elapsed: 2.030509856s +Sep 7 08:23:26.092: INFO: Pod "pod-secrets-37ca036b-457c-402b-a252-b436c0f55b95": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.034266087s +STEP: Saw pod success +Sep 7 08:23:26.092: INFO: Pod "pod-secrets-37ca036b-457c-402b-a252-b436c0f55b95" satisfied condition "Succeeded or Failed" +Sep 7 08:23:26.095: INFO: Trying to get logs from node 172.31.51.96 pod pod-secrets-37ca036b-457c-402b-a252-b436c0f55b95 container secret-volume-test: +STEP: delete the pod +Sep 7 08:23:26.114: INFO: Waiting for pod pod-secrets-37ca036b-457c-402b-a252-b436c0f55b95 to disappear +Sep 7 08:23:26.117: INFO: Pod pod-secrets-37ca036b-457c-402b-a252-b436c0f55b95 no longer exists +[AfterEach] [sig-storage] Secrets + test/e2e/framework/framework.go:188 +Sep 7 08:23:26.117: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "secrets-2391" for this suite. +STEP: Destroying namespace "secret-namespace-7279" for this suite. 
+•{"msg":"PASSED [sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]","total":356,"completed":168,"skipped":3349,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-node] RuntimeClass + should schedule a Pod requesting a RuntimeClass without PodOverhead [NodeConformance] [Conformance] + test/e2e/framework/framework.go:652 +[BeforeEach] [sig-node] RuntimeClass + test/e2e/framework/framework.go:187 +STEP: Creating a kubernetes client +Sep 7 08:23:26.136: INFO: >>> kubeConfig: /tmp/kubeconfig-1156948534 +STEP: Building a namespace api object, basename runtimeclass +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[It] should schedule a Pod requesting a RuntimeClass without PodOverhead [NodeConformance] [Conformance] + test/e2e/framework/framework.go:652 +[AfterEach] [sig-node] RuntimeClass + test/e2e/framework/framework.go:188 +Sep 7 08:23:26.276: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "runtimeclass-7884" for this suite. 
+•{"msg":"PASSED [sig-node] RuntimeClass should schedule a Pod requesting a RuntimeClass without PodOverhead [NodeConformance] [Conformance]","total":356,"completed":169,"skipped":3396,"failed":0} +SSSSSSSSSSS +------------------------------ +[sig-apps] Daemon set [Serial] + should run and stop simple daemon [Conformance] + test/e2e/framework/framework.go:652 +[BeforeEach] [sig-apps] Daemon set [Serial] + test/e2e/framework/framework.go:187 +STEP: Creating a kubernetes client +Sep 7 08:23:26.285: INFO: >>> kubeConfig: /tmp/kubeconfig-1156948534 +STEP: Building a namespace api object, basename daemonsets +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[BeforeEach] [sig-apps] Daemon set [Serial] + test/e2e/apps/daemon_set.go:145 +[It] should run and stop simple daemon [Conformance] + test/e2e/framework/framework.go:652 +STEP: Creating simple DaemonSet "daemon-set" +STEP: Check that daemon pods launch on every node of the cluster. +Sep 7 08:23:26.387: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 +Sep 7 08:23:26.387: INFO: Node 172.31.51.96 is running 0 daemon pod, expected 1 +Sep 7 08:23:27.443: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 +Sep 7 08:23:27.443: INFO: Node 172.31.51.96 is running 0 daemon pod, expected 1 +Sep 7 08:23:28.398: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 1 +Sep 7 08:23:28.398: INFO: Node 172.31.51.96 is running 0 daemon pod, expected 1 +Sep 7 08:23:29.399: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 2 +Sep 7 08:23:29.399: INFO: Number of running nodes: 2, number of available pods: 2 in daemonset daemon-set +STEP: Stop a daemon pod, check that the daemon pod is revived. 
+Sep 7 08:23:29.419: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 1 +Sep 7 08:23:29.419: INFO: Node 172.31.51.96 is running 0 daemon pod, expected 1 +Sep 7 08:23:30.430: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 1 +Sep 7 08:23:30.430: INFO: Node 172.31.51.96 is running 0 daemon pod, expected 1 +Sep 7 08:23:31.482: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 1 +Sep 7 08:23:31.482: INFO: Node 172.31.51.96 is running 0 daemon pod, expected 1 +Sep 7 08:23:32.465: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 1 +Sep 7 08:23:32.465: INFO: Node 172.31.51.96 is running 0 daemon pod, expected 1 +Sep 7 08:23:33.453: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 1 +Sep 7 08:23:33.453: INFO: Node 172.31.51.96 is running 0 daemon pod, expected 1 +Sep 7 08:23:34.432: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 1 +Sep 7 08:23:34.432: INFO: Node 172.31.51.96 is running 0 daemon pod, expected 1 +Sep 7 08:23:35.449: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 2 +Sep 7 08:23:35.449: INFO: Number of running nodes: 2, number of available pods: 2 in daemonset daemon-set +[AfterEach] [sig-apps] Daemon set [Serial] + test/e2e/apps/daemon_set.go:110 +STEP: Deleting DaemonSet "daemon-set" +STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-2255, will wait for the garbage collector to delete the pods +Sep 7 08:23:35.513: INFO: Deleting DaemonSet.extensions daemon-set took: 7.098404ms +Sep 7 08:23:35.614: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.913023ms +Sep 7 08:23:39.143: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 +Sep 7 08:23:39.143: INFO: Number of running nodes: 0, number of available pods: 0 in daemonset daemon-set +Sep 7 08:23:39.147: INFO: daemonset: 
{"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"resourceVersion":"16793"},"items":null} + +Sep 7 08:23:39.150: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"16793"},"items":null} + +[AfterEach] [sig-apps] Daemon set [Serial] + test/e2e/framework/framework.go:188 +Sep 7 08:23:39.157: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "daemonsets-2255" for this suite. + +• [SLOW TEST:12.879 seconds] +[sig-apps] Daemon set [Serial] +test/e2e/apps/framework.go:23 + should run and stop simple daemon [Conformance] + test/e2e/framework/framework.go:652 +------------------------------ +{"msg":"PASSED [sig-apps] Daemon set [Serial] should run and stop simple daemon [Conformance]","total":356,"completed":170,"skipped":3407,"failed":0} +SSSSSSS +------------------------------ +[sig-api-machinery] ResourceQuota + should create a ResourceQuota and ensure its status is promptly calculated. [Conformance] + test/e2e/framework/framework.go:652 +[BeforeEach] [sig-api-machinery] ResourceQuota + test/e2e/framework/framework.go:187 +STEP: Creating a kubernetes client +Sep 7 08:23:39.164: INFO: >>> kubeConfig: /tmp/kubeconfig-1156948534 +STEP: Building a namespace api object, basename resourcequota +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[It] should create a ResourceQuota and ensure its status is promptly calculated. [Conformance] + test/e2e/framework/framework.go:652 +STEP: Counting existing ResourceQuota +STEP: Creating a ResourceQuota +STEP: Ensuring resource quota status is calculated +[AfterEach] [sig-api-machinery] ResourceQuota + test/e2e/framework/framework.go:188 +Sep 7 08:23:46.227: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "resourcequota-7761" for this suite. 
+ +• [SLOW TEST:7.076 seconds] +[sig-api-machinery] ResourceQuota +test/e2e/apimachinery/framework.go:23 + should create a ResourceQuota and ensure its status is promptly calculated. [Conformance] + test/e2e/framework/framework.go:652 +------------------------------ +{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and ensure its status is promptly calculated. [Conformance]","total":356,"completed":171,"skipped":3414,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-cli] Kubectl client Proxy server + should support proxy with --port 0 [Conformance] + test/e2e/framework/framework.go:652 +[BeforeEach] [sig-cli] Kubectl client + test/e2e/framework/framework.go:187 +STEP: Creating a kubernetes client +Sep 7 08:23:46.241: INFO: >>> kubeConfig: /tmp/kubeconfig-1156948534 +STEP: Building a namespace api object, basename kubectl +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[BeforeEach] [sig-cli] Kubectl client + test/e2e/kubectl/kubectl.go:245 +[It] should support proxy with --port 0 [Conformance] + test/e2e/framework/framework.go:652 +STEP: starting the proxy server +Sep 7 08:23:46.293: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/tmp/kubeconfig-1156948534 --namespace=kubectl-1797 proxy -p 0 --disable-filter' +STEP: curling proxy /api/ output +[AfterEach] [sig-cli] Kubectl client + test/e2e/framework/framework.go:188 +Sep 7 08:23:46.395: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "kubectl-1797" for this suite. 
+•{"msg":"PASSED [sig-cli] Kubectl client Proxy server should support proxy with --port 0 [Conformance]","total":356,"completed":172,"skipped":3477,"failed":0} +SSSSSSSSSSSSSSSSS +------------------------------ +[sig-apps] Deployment + deployment should support rollover [Conformance] + test/e2e/framework/framework.go:652 +[BeforeEach] [sig-apps] Deployment + test/e2e/framework/framework.go:187 +STEP: Creating a kubernetes client +Sep 7 08:23:46.427: INFO: >>> kubeConfig: /tmp/kubeconfig-1156948534 +STEP: Building a namespace api object, basename deployment +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[BeforeEach] [sig-apps] Deployment + test/e2e/apps/deployment.go:91 +[It] deployment should support rollover [Conformance] + test/e2e/framework/framework.go:652 +Sep 7 08:23:46.527: INFO: Pod name rollover-pod: Found 0 pods out of 1 +Sep 7 08:23:51.541: INFO: Pod name rollover-pod: Found 1 pods out of 1 +STEP: ensuring each pod is running +Sep 7 08:23:51.541: INFO: Waiting for pods owned by replica set "test-rollover-controller" to become ready +Sep 7 08:23:53.554: INFO: Creating deployment "test-rollover-deployment" +Sep 7 08:23:53.576: INFO: Make sure deployment "test-rollover-deployment" performs scaling operations +Sep 7 08:23:55.600: INFO: Check revision of new replica set for deployment "test-rollover-deployment" +Sep 7 08:23:55.606: INFO: Ensure that both replica sets have 1 created replica +Sep 7 08:23:55.609: INFO: Rollover old replica sets for deployment "test-rollover-deployment" with new image update +Sep 7 08:23:55.623: INFO: Updating deployment test-rollover-deployment +Sep 7 08:23:55.623: INFO: Wait deployment "test-rollover-deployment" to be observed by the deployment controller +Sep 7 08:23:57.634: INFO: Wait for revision update of deployment "test-rollover-deployment" to 2 +Sep 7 08:23:57.639: INFO: Make sure deployment "test-rollover-deployment" is 
complete +Sep 7 08:23:57.644: INFO: all replica sets need to contain the pod-template-hash label +Sep 7 08:23:57.644: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:time.Date(2022, time.September, 7, 8, 23, 53, 0, time.Local), LastTransitionTime:time.Date(2022, time.September, 7, 8, 23, 53, 0, time.Local), Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2022, time.September, 7, 8, 23, 55, 0, time.Local), LastTransitionTime:time.Date(2022, time.September, 7, 8, 23, 53, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-779c67f4f8\" is progressing."}}, CollisionCount:(*int32)(nil)} +Sep 7 08:23:59.661: INFO: all replica sets need to contain the pod-template-hash label +Sep 7 08:23:59.661: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:time.Date(2022, time.September, 7, 8, 23, 53, 0, time.Local), LastTransitionTime:time.Date(2022, time.September, 7, 8, 23, 53, 0, time.Local), Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2022, time.September, 7, 8, 23, 58, 0, time.Local), LastTransitionTime:time.Date(2022, time.September, 7, 8, 23, 53, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-779c67f4f8\" is progressing."}}, CollisionCount:(*int32)(nil)} +Sep 7 08:24:01.653: INFO: all replica sets need to contain the 
pod-template-hash label +Sep 7 08:24:01.653: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:time.Date(2022, time.September, 7, 8, 23, 53, 0, time.Local), LastTransitionTime:time.Date(2022, time.September, 7, 8, 23, 53, 0, time.Local), Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2022, time.September, 7, 8, 23, 58, 0, time.Local), LastTransitionTime:time.Date(2022, time.September, 7, 8, 23, 53, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-779c67f4f8\" is progressing."}}, CollisionCount:(*int32)(nil)} +Sep 7 08:24:03.661: INFO: all replica sets need to contain the pod-template-hash label +Sep 7 08:24:03.661: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:time.Date(2022, time.September, 7, 8, 23, 53, 0, time.Local), LastTransitionTime:time.Date(2022, time.September, 7, 8, 23, 53, 0, time.Local), Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2022, time.September, 7, 8, 23, 58, 0, time.Local), LastTransitionTime:time.Date(2022, time.September, 7, 8, 23, 53, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-779c67f4f8\" is progressing."}}, CollisionCount:(*int32)(nil)} +Sep 7 08:24:05.652: INFO: all replica sets need to contain the pod-template-hash label +Sep 7 08:24:05.652: INFO: deployment status: 
v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:time.Date(2022, time.September, 7, 8, 23, 53, 0, time.Local), LastTransitionTime:time.Date(2022, time.September, 7, 8, 23, 53, 0, time.Local), Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2022, time.September, 7, 8, 23, 58, 0, time.Local), LastTransitionTime:time.Date(2022, time.September, 7, 8, 23, 53, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-779c67f4f8\" is progressing."}}, CollisionCount:(*int32)(nil)} +Sep 7 08:24:07.655: INFO: all replica sets need to contain the pod-template-hash label +Sep 7 08:24:07.655: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:time.Date(2022, time.September, 7, 8, 23, 53, 0, time.Local), LastTransitionTime:time.Date(2022, time.September, 7, 8, 23, 53, 0, time.Local), Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2022, time.September, 7, 8, 23, 58, 0, time.Local), LastTransitionTime:time.Date(2022, time.September, 7, 8, 23, 53, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-779c67f4f8\" is progressing."}}, CollisionCount:(*int32)(nil)} +Sep 7 08:24:09.665: INFO: +Sep 7 08:24:09.665: INFO: Ensure that both old replica sets have no replicas +[AfterEach] [sig-apps] Deployment + test/e2e/apps/deployment.go:84 +Sep 7 08:24:09.674: INFO: Deployment 
"test-rollover-deployment": +&Deployment{ObjectMeta:{test-rollover-deployment deployment-4317 35213396-bc9d-4bff-a1b4-5a3bec7f12c5 16976 2 2022-09-07 08:23:53 +0000 UTC map[name:rollover-pod] map[deployment.kubernetes.io/revision:2] [] [] [{e2e.test Update apps/v1 2022-09-07 08:23:55 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:minReadySeconds":{},"f:progressDeadlineSeconds":{},"f:replicas":{},"f:revisionHistoryLimit":{},"f:selector":{},"f:strategy":{"f:rollingUpdate":{".":{},"f:maxSurge":{},"f:maxUnavailable":{}},"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} } {kube-controller-manager Update apps/v1 2022-09-07 08:24:08 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}}},"f:status":{"f:availableReplicas":{},"f:conditions":{".":{},"k:{\"type\":\"Available\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Progressing\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{},"f:updatedReplicas":{}}} status}]},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod] map[] [] [] []} {[] [] [{agnhost k8s.gcr.io/e2e-test-images/agnhost:2.39 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent 
SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc000a95408 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] nil}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:0,MaxSurge:1,},},MinReadySeconds:10,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2022-09-07 08:23:53 +0000 UTC,LastTransitionTime:2022-09-07 08:23:53 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-rollover-deployment-779c67f4f8" has successfully progressed.,LastUpdateTime:2022-09-07 08:24:08 +0000 UTC,LastTransitionTime:2022-09-07 08:23:53 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},} + +Sep 7 08:24:09.684: INFO: New ReplicaSet "test-rollover-deployment-779c67f4f8" of Deployment "test-rollover-deployment": +&ReplicaSet{ObjectMeta:{test-rollover-deployment-779c67f4f8 deployment-4317 98450dcb-8e16-4c95-802b-eb36aa2035bc 16966 2 2022-09-07 08:23:55 +0000 UTC map[name:rollover-pod pod-template-hash:779c67f4f8] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment test-rollover-deployment 35213396-bc9d-4bff-a1b4-5a3bec7f12c5 0xc0033484d7 0xc0033484d8}] 
[] [{kube-controller-manager Update apps/v1 2022-09-07 08:23:55 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"35213396-bc9d-4bff-a1b4-5a3bec7f12c5\"}":{}}},"f:spec":{"f:minReadySeconds":{},"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} } {kube-controller-manager Update apps/v1 2022-09-07 08:24:08 +0000 UTC FieldsV1 {"f:status":{"f:availableReplicas":{},"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{}}} status}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 779c67f4f8,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod pod-template-hash:779c67f4f8] map[] [] [] []} {[] [] [{agnhost k8s.gcr.io/e2e-test-images/agnhost:2.39 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc003348588 ClusterFirst map[] false false false 
&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] nil}},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:2,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} +Sep 7 08:24:09.684: INFO: All old ReplicaSets of Deployment "test-rollover-deployment": +Sep 7 08:24:09.684: INFO: &ReplicaSet{ObjectMeta:{test-rollover-controller deployment-4317 f0083683-b5db-4135-b282-5c101b0a9883 16975 2 2022-09-07 08:23:46 +0000 UTC map[name:rollover-pod pod:httpd] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2] [{apps/v1 Deployment test-rollover-deployment 35213396-bc9d-4bff-a1b4-5a3bec7f12c5 0xc003348397 0xc003348398}] [] [{e2e.test Update apps/v1 2022-09-07 08:23:46 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod":{}}},"f:spec":{"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} } {kube-controller-manager Update apps/v1 2022-09-07 08:24:08 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"35213396-bc9d-4bff-a1b4-5a3bec7f12c5\"}":{}}},"f:spec":{"f:replicas":{}}} } {kube-controller-manager Update apps/v1 2022-09-07 08:24:08 +0000 UTC FieldsV1 {"f:status":{"f:observedGeneration":{},"f:replicas":{}}} 
status}]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod pod:httpd] map[] [] [] []} {[] [] [{httpd k8s.gcr.io/e2e-test-images/httpd:2.4.38-2 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc003348468 ClusterFirst map[] false false false PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] nil}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} +Sep 7 08:24:09.684: INFO: &ReplicaSet{ObjectMeta:{test-rollover-deployment-87f8f6dcf deployment-4317 6ea6bfa3-c088-415f-bcf8-5066c664e656 16935 2 2022-09-07 08:23:53 +0000 UTC map[name:rollover-pod pod-template-hash:87f8f6dcf] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-rollover-deployment 35213396-bc9d-4bff-a1b4-5a3bec7f12c5 0xc0033485f0 0xc0033485f1}] [] [{kube-controller-manager Update apps/v1 2022-09-07 08:23:53 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"35213396-bc9d-4bff-a1b4-5a3bec7f12c5\"}":{}}},"f:spec":{"f:minReadySeconds":{},"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"redis-slave\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} } {kube-controller-manager Update apps/v1 2022-09-07 08:23:55 +0000 UTC FieldsV1 {"f:status":{"f:observedGeneration":{},"f:replicas":{}}} status}]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 87f8f6dcf,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod pod-template-hash:87f8f6dcf] map[] [] [] []} {[] [] [{redis-slave gcr.io/google_samples/gb-redisslave:nonexistent [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc0033486a8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] 
nil}},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} +Sep 7 08:24:09.688: INFO: Pod "test-rollover-deployment-779c67f4f8-cstzf" is available: +&Pod{ObjectMeta:{test-rollover-deployment-779c67f4f8-cstzf test-rollover-deployment-779c67f4f8- deployment-4317 caefd0f9-ba5a-467e-9a02-2d5592f0c323 16950 0 2022-09-07 08:23:55 +0000 UTC map[name:rollover-pod pod-template-hash:779c67f4f8] map[] [{apps/v1 ReplicaSet test-rollover-deployment-779c67f4f8 98450dcb-8e16-4c95-802b-eb36aa2035bc 0xc000a957c7 0xc000a957c8}] [] [{kube-controller-manager Update v1 2022-09-07 08:23:55 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"98450dcb-8e16-4c95-802b-eb36aa2035bc\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2022-09-07 08:23:58 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"172.20.75.11\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} 
status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-t54ww,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:k8s.gcr.io/e2e-test-images/agnhost:2.39,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-t54ww,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:
nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:172.31.51.96,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-09-07 08:23:55 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-09-07 08:23:58 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-09-07 08:23:58 +0000 
UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-09-07 08:23:55 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.31.51.96,PodIP:172.20.75.11,StartTime:2022-09-07 08:23:55 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2022-09-07 08:23:57 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/agnhost:2.39,ImageID:k8s.gcr.io/e2e-test-images/agnhost@sha256:7e8bdd271312fd25fc5ff5a8f04727be84044eb3d7d8d03611972a6752e2e11e,ContainerID:containerd://6c956f84e9e7b2431b13a1ec973a66c4d39ccfeb2b2f82279c4d1eeb056009d7,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:172.20.75.11,},},EphemeralContainerStatuses:[]ContainerStatus{},},} +[AfterEach] [sig-apps] Deployment + test/e2e/framework/framework.go:188 +Sep 7 08:24:09.688: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "deployment-4317" for this suite. 
+ +• [SLOW TEST:23.275 seconds] +[sig-apps] Deployment +test/e2e/apps/framework.go:23 + deployment should support rollover [Conformance] + test/e2e/framework/framework.go:652 +------------------------------ +{"msg":"PASSED [sig-apps] Deployment deployment should support rollover [Conformance]","total":356,"completed":173,"skipped":3494,"failed":0} +SSS +------------------------------ +[sig-scheduling] SchedulerPreemption [Serial] PriorityClass endpoints + verify PriorityClass endpoints can be operated with different HTTP methods [Conformance] + test/e2e/framework/framework.go:652 +[BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] + test/e2e/framework/framework.go:187 +STEP: Creating a kubernetes client +Sep 7 08:24:09.701: INFO: >>> kubeConfig: /tmp/kubeconfig-1156948534 +STEP: Building a namespace api object, basename sched-preemption +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] + test/e2e/scheduling/preemption.go:92 +Sep 7 08:24:09.775: INFO: Waiting up to 1m0s for all nodes to be ready +Sep 7 08:25:09.803: INFO: Waiting for terminating namespaces to be deleted... +[BeforeEach] PriorityClass endpoints + test/e2e/framework/framework.go:187 +STEP: Creating a kubernetes client +Sep 7 08:25:09.805: INFO: >>> kubeConfig: /tmp/kubeconfig-1156948534 +STEP: Building a namespace api object, basename sched-preemption-path +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[BeforeEach] PriorityClass endpoints + test/e2e/scheduling/preemption.go:690 +[It] verify PriorityClass endpoints can be operated with different HTTP methods [Conformance] + test/e2e/framework/framework.go:652 +Sep 7 08:25:09.869: INFO: PriorityClass.scheduling.k8s.io "p1" is invalid: Value: Forbidden: may not be changed in an update. 
+Sep 7 08:25:09.872: INFO: PriorityClass.scheduling.k8s.io "p2" is invalid: Value: Forbidden: may not be changed in an update. +[AfterEach] PriorityClass endpoints + test/e2e/framework/framework.go:188 +Sep 7 08:25:09.885: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "sched-preemption-path-6453" for this suite. +[AfterEach] PriorityClass endpoints + test/e2e/scheduling/preemption.go:706 +[AfterEach] [sig-scheduling] SchedulerPreemption [Serial] + test/e2e/framework/framework.go:188 +Sep 7 08:25:09.899: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "sched-preemption-5661" for this suite. +[AfterEach] [sig-scheduling] SchedulerPreemption [Serial] + test/e2e/scheduling/preemption.go:80 + +• [SLOW TEST:60.236 seconds] +[sig-scheduling] SchedulerPreemption [Serial] +test/e2e/scheduling/framework.go:40 + PriorityClass endpoints + test/e2e/scheduling/preemption.go:683 + verify PriorityClass endpoints can be operated with different HTTP methods [Conformance] + test/e2e/framework/framework.go:652 +------------------------------ +{"msg":"PASSED [sig-scheduling] SchedulerPreemption [Serial] PriorityClass endpoints verify PriorityClass endpoints can be operated with different HTTP methods [Conformance]","total":356,"completed":174,"skipped":3497,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-apps] DisruptionController Listing PodDisruptionBudgets for all namespaces + should list and delete a collection of PodDisruptionBudgets [Conformance] + test/e2e/framework/framework.go:652 +[BeforeEach] [sig-apps] DisruptionController + test/e2e/framework/framework.go:187 +STEP: Creating a kubernetes client +Sep 7 08:25:09.938: INFO: >>> kubeConfig: /tmp/kubeconfig-1156948534 +STEP: Building a namespace api object, basename disruption +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 
+[BeforeEach] [sig-apps] DisruptionController + test/e2e/apps/disruption.go:71 +[BeforeEach] Listing PodDisruptionBudgets for all namespaces + test/e2e/framework/framework.go:187 +STEP: Creating a kubernetes client +Sep 7 08:25:09.964: INFO: >>> kubeConfig: /tmp/kubeconfig-1156948534 +STEP: Building a namespace api object, basename disruption-2 +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[It] should list and delete a collection of PodDisruptionBudgets [Conformance] + test/e2e/framework/framework.go:652 +STEP: Waiting for the pdb to be processed +STEP: Waiting for the pdb to be processed +STEP: Waiting for the pdb to be processed +STEP: listing a collection of PDBs across all namespaces +STEP: listing a collection of PDBs in namespace disruption-2396 +STEP: deleting a collection of PDBs +STEP: Waiting for the PDB collection to be deleted +[AfterEach] Listing PodDisruptionBudgets for all namespaces + test/e2e/framework/framework.go:188 +Sep 7 08:25:16.080: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "disruption-2-4888" for this suite. +[AfterEach] [sig-apps] DisruptionController + test/e2e/framework/framework.go:188 +Sep 7 08:25:16.087: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "disruption-2396" for this suite. 
+ +• [SLOW TEST:6.156 seconds] +[sig-apps] DisruptionController +test/e2e/apps/framework.go:23 + Listing PodDisruptionBudgets for all namespaces + test/e2e/apps/disruption.go:77 + should list and delete a collection of PodDisruptionBudgets [Conformance] + test/e2e/framework/framework.go:652 +------------------------------ +{"msg":"PASSED [sig-apps] DisruptionController Listing PodDisruptionBudgets for all namespaces should list and delete a collection of PodDisruptionBudgets [Conformance]","total":356,"completed":175,"skipped":3526,"failed":0} +[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + should unconditionally reject operations on fail closed webhook [Conformance] + test/e2e/framework/framework.go:652 +[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/framework/framework.go:187 +STEP: Creating a kubernetes client +Sep 7 08:25:16.094: INFO: >>> kubeConfig: /tmp/kubeconfig-1156948534 +STEP: Building a namespace api object, basename webhook +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/apimachinery/webhook.go:89 +STEP: Setting up server cert +STEP: Create role binding to let webhook read extension-apiserver-authentication +STEP: Deploying the webhook pod +STEP: Wait for the deployment to be ready +Sep 7 08:25:16.582: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set +STEP: Deploying the webhook service +STEP: Verifying the service has paired with the endpoint +Sep 7 08:25:19.608: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 +[It] should unconditionally reject operations on fail closed webhook [Conformance] + test/e2e/framework/framework.go:652 +STEP: Registering a webhook that server cannot talk to, with fail closed policy, via the AdmissionRegistration API +STEP: create 
a namespace for the webhook +STEP: create a configmap should be unconditionally rejected by the webhook +[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/framework/framework.go:188 +Sep 7 08:25:19.696: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "webhook-1900" for this suite. +STEP: Destroying namespace "webhook-1900-markers" for this suite. +[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/apimachinery/webhook.go:104 +•{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","total":356,"completed":176,"skipped":3526,"failed":0} +SSSSS +------------------------------ +[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] + should include custom resource definition resources in discovery documents [Conformance] + test/e2e/framework/framework.go:652 +[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] + test/e2e/framework/framework.go:187 +STEP: Creating a kubernetes client +Sep 7 08:25:19.838: INFO: >>> kubeConfig: /tmp/kubeconfig-1156948534 +STEP: Building a namespace api object, basename custom-resource-definition +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[It] should include custom resource definition resources in discovery documents [Conformance] + test/e2e/framework/framework.go:652 +STEP: fetching the /apis discovery document +STEP: finding the apiextensions.k8s.io API group in the /apis discovery document +STEP: finding the apiextensions.k8s.io/v1 API group/version in the /apis discovery document +STEP: fetching the /apis/apiextensions.k8s.io discovery document +STEP: finding the apiextensions.k8s.io/v1 API group/version in the /apis/apiextensions.k8s.io discovery document 
+STEP: fetching the /apis/apiextensions.k8s.io/v1 discovery document +STEP: finding customresourcedefinitions resources in the /apis/apiextensions.k8s.io/v1 discovery document +[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] + test/e2e/framework/framework.go:188 +Sep 7 08:25:19.923: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "custom-resource-definition-9040" for this suite. +•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] should include custom resource definition resources in discovery documents [Conformance]","total":356,"completed":177,"skipped":3531,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-node] Container Runtime blackbox test on terminated container + should report termination message if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance] + test/e2e/framework/framework.go:652 +[BeforeEach] [sig-node] Container Runtime + test/e2e/framework/framework.go:187 +STEP: Creating a kubernetes client +Sep 7 08:25:19.942: INFO: >>> kubeConfig: /tmp/kubeconfig-1156948534 +STEP: Building a namespace api object, basename container-runtime +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[It] should report termination message if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance] + test/e2e/framework/framework.go:652 +STEP: create the container +STEP: wait for the container to reach Succeeded +STEP: get the container status +STEP: the container should be terminated +STEP: the termination message should be set +Sep 7 08:25:24.056: INFO: Expected: &{DONE} to match Container's Termination Message: DONE -- +STEP: delete the container +[AfterEach] [sig-node] Container Runtime + test/e2e/framework/framework.go:188 
+Sep 7 08:25:24.067: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "container-runtime-8552" for this suite. +•{"msg":"PASSED [sig-node] Container Runtime blackbox test on terminated container should report termination message if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]","total":356,"completed":178,"skipped":3556,"failed":0} +SSSSSSSSS +------------------------------ +[sig-scheduling] SchedulerPredicates [Serial] + validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance] + test/e2e/framework/framework.go:652 +[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] + test/e2e/framework/framework.go:187 +STEP: Creating a kubernetes client +Sep 7 08:25:24.077: INFO: >>> kubeConfig: /tmp/kubeconfig-1156948534 +STEP: Building a namespace api object, basename sched-pred +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] + test/e2e/scheduling/predicates.go:92 +Sep 7 08:25:24.127: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready +Sep 7 08:25:24.140: INFO: Waiting for terminating namespaces to be deleted... 
+Sep 7 08:25:24.143: INFO: +Logging pods the apiserver thinks is on node 172.31.51.96 before test +Sep 7 08:25:24.148: INFO: calico-node-g8tpr from kube-system started at 2022-09-07 07:27:16 +0000 UTC (1 container statuses recorded) +Sep 7 08:25:24.148: INFO: Container calico-node ready: true, restart count 0 +Sep 7 08:25:24.148: INFO: node-local-dns-8rwpt from kube-system started at 2022-09-07 07:27:42 +0000 UTC (1 container statuses recorded) +Sep 7 08:25:24.148: INFO: Container node-cache ready: true, restart count 0 +Sep 7 08:25:24.148: INFO: sonobuoy from sonobuoy started at 2022-09-07 07:39:19 +0000 UTC (1 container statuses recorded) +Sep 7 08:25:24.148: INFO: Container kube-sonobuoy ready: true, restart count 0 +Sep 7 08:25:24.148: INFO: sonobuoy-e2e-job-2f855b96e04a42ee from sonobuoy started at 2022-09-07 07:39:27 +0000 UTC (2 container statuses recorded) +Sep 7 08:25:24.148: INFO: Container e2e ready: true, restart count 0 +Sep 7 08:25:24.148: INFO: Container sonobuoy-worker ready: true, restart count 0 +Sep 7 08:25:24.148: INFO: sonobuoy-systemd-logs-daemon-set-1241b5e1ea9447a9-kstch from sonobuoy started at 2022-09-07 07:39:27 +0000 UTC (2 container statuses recorded) +Sep 7 08:25:24.148: INFO: Container sonobuoy-worker ready: true, restart count 0 +Sep 7 08:25:24.148: INFO: Container systemd-logs ready: true, restart count 0 +Sep 7 08:25:24.148: INFO: +Logging pods the apiserver thinks is on node 172.31.51.97 before test +Sep 7 08:25:24.154: INFO: calico-kube-controllers-5c8bb696bb-tvl2c from kube-system started at 2022-09-07 07:27:16 +0000 UTC (1 container statuses recorded) +Sep 7 08:25:24.154: INFO: Container calico-kube-controllers ready: true, restart count 0 +Sep 7 08:25:24.154: INFO: calico-node-d87kb from kube-system started at 2022-09-07 07:27:16 +0000 UTC (1 container statuses recorded) +Sep 7 08:25:24.154: INFO: Container calico-node ready: true, restart count 0 +Sep 7 08:25:24.154: INFO: coredns-84b58f6b4-xcj7z from kube-system started at 
2022-09-07 07:27:41 +0000 UTC (1 container statuses recorded) +Sep 7 08:25:24.154: INFO: Container coredns ready: true, restart count 0 +Sep 7 08:25:24.154: INFO: dashboard-metrics-scraper-864d79d497-bchwd from kube-system started at 2022-09-07 07:27:46 +0000 UTC (1 container statuses recorded) +Sep 7 08:25:24.154: INFO: Container dashboard-metrics-scraper ready: true, restart count 0 +Sep 7 08:25:24.154: INFO: kubernetes-dashboard-5fc74cf5c6-bsp7p from kube-system started at 2022-09-07 07:27:46 +0000 UTC (1 container statuses recorded) +Sep 7 08:25:24.154: INFO: Container kubernetes-dashboard ready: true, restart count 0 +Sep 7 08:25:24.154: INFO: metrics-server-69797698d4-hndhm from kube-system started at 2022-09-07 07:27:43 +0000 UTC (1 container statuses recorded) +Sep 7 08:25:24.154: INFO: Container metrics-server ready: true, restart count 0 +Sep 7 08:25:24.154: INFO: node-local-dns-28994 from kube-system started at 2022-09-07 07:27:42 +0000 UTC (1 container statuses recorded) +Sep 7 08:25:24.154: INFO: Container node-cache ready: true, restart count 0 +Sep 7 08:25:24.154: INFO: sonobuoy-systemd-logs-daemon-set-1241b5e1ea9447a9-svvzn from sonobuoy started at 2022-09-07 07:39:27 +0000 UTC (2 container statuses recorded) +Sep 7 08:25:24.154: INFO: Container sonobuoy-worker ready: true, restart count 0 +Sep 7 08:25:24.154: INFO: Container systemd-logs ready: true, restart count 0 +[It] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance] + test/e2e/framework/framework.go:652 +STEP: Trying to launch a pod without a label to get a node which can launch it. +STEP: Explicitly delete pod here to free the resource it takes. +STEP: Trying to apply a random label on the found node. 
+STEP: verifying the node has the label kubernetes.io/e2e-ca41dfa3-49d1-40b3-a0f5-18c5ea96e3a9 95 +STEP: Trying to create a pod(pod4) with hostport 54322 and hostIP 0.0.0.0(empty string here) and expect scheduled +STEP: Trying to create another pod(pod5) with hostport 54322 but hostIP 172.31.51.96 on the node which pod4 resides and expect not scheduled +STEP: removing the label kubernetes.io/e2e-ca41dfa3-49d1-40b3-a0f5-18c5ea96e3a9 off the node 172.31.51.96 +STEP: verifying the node doesn't have the label kubernetes.io/e2e-ca41dfa3-49d1-40b3-a0f5-18c5ea96e3a9 +[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] + test/e2e/framework/framework.go:188 +Sep 7 08:30:32.330: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "sched-pred-3436" for this suite. +[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] + test/e2e/scheduling/predicates.go:83 + +• [SLOW TEST:308.260 seconds] +[sig-scheduling] SchedulerPredicates [Serial] +test/e2e/scheduling/framework.go:40 + validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance] + test/e2e/framework/framework.go:652 +------------------------------ +{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]","total":356,"completed":179,"skipped":3565,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-network] Services + should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance] + test/e2e/framework/framework.go:652 +[BeforeEach] [sig-network] Services + test/e2e/framework/framework.go:187 +STEP: Creating a kubernetes client +Sep 7 08:30:32.338: INFO: >>> kubeConfig: /tmp/kubeconfig-1156948534 +STEP: Building a namespace api object, basename services +STEP: Waiting for a default 
service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[BeforeEach] [sig-network] Services + test/e2e/network/service.go:758 +[It] should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance] + test/e2e/framework/framework.go:652 +STEP: creating service in namespace services-207 +STEP: creating service affinity-clusterip-transition in namespace services-207 +STEP: creating replication controller affinity-clusterip-transition in namespace services-207 +I0907 08:30:32.432763 19 runners.go:193] Created replication controller with name: affinity-clusterip-transition, namespace: services-207, replica count: 3 +I0907 08:30:35.490319 19 runners.go:193] affinity-clusterip-transition Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady +Sep 7 08:30:35.507: INFO: Creating new exec pod +Sep 7 08:30:38.544: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1156948534 --namespace=services-207 exec execpod-affinity79f7z -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip-transition 80' +Sep 7 08:30:38.764: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 affinity-clusterip-transition 80\nConnection to affinity-clusterip-transition 80 port [tcp/http] succeeded!\n" +Sep 7 08:30:38.764: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request" +Sep 7 08:30:38.764: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1156948534 --namespace=services-207 exec execpod-affinity79f7z -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.68.82.105 80' +Sep 7 08:30:38.972: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 10.68.82.105 80\nConnection to 10.68.82.105 80 port [tcp/http] succeeded!\n" +Sep 7 08:30:38.972: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: 
close\r\n\r\n400 Bad Request" +Sep 7 08:30:38.986: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1156948534 --namespace=services-207 exec execpod-affinity79f7z -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://10.68.82.105:80/ ; done' +Sep 7 08:30:39.495: INFO: stderr: "+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.68.82.105:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.68.82.105:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.68.82.105:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.68.82.105:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.68.82.105:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.68.82.105:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.68.82.105:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.68.82.105:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.68.82.105:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.68.82.105:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.68.82.105:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.68.82.105:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.68.82.105:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.68.82.105:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.68.82.105:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.68.82.105:80/\n" +Sep 7 08:30:39.495: INFO: stdout: 
"\naffinity-clusterip-transition-5ltxz\naffinity-clusterip-transition-bmghp\naffinity-clusterip-transition-2n5hc\naffinity-clusterip-transition-5ltxz\naffinity-clusterip-transition-bmghp\naffinity-clusterip-transition-2n5hc\naffinity-clusterip-transition-5ltxz\naffinity-clusterip-transition-bmghp\naffinity-clusterip-transition-2n5hc\naffinity-clusterip-transition-5ltxz\naffinity-clusterip-transition-bmghp\naffinity-clusterip-transition-2n5hc\naffinity-clusterip-transition-5ltxz\naffinity-clusterip-transition-bmghp\naffinity-clusterip-transition-2n5hc\naffinity-clusterip-transition-5ltxz" +Sep 7 08:30:39.495: INFO: Received response from host: affinity-clusterip-transition-5ltxz +Sep 7 08:30:39.495: INFO: Received response from host: affinity-clusterip-transition-bmghp +Sep 7 08:30:39.495: INFO: Received response from host: affinity-clusterip-transition-2n5hc +Sep 7 08:30:39.495: INFO: Received response from host: affinity-clusterip-transition-5ltxz +Sep 7 08:30:39.495: INFO: Received response from host: affinity-clusterip-transition-bmghp +Sep 7 08:30:39.495: INFO: Received response from host: affinity-clusterip-transition-2n5hc +Sep 7 08:30:39.495: INFO: Received response from host: affinity-clusterip-transition-5ltxz +Sep 7 08:30:39.495: INFO: Received response from host: affinity-clusterip-transition-bmghp +Sep 7 08:30:39.495: INFO: Received response from host: affinity-clusterip-transition-2n5hc +Sep 7 08:30:39.495: INFO: Received response from host: affinity-clusterip-transition-5ltxz +Sep 7 08:30:39.495: INFO: Received response from host: affinity-clusterip-transition-bmghp +Sep 7 08:30:39.495: INFO: Received response from host: affinity-clusterip-transition-2n5hc +Sep 7 08:30:39.495: INFO: Received response from host: affinity-clusterip-transition-5ltxz +Sep 7 08:30:39.495: INFO: Received response from host: affinity-clusterip-transition-bmghp +Sep 7 08:30:39.495: INFO: Received response from host: affinity-clusterip-transition-2n5hc +Sep 7 08:30:39.495: 
INFO: Received response from host: affinity-clusterip-transition-5ltxz +Sep 7 08:30:39.513: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1156948534 --namespace=services-207 exec execpod-affinity79f7z -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://10.68.82.105:80/ ; done' +Sep 7 08:30:39.965: INFO: stderr: "+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.68.82.105:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.68.82.105:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.68.82.105:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.68.82.105:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.68.82.105:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.68.82.105:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.68.82.105:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.68.82.105:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.68.82.105:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.68.82.105:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.68.82.105:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.68.82.105:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.68.82.105:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.68.82.105:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.68.82.105:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.68.82.105:80/\n" +Sep 7 08:30:39.966: INFO: stdout: 
"\naffinity-clusterip-transition-2n5hc\naffinity-clusterip-transition-2n5hc\naffinity-clusterip-transition-2n5hc\naffinity-clusterip-transition-2n5hc\naffinity-clusterip-transition-2n5hc\naffinity-clusterip-transition-2n5hc\naffinity-clusterip-transition-2n5hc\naffinity-clusterip-transition-2n5hc\naffinity-clusterip-transition-2n5hc\naffinity-clusterip-transition-2n5hc\naffinity-clusterip-transition-2n5hc\naffinity-clusterip-transition-2n5hc\naffinity-clusterip-transition-2n5hc\naffinity-clusterip-transition-2n5hc\naffinity-clusterip-transition-2n5hc\naffinity-clusterip-transition-2n5hc" +Sep 7 08:30:39.966: INFO: Received response from host: affinity-clusterip-transition-2n5hc +Sep 7 08:30:39.966: INFO: Received response from host: affinity-clusterip-transition-2n5hc +Sep 7 08:30:39.966: INFO: Received response from host: affinity-clusterip-transition-2n5hc +Sep 7 08:30:39.966: INFO: Received response from host: affinity-clusterip-transition-2n5hc +Sep 7 08:30:39.966: INFO: Received response from host: affinity-clusterip-transition-2n5hc +Sep 7 08:30:39.966: INFO: Received response from host: affinity-clusterip-transition-2n5hc +Sep 7 08:30:39.966: INFO: Received response from host: affinity-clusterip-transition-2n5hc +Sep 7 08:30:39.966: INFO: Received response from host: affinity-clusterip-transition-2n5hc +Sep 7 08:30:39.966: INFO: Received response from host: affinity-clusterip-transition-2n5hc +Sep 7 08:30:39.966: INFO: Received response from host: affinity-clusterip-transition-2n5hc +Sep 7 08:30:39.966: INFO: Received response from host: affinity-clusterip-transition-2n5hc +Sep 7 08:30:39.966: INFO: Received response from host: affinity-clusterip-transition-2n5hc +Sep 7 08:30:39.966: INFO: Received response from host: affinity-clusterip-transition-2n5hc +Sep 7 08:30:39.966: INFO: Received response from host: affinity-clusterip-transition-2n5hc +Sep 7 08:30:39.966: INFO: Received response from host: affinity-clusterip-transition-2n5hc +Sep 7 08:30:39.966: 
INFO: Received response from host: affinity-clusterip-transition-2n5hc +Sep 7 08:30:39.966: INFO: Cleaning up the exec pod +STEP: deleting ReplicationController affinity-clusterip-transition in namespace services-207, will wait for the garbage collector to delete the pods +Sep 7 08:30:40.038: INFO: Deleting ReplicationController affinity-clusterip-transition took: 5.479838ms +Sep 7 08:30:40.141: INFO: Terminating ReplicationController affinity-clusterip-transition pods took: 103.041459ms +[AfterEach] [sig-network] Services + test/e2e/framework/framework.go:188 +Sep 7 08:30:43.436: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "services-207" for this suite. +[AfterEach] [sig-network] Services + test/e2e/network/service.go:762 + +• [SLOW TEST:11.108 seconds] +[sig-network] Services +test/e2e/network/common/framework.go:23 + should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance] + test/e2e/framework/framework.go:652 +------------------------------ +{"msg":"PASSED [sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]","total":356,"completed":180,"skipped":3636,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-network] Services + should provide secure master service [Conformance] + test/e2e/framework/framework.go:652 +[BeforeEach] [sig-network] Services + test/e2e/framework/framework.go:187 +STEP: Creating a kubernetes client +Sep 7 08:30:43.447: INFO: >>> kubeConfig: /tmp/kubeconfig-1156948534 +STEP: Building a namespace api object, basename services +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[BeforeEach] [sig-network] Services + test/e2e/network/service.go:758 +[It] should provide secure master service [Conformance] + test/e2e/framework/framework.go:652 +[AfterEach] 
[sig-network] Services + test/e2e/framework/framework.go:188 +Sep 7 08:30:43.514: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "services-5218" for this suite. +[AfterEach] [sig-network] Services + test/e2e/network/service.go:762 +•{"msg":"PASSED [sig-network] Services should provide secure master service [Conformance]","total":356,"completed":181,"skipped":3682,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-node] Downward API + should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] + test/e2e/framework/framework.go:652 +[BeforeEach] [sig-node] Downward API + test/e2e/framework/framework.go:187 +STEP: Creating a kubernetes client +Sep 7 08:30:43.520: INFO: >>> kubeConfig: /tmp/kubeconfig-1156948534 +STEP: Building a namespace api object, basename downward-api +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[It] should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] + test/e2e/framework/framework.go:652 +STEP: Creating a pod to test downward api env vars +Sep 7 08:30:43.554: INFO: Waiting up to 5m0s for pod "downward-api-12ae2875-7abf-4bf8-9211-d55459e1381e" in namespace "downward-api-5015" to be "Succeeded or Failed" +Sep 7 08:30:43.563: INFO: Pod "downward-api-12ae2875-7abf-4bf8-9211-d55459e1381e": Phase="Pending", Reason="", readiness=false. Elapsed: 9.513753ms +Sep 7 08:30:45.572: INFO: Pod "downward-api-12ae2875-7abf-4bf8-9211-d55459e1381e": Phase="Running", Reason="", readiness=true. Elapsed: 2.018056433s +Sep 7 08:30:47.584: INFO: Pod "downward-api-12ae2875-7abf-4bf8-9211-d55459e1381e": Phase="Running", Reason="", readiness=false. Elapsed: 4.030657885s +Sep 7 08:30:49.602: INFO: Pod "downward-api-12ae2875-7abf-4bf8-9211-d55459e1381e": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.048719149s +STEP: Saw pod success +Sep 7 08:30:49.602: INFO: Pod "downward-api-12ae2875-7abf-4bf8-9211-d55459e1381e" satisfied condition "Succeeded or Failed" +Sep 7 08:30:49.607: INFO: Trying to get logs from node 172.31.51.96 pod downward-api-12ae2875-7abf-4bf8-9211-d55459e1381e container dapi-container: +STEP: delete the pod +Sep 7 08:30:49.650: INFO: Waiting for pod downward-api-12ae2875-7abf-4bf8-9211-d55459e1381e to disappear +Sep 7 08:30:49.653: INFO: Pod downward-api-12ae2875-7abf-4bf8-9211-d55459e1381e no longer exists +[AfterEach] [sig-node] Downward API + test/e2e/framework/framework.go:188 +Sep 7 08:30:49.653: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "downward-api-5015" for this suite. + +• [SLOW TEST:6.142 seconds] +[sig-node] Downward API +test/e2e/common/node/framework.go:23 + should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] + test/e2e/framework/framework.go:652 +------------------------------ +{"msg":"PASSED [sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]","total":356,"completed":182,"skipped":3715,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-network] EndpointSlice + should have Endpoints and EndpointSlices pointing to API Server [Conformance] + test/e2e/framework/framework.go:652 +[BeforeEach] [sig-network] EndpointSlice + test/e2e/framework/framework.go:187 +STEP: Creating a kubernetes client +Sep 7 08:30:49.663: INFO: >>> kubeConfig: /tmp/kubeconfig-1156948534 +STEP: Building a namespace api object, basename endpointslice +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[BeforeEach] [sig-network] EndpointSlice + test/e2e/network/endpointslice.go:51 +[It] should have Endpoints and EndpointSlices pointing to API Server [Conformance] + 
test/e2e/framework/framework.go:652 +Sep 7 08:30:49.712: INFO: Endpoints addresses: [172.31.51.96] , ports: [6443] +Sep 7 08:30:49.712: INFO: EndpointSlices addresses: [172.31.51.96] , ports: [6443] +[AfterEach] [sig-network] EndpointSlice + test/e2e/framework/framework.go:188 +Sep 7 08:30:49.712: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "endpointslice-6044" for this suite. +•{"msg":"PASSED [sig-network] EndpointSlice should have Endpoints and EndpointSlices pointing to API Server [Conformance]","total":356,"completed":183,"skipped":3754,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-storage] EmptyDir volumes + volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/framework/framework.go:652 +[BeforeEach] [sig-storage] EmptyDir volumes + test/e2e/framework/framework.go:187 +STEP: Creating a kubernetes client +Sep 7 08:30:49.722: INFO: >>> kubeConfig: /tmp/kubeconfig-1156948534 +STEP: Building a namespace api object, basename emptydir +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[It] volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/framework/framework.go:652 +STEP: Creating a pod to test emptydir volume type on tmpfs +Sep 7 08:30:49.763: INFO: Waiting up to 5m0s for pod "pod-f2f230d2-a68c-446e-aea7-2317aa20b8c2" in namespace "emptydir-9615" to be "Succeeded or Failed" +Sep 7 08:30:49.784: INFO: Pod "pod-f2f230d2-a68c-446e-aea7-2317aa20b8c2": Phase="Pending", Reason="", readiness=false. Elapsed: 20.891298ms +Sep 7 08:30:51.789: INFO: Pod "pod-f2f230d2-a68c-446e-aea7-2317aa20b8c2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.025540304s +Sep 7 08:30:53.798: INFO: Pod "pod-f2f230d2-a68c-446e-aea7-2317aa20b8c2": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.034220064s +STEP: Saw pod success +Sep 7 08:30:53.798: INFO: Pod "pod-f2f230d2-a68c-446e-aea7-2317aa20b8c2" satisfied condition "Succeeded or Failed" +Sep 7 08:30:53.801: INFO: Trying to get logs from node 172.31.51.96 pod pod-f2f230d2-a68c-446e-aea7-2317aa20b8c2 container test-container: +STEP: delete the pod +Sep 7 08:30:53.819: INFO: Waiting for pod pod-f2f230d2-a68c-446e-aea7-2317aa20b8c2 to disappear +Sep 7 08:30:53.822: INFO: Pod pod-f2f230d2-a68c-446e-aea7-2317aa20b8c2 no longer exists +[AfterEach] [sig-storage] EmptyDir volumes + test/e2e/framework/framework.go:188 +Sep 7 08:30:53.822: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "emptydir-9615" for this suite. +•{"msg":"PASSED [sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":356,"completed":184,"skipped":3781,"failed":0} +S +------------------------------ +[sig-storage] ConfigMap + should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] + test/e2e/framework/framework.go:652 +[BeforeEach] [sig-storage] ConfigMap + test/e2e/framework/framework.go:187 +STEP: Creating a kubernetes client +Sep 7 08:30:53.832: INFO: >>> kubeConfig: /tmp/kubeconfig-1156948534 +STEP: Building a namespace api object, basename configmap +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] + test/e2e/framework/framework.go:652 +STEP: Creating configMap with name configmap-test-volume-map-14ccb536-8b0d-4fd9-87f2-1c07bf6e092b +STEP: Creating a pod to test consume configMaps +Sep 7 08:30:53.888: INFO: Waiting up to 5m0s for pod "pod-configmaps-9b42509f-d041-437b-894c-d3cfadd227ba" in namespace "configmap-4957" to be "Succeeded or Failed" +Sep 7 08:30:53.903: INFO: Pod 
"pod-configmaps-9b42509f-d041-437b-894c-d3cfadd227ba": Phase="Pending", Reason="", readiness=false. Elapsed: 15.08649ms +Sep 7 08:30:55.915: INFO: Pod "pod-configmaps-9b42509f-d041-437b-894c-d3cfadd227ba": Phase="Running", Reason="", readiness=true. Elapsed: 2.02712823s +Sep 7 08:30:57.928: INFO: Pod "pod-configmaps-9b42509f-d041-437b-894c-d3cfadd227ba": Phase="Running", Reason="", readiness=false. Elapsed: 4.040003028s +Sep 7 08:30:59.943: INFO: Pod "pod-configmaps-9b42509f-d041-437b-894c-d3cfadd227ba": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.055154501s +STEP: Saw pod success +Sep 7 08:30:59.943: INFO: Pod "pod-configmaps-9b42509f-d041-437b-894c-d3cfadd227ba" satisfied condition "Succeeded or Failed" +Sep 7 08:30:59.947: INFO: Trying to get logs from node 172.31.51.96 pod pod-configmaps-9b42509f-d041-437b-894c-d3cfadd227ba container agnhost-container: +STEP: delete the pod +Sep 7 08:30:59.965: INFO: Waiting for pod pod-configmaps-9b42509f-d041-437b-894c-d3cfadd227ba to disappear +Sep 7 08:30:59.970: INFO: Pod pod-configmaps-9b42509f-d041-437b-894c-d3cfadd227ba no longer exists +[AfterEach] [sig-storage] ConfigMap + test/e2e/framework/framework.go:188 +Sep 7 08:30:59.970: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "configmap-4957" for this suite. 
+ +• [SLOW TEST:6.147 seconds] +[sig-storage] ConfigMap +test/e2e/common/storage/framework.go:23 + should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] + test/e2e/framework/framework.go:652 +------------------------------ +{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]","total":356,"completed":185,"skipped":3782,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-node] Containers + should use the image defaults if command and args are blank [NodeConformance] [Conformance] + test/e2e/framework/framework.go:652 +[BeforeEach] [sig-node] Containers + test/e2e/framework/framework.go:187 +STEP: Creating a kubernetes client +Sep 7 08:30:59.979: INFO: >>> kubeConfig: /tmp/kubeconfig-1156948534 +STEP: Building a namespace api object, basename containers +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[It] should use the image defaults if command and args are blank [NodeConformance] [Conformance] + test/e2e/framework/framework.go:652 +[AfterEach] [sig-node] Containers + test/e2e/framework/framework.go:188 +Sep 7 08:31:02.027: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "containers-7245" for this suite. 
+•{"msg":"PASSED [sig-node] Containers should use the image defaults if command and args are blank [NodeConformance] [Conformance]","total":356,"completed":186,"skipped":3821,"failed":0} +SSSSS +------------------------------ +[sig-cli] Kubectl client Update Demo + should create and stop a replication controller [Conformance] + test/e2e/framework/framework.go:652 +[BeforeEach] [sig-cli] Kubectl client + test/e2e/framework/framework.go:187 +STEP: Creating a kubernetes client +Sep 7 08:31:02.036: INFO: >>> kubeConfig: /tmp/kubeconfig-1156948534 +STEP: Building a namespace api object, basename kubectl +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[BeforeEach] [sig-cli] Kubectl client + test/e2e/kubectl/kubectl.go:245 +[BeforeEach] Update Demo + test/e2e/kubectl/kubectl.go:297 +[It] should create and stop a replication controller [Conformance] + test/e2e/framework/framework.go:652 +STEP: creating a replication controller +Sep 7 08:31:02.074: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1156948534 --namespace=kubectl-5115 create -f -' +Sep 7 08:31:02.322: INFO: stderr: "" +Sep 7 08:31:02.322: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" +STEP: waiting for all containers in name=update-demo pods to come up. +Sep 7 08:31:02.322: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1156948534 --namespace=kubectl-5115 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' +Sep 7 08:31:02.466: INFO: stderr: "" +Sep 7 08:31:02.466: INFO: stdout: "update-demo-nautilus-lqqk9 update-demo-nautilus-q98sf " +Sep 7 08:31:02.466: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1156948534 --namespace=kubectl-5115 get pods update-demo-nautilus-lqqk9 -o template --template={{if (exists . 
"status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' +Sep 7 08:31:02.632: INFO: stderr: "" +Sep 7 08:31:02.632: INFO: stdout: "" +Sep 7 08:31:02.632: INFO: update-demo-nautilus-lqqk9 is created but not running +Sep 7 08:31:07.634: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1156948534 --namespace=kubectl-5115 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' +Sep 7 08:31:08.089: INFO: stderr: "" +Sep 7 08:31:08.089: INFO: stdout: "update-demo-nautilus-lqqk9 update-demo-nautilus-q98sf " +Sep 7 08:31:08.089: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1156948534 --namespace=kubectl-5115 get pods update-demo-nautilus-lqqk9 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' +Sep 7 08:31:09.135: INFO: stderr: "" +Sep 7 08:31:09.135: INFO: stdout: "" +Sep 7 08:31:09.135: INFO: update-demo-nautilus-lqqk9 is created but not running +Sep 7 08:31:14.143: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1156948534 --namespace=kubectl-5115 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' +Sep 7 08:31:14.246: INFO: stderr: "" +Sep 7 08:31:14.246: INFO: stdout: "update-demo-nautilus-lqqk9 update-demo-nautilus-q98sf " +Sep 7 08:31:14.246: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1156948534 --namespace=kubectl-5115 get pods update-demo-nautilus-lqqk9 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}}' +Sep 7 08:31:14.338: INFO: stderr: "" +Sep 7 08:31:14.338: INFO: stdout: "true" +Sep 7 08:31:14.338: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1156948534 --namespace=kubectl-5115 get pods update-demo-nautilus-lqqk9 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}}' +Sep 7 08:31:14.433: INFO: stderr: "" +Sep 7 08:31:14.433: INFO: stdout: "k8s.gcr.io/e2e-test-images/nautilus:1.5" +Sep 7 08:31:14.433: INFO: validating pod update-demo-nautilus-lqqk9 +Sep 7 08:31:14.437: INFO: got data: { + "image": "nautilus.jpg" +} + +Sep 7 08:31:14.438: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . +Sep 7 08:31:14.438: INFO: update-demo-nautilus-lqqk9 is verified up and running +Sep 7 08:31:14.438: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1156948534 --namespace=kubectl-5115 get pods update-demo-nautilus-q98sf -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' +Sep 7 08:31:14.537: INFO: stderr: "" +Sep 7 08:31:14.537: INFO: stdout: "true" +Sep 7 08:31:14.537: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1156948534 --namespace=kubectl-5115 get pods update-demo-nautilus-q98sf -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}}' +Sep 7 08:31:14.626: INFO: stderr: "" +Sep 7 08:31:14.626: INFO: stdout: "k8s.gcr.io/e2e-test-images/nautilus:1.5" +Sep 7 08:31:14.626: INFO: validating pod update-demo-nautilus-q98sf +Sep 7 08:31:14.631: INFO: got data: { + "image": "nautilus.jpg" +} + +Sep 7 08:31:14.631: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . 
+Sep 7 08:31:14.631: INFO: update-demo-nautilus-q98sf is verified up and running +STEP: using delete to clean up resources +Sep 7 08:31:14.631: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1156948534 --namespace=kubectl-5115 delete --grace-period=0 --force -f -' +Sep 7 08:31:14.841: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" +Sep 7 08:31:14.841: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n" +Sep 7 08:31:14.841: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1156948534 --namespace=kubectl-5115 get rc,svc -l name=update-demo --no-headers' +Sep 7 08:31:15.067: INFO: stderr: "No resources found in kubectl-5115 namespace.\n" +Sep 7 08:31:15.067: INFO: stdout: "" +Sep 7 08:31:15.067: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1156948534 --namespace=kubectl-5115 get pods -l name=update-demo -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' +Sep 7 08:31:15.198: INFO: stderr: "" +Sep 7 08:31:15.198: INFO: stdout: "" +[AfterEach] [sig-cli] Kubectl client + test/e2e/framework/framework.go:188 +Sep 7 08:31:15.198: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "kubectl-5115" for this suite. 
+ +• [SLOW TEST:13.173 seconds] +[sig-cli] Kubectl client +test/e2e/kubectl/framework.go:23 + Update Demo + test/e2e/kubectl/kubectl.go:295 + should create and stop a replication controller [Conformance] + test/e2e/framework/framework.go:652 +------------------------------ +{"msg":"PASSED [sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance]","total":356,"completed":187,"skipped":3826,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-cli] Kubectl client Kubectl label + should update the label on a resource [Conformance] + test/e2e/framework/framework.go:652 +[BeforeEach] [sig-cli] Kubectl client + test/e2e/framework/framework.go:187 +STEP: Creating a kubernetes client +Sep 7 08:31:15.210: INFO: >>> kubeConfig: /tmp/kubeconfig-1156948534 +STEP: Building a namespace api object, basename kubectl +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[BeforeEach] [sig-cli] Kubectl client + test/e2e/kubectl/kubectl.go:245 +[BeforeEach] Kubectl label + test/e2e/kubectl/kubectl.go:1334 +STEP: creating the pod +Sep 7 08:31:15.261: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1156948534 --namespace=kubectl-6521 create -f -' +Sep 7 08:31:15.682: INFO: stderr: "" +Sep 7 08:31:15.682: INFO: stdout: "pod/pause created\n" +Sep 7 08:31:15.682: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [pause] +Sep 7 08:31:15.682: INFO: Waiting up to 5m0s for pod "pause" in namespace "kubectl-6521" to be "running and ready" +Sep 7 08:31:15.693: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 11.430016ms +Sep 7 08:31:17.714: INFO: Pod "pause": Phase="Running", Reason="", readiness=true. 
Elapsed: 2.032683191s +Sep 7 08:31:17.714: INFO: Pod "pause" satisfied condition "running and ready" +Sep 7 08:31:17.714: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [pause] +[It] should update the label on a resource [Conformance] + test/e2e/framework/framework.go:652 +STEP: adding the label testing-label with value testing-label-value to a pod +Sep 7 08:31:17.714: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1156948534 --namespace=kubectl-6521 label pods pause testing-label=testing-label-value' +Sep 7 08:31:17.856: INFO: stderr: "" +Sep 7 08:31:17.856: INFO: stdout: "pod/pause labeled\n" +STEP: verifying the pod has the label testing-label with the value testing-label-value +Sep 7 08:31:17.856: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1156948534 --namespace=kubectl-6521 get pod pause -L testing-label' +Sep 7 08:31:17.957: INFO: stderr: "" +Sep 7 08:31:17.958: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 2s testing-label-value\n" +STEP: removing the label testing-label of a pod +Sep 7 08:31:17.958: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1156948534 --namespace=kubectl-6521 label pods pause testing-label-' +Sep 7 08:31:18.102: INFO: stderr: "" +Sep 7 08:31:18.102: INFO: stdout: "pod/pause unlabeled\n" +STEP: verifying the pod doesn't have the label testing-label +Sep 7 08:31:18.102: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1156948534 --namespace=kubectl-6521 get pod pause -L testing-label' +Sep 7 08:31:18.210: INFO: stderr: "" +Sep 7 08:31:18.210: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 3s \n" +[AfterEach] Kubectl label + test/e2e/kubectl/kubectl.go:1340 +STEP: using delete to clean up resources +Sep 7 08:31:18.210: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1156948534 --namespace=kubectl-6521 delete --grace-period=0 --force -f -' +Sep 7 
08:31:18.331: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" +Sep 7 08:31:18.331: INFO: stdout: "pod \"pause\" force deleted\n" +Sep 7 08:31:18.331: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1156948534 --namespace=kubectl-6521 get rc,svc -l name=pause --no-headers' +Sep 7 08:31:18.447: INFO: stderr: "No resources found in kubectl-6521 namespace.\n" +Sep 7 08:31:18.447: INFO: stdout: "" +Sep 7 08:31:18.447: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1156948534 --namespace=kubectl-6521 get pods -l name=pause -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' +Sep 7 08:31:18.555: INFO: stderr: "" +Sep 7 08:31:18.555: INFO: stdout: "" +[AfterEach] [sig-cli] Kubectl client + test/e2e/framework/framework.go:188 +Sep 7 08:31:18.555: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "kubectl-6521" for this suite. 
+•{"msg":"PASSED [sig-cli] Kubectl client Kubectl label should update the label on a resource [Conformance]","total":356,"completed":188,"skipped":3894,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-api-machinery] Garbage collector + should delete RS created by deployment when not orphaning [Conformance] + test/e2e/framework/framework.go:652 +[BeforeEach] [sig-api-machinery] Garbage collector + test/e2e/framework/framework.go:187 +STEP: Creating a kubernetes client +Sep 7 08:31:18.566: INFO: >>> kubeConfig: /tmp/kubeconfig-1156948534 +STEP: Building a namespace api object, basename gc +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[It] should delete RS created by deployment when not orphaning [Conformance] + test/e2e/framework/framework.go:652 +STEP: create the deployment +STEP: Wait for the Deployment to create new ReplicaSet +STEP: delete the deployment +STEP: wait for all rs to be garbage collected +STEP: expected 0 pods, got 2 pods +STEP: Gathering metrics +Sep 7 08:31:19.691: INFO: For apiserver_request_total: +For apiserver_request_latency_seconds: +For apiserver_init_events_total: +For garbage_collector_attempt_to_delete_queue_latency: +For garbage_collector_attempt_to_delete_work_duration: +For garbage_collector_attempt_to_orphan_queue_latency: +For garbage_collector_attempt_to_orphan_work_duration: +For garbage_collector_dirty_processing_latency_microseconds: +For garbage_collector_event_processing_latency_microseconds: +For garbage_collector_graph_changes_queue_latency: +For garbage_collector_graph_changes_work_duration: +For garbage_collector_orphan_processing_latency_microseconds: +For namespace_queue_latency: +For namespace_queue_latency_sum: +For namespace_queue_latency_count: +For namespace_retries: +For namespace_work_duration: +For namespace_work_duration_sum: +For namespace_work_duration_count: +For 
function_duration_seconds: +For errors_total: +For evicted_pods_total: + +[AfterEach] [sig-api-machinery] Garbage collector + test/e2e/framework/framework.go:188 +Sep 7 08:31:19.691: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +W0907 08:31:19.691610 19 metrics_grabber.go:151] Can't find kube-controller-manager pod. Grabbing metrics from kube-controller-manager is disabled. +STEP: Destroying namespace "gc-4664" for this suite. +•{"msg":"PASSED [sig-api-machinery] Garbage collector should delete RS created by deployment when not orphaning [Conformance]","total":356,"completed":189,"skipped":3927,"failed":0} + +------------------------------ +[sig-network] DNS + should provide DNS for pods for Subdomain [Conformance] + test/e2e/framework/framework.go:652 +[BeforeEach] [sig-network] DNS + test/e2e/framework/framework.go:187 +STEP: Creating a kubernetes client +Sep 7 08:31:19.718: INFO: >>> kubeConfig: /tmp/kubeconfig-1156948534 +STEP: Building a namespace api object, basename dns +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[It] should provide DNS for pods for Subdomain [Conformance] + test/e2e/framework/framework.go:652 +STEP: Creating a test headless service +STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-445.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-querier-2.dns-test-service-2.dns-445.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-445.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-querier-2.dns-test-service-2.dns-445.svc.cluster.local;check="$$(dig +notcp +noall +answer +search dns-test-service-2.dns-445.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service-2.dns-445.svc.cluster.local;check="$$(dig 
+tcp +noall +answer +search dns-test-service-2.dns-445.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service-2.dns-445.svc.cluster.local;sleep 1; done + +STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-445.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-querier-2.dns-test-service-2.dns-445.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-445.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-querier-2.dns-test-service-2.dns-445.svc.cluster.local;check="$$(dig +notcp +noall +answer +search dns-test-service-2.dns-445.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service-2.dns-445.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service-2.dns-445.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service-2.dns-445.svc.cluster.local;sleep 1; done + +STEP: creating a pod to probe DNS +STEP: submitting the pod to kubernetes +STEP: retrieving the pod +STEP: looking for the results for each expected name from probers +Sep 7 08:31:23.833: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-445.svc.cluster.local from pod dns-445/dns-test-b829ea6a-0665-4ccd-862e-2389779e06c0: the server could not find the requested resource (get pods dns-test-b829ea6a-0665-4ccd-862e-2389779e06c0) +Sep 7 08:31:23.840: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-445.svc.cluster.local from pod dns-445/dns-test-b829ea6a-0665-4ccd-862e-2389779e06c0: the server could not find the requested resource (get pods dns-test-b829ea6a-0665-4ccd-862e-2389779e06c0) +Sep 7 08:31:23.845: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-445.svc.cluster.local from pod dns-445/dns-test-b829ea6a-0665-4ccd-862e-2389779e06c0: the server 
could not find the requested resource (get pods dns-test-b829ea6a-0665-4ccd-862e-2389779e06c0) +Sep 7 08:31:23.849: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-445.svc.cluster.local from pod dns-445/dns-test-b829ea6a-0665-4ccd-862e-2389779e06c0: the server could not find the requested resource (get pods dns-test-b829ea6a-0665-4ccd-862e-2389779e06c0) +Sep 7 08:31:23.852: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-445.svc.cluster.local from pod dns-445/dns-test-b829ea6a-0665-4ccd-862e-2389779e06c0: the server could not find the requested resource (get pods dns-test-b829ea6a-0665-4ccd-862e-2389779e06c0) +Sep 7 08:31:23.855: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-445.svc.cluster.local from pod dns-445/dns-test-b829ea6a-0665-4ccd-862e-2389779e06c0: the server could not find the requested resource (get pods dns-test-b829ea6a-0665-4ccd-862e-2389779e06c0) +Sep 7 08:31:23.858: INFO: Unable to read jessie_udp@dns-test-service-2.dns-445.svc.cluster.local from pod dns-445/dns-test-b829ea6a-0665-4ccd-862e-2389779e06c0: the server could not find the requested resource (get pods dns-test-b829ea6a-0665-4ccd-862e-2389779e06c0) +Sep 7 08:31:23.861: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-445.svc.cluster.local from pod dns-445/dns-test-b829ea6a-0665-4ccd-862e-2389779e06c0: the server could not find the requested resource (get pods dns-test-b829ea6a-0665-4ccd-862e-2389779e06c0) +Sep 7 08:31:23.861: INFO: Lookups using dns-445/dns-test-b829ea6a-0665-4ccd-862e-2389779e06c0 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-445.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-445.svc.cluster.local wheezy_udp@dns-test-service-2.dns-445.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-445.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-445.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-445.svc.cluster.local 
jessie_udp@dns-test-service-2.dns-445.svc.cluster.local jessie_tcp@dns-test-service-2.dns-445.svc.cluster.local] + +Sep 7 08:31:28.867: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-445.svc.cluster.local from pod dns-445/dns-test-b829ea6a-0665-4ccd-862e-2389779e06c0: the server could not find the requested resource (get pods dns-test-b829ea6a-0665-4ccd-862e-2389779e06c0) +Sep 7 08:31:28.871: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-445.svc.cluster.local from pod dns-445/dns-test-b829ea6a-0665-4ccd-862e-2389779e06c0: the server could not find the requested resource (get pods dns-test-b829ea6a-0665-4ccd-862e-2389779e06c0) +Sep 7 08:31:28.874: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-445.svc.cluster.local from pod dns-445/dns-test-b829ea6a-0665-4ccd-862e-2389779e06c0: the server could not find the requested resource (get pods dns-test-b829ea6a-0665-4ccd-862e-2389779e06c0) +Sep 7 08:31:28.877: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-445.svc.cluster.local from pod dns-445/dns-test-b829ea6a-0665-4ccd-862e-2389779e06c0: the server could not find the requested resource (get pods dns-test-b829ea6a-0665-4ccd-862e-2389779e06c0) +Sep 7 08:31:28.879: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-445.svc.cluster.local from pod dns-445/dns-test-b829ea6a-0665-4ccd-862e-2389779e06c0: the server could not find the requested resource (get pods dns-test-b829ea6a-0665-4ccd-862e-2389779e06c0) +Sep 7 08:31:28.882: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-445.svc.cluster.local from pod dns-445/dns-test-b829ea6a-0665-4ccd-862e-2389779e06c0: the server could not find the requested resource (get pods dns-test-b829ea6a-0665-4ccd-862e-2389779e06c0) +Sep 7 08:31:28.887: INFO: Unable to read jessie_udp@dns-test-service-2.dns-445.svc.cluster.local from pod dns-445/dns-test-b829ea6a-0665-4ccd-862e-2389779e06c0: the server could not find the requested resource (get pods 
dns-test-b829ea6a-0665-4ccd-862e-2389779e06c0) +Sep 7 08:31:28.894: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-445.svc.cluster.local from pod dns-445/dns-test-b829ea6a-0665-4ccd-862e-2389779e06c0: the server could not find the requested resource (get pods dns-test-b829ea6a-0665-4ccd-862e-2389779e06c0) +Sep 7 08:31:28.894: INFO: Lookups using dns-445/dns-test-b829ea6a-0665-4ccd-862e-2389779e06c0 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-445.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-445.svc.cluster.local wheezy_udp@dns-test-service-2.dns-445.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-445.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-445.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-445.svc.cluster.local jessie_udp@dns-test-service-2.dns-445.svc.cluster.local jessie_tcp@dns-test-service-2.dns-445.svc.cluster.local] + +Sep 7 08:31:33.867: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-445.svc.cluster.local from pod dns-445/dns-test-b829ea6a-0665-4ccd-862e-2389779e06c0: the server could not find the requested resource (get pods dns-test-b829ea6a-0665-4ccd-862e-2389779e06c0) +Sep 7 08:31:33.870: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-445.svc.cluster.local from pod dns-445/dns-test-b829ea6a-0665-4ccd-862e-2389779e06c0: the server could not find the requested resource (get pods dns-test-b829ea6a-0665-4ccd-862e-2389779e06c0) +Sep 7 08:31:33.872: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-445.svc.cluster.local from pod dns-445/dns-test-b829ea6a-0665-4ccd-862e-2389779e06c0: the server could not find the requested resource (get pods dns-test-b829ea6a-0665-4ccd-862e-2389779e06c0) +Sep 7 08:31:33.874: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-445.svc.cluster.local from pod dns-445/dns-test-b829ea6a-0665-4ccd-862e-2389779e06c0: the server could not find the requested resource (get pods 
dns-test-b829ea6a-0665-4ccd-862e-2389779e06c0) +Sep 7 08:31:33.877: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-445.svc.cluster.local from pod dns-445/dns-test-b829ea6a-0665-4ccd-862e-2389779e06c0: the server could not find the requested resource (get pods dns-test-b829ea6a-0665-4ccd-862e-2389779e06c0) +Sep 7 08:31:33.879: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-445.svc.cluster.local from pod dns-445/dns-test-b829ea6a-0665-4ccd-862e-2389779e06c0: the server could not find the requested resource (get pods dns-test-b829ea6a-0665-4ccd-862e-2389779e06c0) +Sep 7 08:31:33.881: INFO: Unable to read jessie_udp@dns-test-service-2.dns-445.svc.cluster.local from pod dns-445/dns-test-b829ea6a-0665-4ccd-862e-2389779e06c0: the server could not find the requested resource (get pods dns-test-b829ea6a-0665-4ccd-862e-2389779e06c0) +Sep 7 08:31:33.884: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-445.svc.cluster.local from pod dns-445/dns-test-b829ea6a-0665-4ccd-862e-2389779e06c0: the server could not find the requested resource (get pods dns-test-b829ea6a-0665-4ccd-862e-2389779e06c0) +Sep 7 08:31:33.884: INFO: Lookups using dns-445/dns-test-b829ea6a-0665-4ccd-862e-2389779e06c0 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-445.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-445.svc.cluster.local wheezy_udp@dns-test-service-2.dns-445.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-445.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-445.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-445.svc.cluster.local jessie_udp@dns-test-service-2.dns-445.svc.cluster.local jessie_tcp@dns-test-service-2.dns-445.svc.cluster.local] + +Sep 7 08:31:38.866: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-445.svc.cluster.local from pod dns-445/dns-test-b829ea6a-0665-4ccd-862e-2389779e06c0: the server could not find the requested resource (get pods 
dns-test-b829ea6a-0665-4ccd-862e-2389779e06c0) +Sep 7 08:31:38.870: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-445.svc.cluster.local from pod dns-445/dns-test-b829ea6a-0665-4ccd-862e-2389779e06c0: the server could not find the requested resource (get pods dns-test-b829ea6a-0665-4ccd-862e-2389779e06c0) +Sep 7 08:31:38.873: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-445.svc.cluster.local from pod dns-445/dns-test-b829ea6a-0665-4ccd-862e-2389779e06c0: the server could not find the requested resource (get pods dns-test-b829ea6a-0665-4ccd-862e-2389779e06c0) +Sep 7 08:31:38.876: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-445.svc.cluster.local from pod dns-445/dns-test-b829ea6a-0665-4ccd-862e-2389779e06c0: the server could not find the requested resource (get pods dns-test-b829ea6a-0665-4ccd-862e-2389779e06c0) +Sep 7 08:31:38.878: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-445.svc.cluster.local from pod dns-445/dns-test-b829ea6a-0665-4ccd-862e-2389779e06c0: the server could not find the requested resource (get pods dns-test-b829ea6a-0665-4ccd-862e-2389779e06c0) +Sep 7 08:31:38.881: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-445.svc.cluster.local from pod dns-445/dns-test-b829ea6a-0665-4ccd-862e-2389779e06c0: the server could not find the requested resource (get pods dns-test-b829ea6a-0665-4ccd-862e-2389779e06c0) +Sep 7 08:31:38.883: INFO: Unable to read jessie_udp@dns-test-service-2.dns-445.svc.cluster.local from pod dns-445/dns-test-b829ea6a-0665-4ccd-862e-2389779e06c0: the server could not find the requested resource (get pods dns-test-b829ea6a-0665-4ccd-862e-2389779e06c0) +Sep 7 08:31:38.885: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-445.svc.cluster.local from pod dns-445/dns-test-b829ea6a-0665-4ccd-862e-2389779e06c0: the server could not find the requested resource (get pods dns-test-b829ea6a-0665-4ccd-862e-2389779e06c0) +Sep 7 08:31:38.885: INFO: Lookups 
using dns-445/dns-test-b829ea6a-0665-4ccd-862e-2389779e06c0 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-445.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-445.svc.cluster.local wheezy_udp@dns-test-service-2.dns-445.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-445.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-445.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-445.svc.cluster.local jessie_udp@dns-test-service-2.dns-445.svc.cluster.local jessie_tcp@dns-test-service-2.dns-445.svc.cluster.local] + +Sep 7 08:31:43.867: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-445.svc.cluster.local from pod dns-445/dns-test-b829ea6a-0665-4ccd-862e-2389779e06c0: the server could not find the requested resource (get pods dns-test-b829ea6a-0665-4ccd-862e-2389779e06c0) +Sep 7 08:31:43.870: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-445.svc.cluster.local from pod dns-445/dns-test-b829ea6a-0665-4ccd-862e-2389779e06c0: the server could not find the requested resource (get pods dns-test-b829ea6a-0665-4ccd-862e-2389779e06c0) +Sep 7 08:31:43.872: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-445.svc.cluster.local from pod dns-445/dns-test-b829ea6a-0665-4ccd-862e-2389779e06c0: the server could not find the requested resource (get pods dns-test-b829ea6a-0665-4ccd-862e-2389779e06c0) +Sep 7 08:31:43.874: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-445.svc.cluster.local from pod dns-445/dns-test-b829ea6a-0665-4ccd-862e-2389779e06c0: the server could not find the requested resource (get pods dns-test-b829ea6a-0665-4ccd-862e-2389779e06c0) +Sep 7 08:31:43.877: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-445.svc.cluster.local from pod dns-445/dns-test-b829ea6a-0665-4ccd-862e-2389779e06c0: the server could not find the requested resource (get pods dns-test-b829ea6a-0665-4ccd-862e-2389779e06c0) +Sep 7 08:31:43.879: INFO: Unable to 
read jessie_tcp@dns-querier-2.dns-test-service-2.dns-445.svc.cluster.local from pod dns-445/dns-test-b829ea6a-0665-4ccd-862e-2389779e06c0: the server could not find the requested resource (get pods dns-test-b829ea6a-0665-4ccd-862e-2389779e06c0) +Sep 7 08:31:43.881: INFO: Unable to read jessie_udp@dns-test-service-2.dns-445.svc.cluster.local from pod dns-445/dns-test-b829ea6a-0665-4ccd-862e-2389779e06c0: the server could not find the requested resource (get pods dns-test-b829ea6a-0665-4ccd-862e-2389779e06c0) +Sep 7 08:31:43.883: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-445.svc.cluster.local from pod dns-445/dns-test-b829ea6a-0665-4ccd-862e-2389779e06c0: the server could not find the requested resource (get pods dns-test-b829ea6a-0665-4ccd-862e-2389779e06c0) +Sep 7 08:31:43.883: INFO: Lookups using dns-445/dns-test-b829ea6a-0665-4ccd-862e-2389779e06c0 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-445.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-445.svc.cluster.local wheezy_udp@dns-test-service-2.dns-445.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-445.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-445.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-445.svc.cluster.local jessie_udp@dns-test-service-2.dns-445.svc.cluster.local jessie_tcp@dns-test-service-2.dns-445.svc.cluster.local] + +Sep 7 08:31:48.867: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-445.svc.cluster.local from pod dns-445/dns-test-b829ea6a-0665-4ccd-862e-2389779e06c0: the server could not find the requested resource (get pods dns-test-b829ea6a-0665-4ccd-862e-2389779e06c0) +Sep 7 08:31:48.871: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-445.svc.cluster.local from pod dns-445/dns-test-b829ea6a-0665-4ccd-862e-2389779e06c0: the server could not find the requested resource (get pods dns-test-b829ea6a-0665-4ccd-862e-2389779e06c0) +Sep 7 08:31:48.875: INFO: Unable to 
read wheezy_udp@dns-test-service-2.dns-445.svc.cluster.local from pod dns-445/dns-test-b829ea6a-0665-4ccd-862e-2389779e06c0: the server could not find the requested resource (get pods dns-test-b829ea6a-0665-4ccd-862e-2389779e06c0) +Sep 7 08:31:48.881: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-445.svc.cluster.local from pod dns-445/dns-test-b829ea6a-0665-4ccd-862e-2389779e06c0: the server could not find the requested resource (get pods dns-test-b829ea6a-0665-4ccd-862e-2389779e06c0) +Sep 7 08:31:48.884: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-445.svc.cluster.local from pod dns-445/dns-test-b829ea6a-0665-4ccd-862e-2389779e06c0: the server could not find the requested resource (get pods dns-test-b829ea6a-0665-4ccd-862e-2389779e06c0) +Sep 7 08:31:48.889: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-445.svc.cluster.local from pod dns-445/dns-test-b829ea6a-0665-4ccd-862e-2389779e06c0: the server could not find the requested resource (get pods dns-test-b829ea6a-0665-4ccd-862e-2389779e06c0) +Sep 7 08:31:48.902: INFO: Unable to read jessie_udp@dns-test-service-2.dns-445.svc.cluster.local from pod dns-445/dns-test-b829ea6a-0665-4ccd-862e-2389779e06c0: the server could not find the requested resource (get pods dns-test-b829ea6a-0665-4ccd-862e-2389779e06c0) +Sep 7 08:31:48.915: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-445.svc.cluster.local from pod dns-445/dns-test-b829ea6a-0665-4ccd-862e-2389779e06c0: the server could not find the requested resource (get pods dns-test-b829ea6a-0665-4ccd-862e-2389779e06c0) +Sep 7 08:31:48.915: INFO: Lookups using dns-445/dns-test-b829ea6a-0665-4ccd-862e-2389779e06c0 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-445.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-445.svc.cluster.local wheezy_udp@dns-test-service-2.dns-445.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-445.svc.cluster.local 
jessie_udp@dns-querier-2.dns-test-service-2.dns-445.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-445.svc.cluster.local jessie_udp@dns-test-service-2.dns-445.svc.cluster.local jessie_tcp@dns-test-service-2.dns-445.svc.cluster.local] + +Sep 7 08:31:53.867: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-445.svc.cluster.local from pod dns-445/dns-test-b829ea6a-0665-4ccd-862e-2389779e06c0: the server could not find the requested resource (get pods dns-test-b829ea6a-0665-4ccd-862e-2389779e06c0) +Sep 7 08:31:53.870: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-445.svc.cluster.local from pod dns-445/dns-test-b829ea6a-0665-4ccd-862e-2389779e06c0: the server could not find the requested resource (get pods dns-test-b829ea6a-0665-4ccd-862e-2389779e06c0) +Sep 7 08:31:53.872: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-445.svc.cluster.local from pod dns-445/dns-test-b829ea6a-0665-4ccd-862e-2389779e06c0: the server could not find the requested resource (get pods dns-test-b829ea6a-0665-4ccd-862e-2389779e06c0) +Sep 7 08:31:53.876: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-445.svc.cluster.local from pod dns-445/dns-test-b829ea6a-0665-4ccd-862e-2389779e06c0: the server could not find the requested resource (get pods dns-test-b829ea6a-0665-4ccd-862e-2389779e06c0) +Sep 7 08:31:53.879: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-445.svc.cluster.local from pod dns-445/dns-test-b829ea6a-0665-4ccd-862e-2389779e06c0: the server could not find the requested resource (get pods dns-test-b829ea6a-0665-4ccd-862e-2389779e06c0) +Sep 7 08:31:53.882: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-445.svc.cluster.local from pod dns-445/dns-test-b829ea6a-0665-4ccd-862e-2389779e06c0: the server could not find the requested resource (get pods dns-test-b829ea6a-0665-4ccd-862e-2389779e06c0) +Sep 7 08:31:53.885: INFO: Unable to read 
jessie_udp@dns-test-service-2.dns-445.svc.cluster.local from pod dns-445/dns-test-b829ea6a-0665-4ccd-862e-2389779e06c0: the server could not find the requested resource (get pods dns-test-b829ea6a-0665-4ccd-862e-2389779e06c0) +Sep 7 08:31:53.888: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-445.svc.cluster.local from pod dns-445/dns-test-b829ea6a-0665-4ccd-862e-2389779e06c0: the server could not find the requested resource (get pods dns-test-b829ea6a-0665-4ccd-862e-2389779e06c0) +Sep 7 08:31:53.888: INFO: Lookups using dns-445/dns-test-b829ea6a-0665-4ccd-862e-2389779e06c0 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-445.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-445.svc.cluster.local wheezy_udp@dns-test-service-2.dns-445.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-445.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-445.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-445.svc.cluster.local jessie_udp@dns-test-service-2.dns-445.svc.cluster.local jessie_tcp@dns-test-service-2.dns-445.svc.cluster.local] + +Sep 7 08:31:58.883: INFO: DNS probes using dns-445/dns-test-b829ea6a-0665-4ccd-862e-2389779e06c0 succeeded + +STEP: deleting the pod +STEP: deleting the test headless service +[AfterEach] [sig-network] DNS + test/e2e/framework/framework.go:188 +Sep 7 08:31:58.958: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "dns-445" for this suite. 
+ +• [SLOW TEST:39.323 seconds] +[sig-network] DNS +test/e2e/network/common/framework.go:23 + should provide DNS for pods for Subdomain [Conformance] + test/e2e/framework/framework.go:652 +------------------------------ +{"msg":"PASSED [sig-network] DNS should provide DNS for pods for Subdomain [Conformance]","total":356,"completed":190,"skipped":3927,"failed":0} +SSSSSSSSSSSSSSS +------------------------------ +[sig-storage] EmptyDir volumes + should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/framework/framework.go:652 +[BeforeEach] [sig-storage] EmptyDir volumes + test/e2e/framework/framework.go:187 +STEP: Creating a kubernetes client +Sep 7 08:31:59.041: INFO: >>> kubeConfig: /tmp/kubeconfig-1156948534 +STEP: Building a namespace api object, basename emptydir +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[It] should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/framework/framework.go:652 +STEP: Creating a pod to test emptydir 0666 on tmpfs +Sep 7 08:31:59.149: INFO: Waiting up to 5m0s for pod "pod-6b3f1a29-822e-4c03-a043-3d7d631d6e8f" in namespace "emptydir-2044" to be "Succeeded or Failed" +Sep 7 08:31:59.202: INFO: Pod "pod-6b3f1a29-822e-4c03-a043-3d7d631d6e8f": Phase="Pending", Reason="", readiness=false. Elapsed: 53.54208ms +Sep 7 08:32:01.206: INFO: Pod "pod-6b3f1a29-822e-4c03-a043-3d7d631d6e8f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.057692826s +Sep 7 08:32:03.216: INFO: Pod "pod-6b3f1a29-822e-4c03-a043-3d7d631d6e8f": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.067677277s +STEP: Saw pod success +Sep 7 08:32:03.216: INFO: Pod "pod-6b3f1a29-822e-4c03-a043-3d7d631d6e8f" satisfied condition "Succeeded or Failed" +Sep 7 08:32:03.219: INFO: Trying to get logs from node 172.31.51.96 pod pod-6b3f1a29-822e-4c03-a043-3d7d631d6e8f container test-container: +STEP: delete the pod +Sep 7 08:32:03.235: INFO: Waiting for pod pod-6b3f1a29-822e-4c03-a043-3d7d631d6e8f to disappear +Sep 7 08:32:03.239: INFO: Pod pod-6b3f1a29-822e-4c03-a043-3d7d631d6e8f no longer exists +[AfterEach] [sig-storage] EmptyDir volumes + test/e2e/framework/framework.go:188 +Sep 7 08:32:03.239: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "emptydir-2044" for this suite. +•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":356,"completed":191,"skipped":3942,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + works for multiple CRDs of same group but different versions [Conformance] + test/e2e/framework/framework.go:652 +[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + test/e2e/framework/framework.go:187 +STEP: Creating a kubernetes client +Sep 7 08:32:03.247: INFO: >>> kubeConfig: /tmp/kubeconfig-1156948534 +STEP: Building a namespace api object, basename crd-publish-openapi +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[It] works for multiple CRDs of same group but different versions [Conformance] + test/e2e/framework/framework.go:652 +STEP: CRs in the same group but different versions (one multiversion CRD) show up in OpenAPI documentation +Sep 7 08:32:03.277: INFO: >>> kubeConfig: /tmp/kubeconfig-1156948534 +STEP: CRs in the same group but different versions (two CRDs) 
show up in OpenAPI documentation +Sep 7 08:32:18.907: INFO: >>> kubeConfig: /tmp/kubeconfig-1156948534 +Sep 7 08:32:21.917: INFO: >>> kubeConfig: /tmp/kubeconfig-1156948534 +[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + test/e2e/framework/framework.go:188 +Sep 7 08:32:35.126: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "crd-publish-openapi-7030" for this suite. + +• [SLOW TEST:31.896 seconds] +[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] +test/e2e/apimachinery/framework.go:23 + works for multiple CRDs of same group but different versions [Conformance] + test/e2e/framework/framework.go:652 +------------------------------ +{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group but different versions [Conformance]","total":356,"completed":192,"skipped":3996,"failed":0} +SSSSSSSSSSSSSS +------------------------------ +[sig-node] Probing container + should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] + test/e2e/framework/framework.go:652 +[BeforeEach] [sig-node] Probing container + test/e2e/framework/framework.go:187 +STEP: Creating a kubernetes client +Sep 7 08:32:35.143: INFO: >>> kubeConfig: /tmp/kubeconfig-1156948534 +STEP: Building a namespace api object, basename container-probe +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[BeforeEach] [sig-node] Probing container + test/e2e/common/node/container_probe.go:61 +[It] should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] + test/e2e/framework/framework.go:652 +STEP: Creating pod liveness-e1121e05-8b39-41bc-9f73-39028e4c0027 in namespace container-probe-3368 +Sep 7 08:32:37.223: INFO: Started pod liveness-e1121e05-8b39-41bc-9f73-39028e4c0027 in namespace 
container-probe-3368 +STEP: checking the pod's current state and verifying that restartCount is present +Sep 7 08:32:37.226: INFO: Initial restart count of pod liveness-e1121e05-8b39-41bc-9f73-39028e4c0027 is 0 +Sep 7 08:32:57.337: INFO: Restart count of pod container-probe-3368/liveness-e1121e05-8b39-41bc-9f73-39028e4c0027 is now 1 (20.110858399s elapsed) +STEP: deleting the pod +[AfterEach] [sig-node] Probing container + test/e2e/framework/framework.go:188 +Sep 7 08:32:57.355: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "container-probe-3368" for this suite. + +• [SLOW TEST:22.221 seconds] +[sig-node] Probing container +test/e2e/common/node/framework.go:23 + should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] + test/e2e/framework/framework.go:652 +------------------------------ +{"msg":"PASSED [sig-node] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":356,"completed":193,"skipped":4010,"failed":0} +SSSSSSSSSSSSSSS +------------------------------ +[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] + should be able to convert from CR v1 to CR v2 [Conformance] + test/e2e/framework/framework.go:652 +[BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] + test/e2e/framework/framework.go:187 +STEP: Creating a kubernetes client +Sep 7 08:32:57.364: INFO: >>> kubeConfig: /tmp/kubeconfig-1156948534 +STEP: Building a namespace api object, basename crd-webhook +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] + test/e2e/apimachinery/crd_conversion_webhook.go:128 +STEP: Setting up server cert +STEP: Create role binding to let cr conversion webhook read extension-apiserver-authentication 
+STEP: Deploying the custom resource conversion webhook pod +STEP: Wait for the deployment to be ready +Sep 7 08:32:58.693: INFO: deployment "sample-crd-conversion-webhook-deployment" doesn't have the required revision set +Sep 7 08:33:00.725: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2022, time.September, 7, 8, 32, 58, 0, time.Local), LastTransitionTime:time.Date(2022, time.September, 7, 8, 32, 58, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2022, time.September, 7, 8, 32, 58, 0, time.Local), LastTransitionTime:time.Date(2022, time.September, 7, 8, 32, 58, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-656754656d\" is progressing."}}, CollisionCount:(*int32)(nil)} +STEP: Deploying the webhook service +STEP: Verifying the service has paired with the endpoint +Sep 7 08:33:03.747: INFO: Waiting for amount of service:e2e-test-crd-conversion-webhook endpoints to be 1 +[It] should be able to convert from CR v1 to CR v2 [Conformance] + test/e2e/framework/framework.go:652 +Sep 7 08:33:03.753: INFO: >>> kubeConfig: /tmp/kubeconfig-1156948534 +STEP: Creating a v1 custom resource +STEP: v2 custom resource should be converted +[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] + test/e2e/framework/framework.go:188 +Sep 7 08:33:06.916: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "crd-webhook-429" for this suite. 
+[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] + test/e2e/apimachinery/crd_conversion_webhook.go:139 + +• [SLOW TEST:9.677 seconds] +[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] +test/e2e/apimachinery/framework.go:23 + should be able to convert from CR v1 to CR v2 [Conformance] + test/e2e/framework/framework.go:652 +------------------------------ +{"msg":"PASSED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]","total":356,"completed":194,"skipped":4025,"failed":0} +[sig-node] RuntimeClass + should support RuntimeClasses API operations [Conformance] + test/e2e/framework/framework.go:652 +[BeforeEach] [sig-node] RuntimeClass + test/e2e/framework/framework.go:187 +STEP: Creating a kubernetes client +Sep 7 08:33:07.041: INFO: >>> kubeConfig: /tmp/kubeconfig-1156948534 +STEP: Building a namespace api object, basename runtimeclass +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[It] should support RuntimeClasses API operations [Conformance] + test/e2e/framework/framework.go:652 +STEP: getting /apis +STEP: getting /apis/node.k8s.io +STEP: getting /apis/node.k8s.io/v1 +STEP: creating +STEP: watching +Sep 7 08:33:07.155: INFO: starting watch +STEP: getting +STEP: listing +STEP: patching +STEP: updating +Sep 7 08:33:07.185: INFO: waiting for watch events with expected annotations +STEP: deleting +STEP: deleting a collection +[AfterEach] [sig-node] RuntimeClass + test/e2e/framework/framework.go:188 +Sep 7 08:33:07.217: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "runtimeclass-9849" for this suite. 
+•{"msg":"PASSED [sig-node] RuntimeClass should support RuntimeClasses API operations [Conformance]","total":356,"completed":195,"skipped":4025,"failed":0} +SSSSSSSSSSS +------------------------------ +[sig-apps] DisruptionController + should observe PodDisruptionBudget status updated [Conformance] + test/e2e/framework/framework.go:652 +[BeforeEach] [sig-apps] DisruptionController + test/e2e/framework/framework.go:187 +STEP: Creating a kubernetes client +Sep 7 08:33:07.226: INFO: >>> kubeConfig: /tmp/kubeconfig-1156948534 +STEP: Building a namespace api object, basename disruption +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[BeforeEach] [sig-apps] DisruptionController + test/e2e/apps/disruption.go:71 +[It] should observe PodDisruptionBudget status updated [Conformance] + test/e2e/framework/framework.go:652 +STEP: Waiting for the pdb to be processed +STEP: Waiting for all pods to be running +Sep 7 08:33:09.401: INFO: running pods: 0 < 3 +Sep 7 08:33:11.407: INFO: running pods: 0 < 3 +[AfterEach] [sig-apps] DisruptionController + test/e2e/framework/framework.go:188 +Sep 7 08:33:13.415: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "disruption-672" for this suite. 
+ +• [SLOW TEST:6.206 seconds] +[sig-apps] DisruptionController +test/e2e/apps/framework.go:23 + should observe PodDisruptionBudget status updated [Conformance] + test/e2e/framework/framework.go:652 +------------------------------ +{"msg":"PASSED [sig-apps] DisruptionController should observe PodDisruptionBudget status updated [Conformance]","total":356,"completed":196,"skipped":4036,"failed":0} +S +------------------------------ +[sig-node] Pods + should delete a collection of pods [Conformance] + test/e2e/framework/framework.go:652 +[BeforeEach] [sig-node] Pods + test/e2e/framework/framework.go:187 +STEP: Creating a kubernetes client +Sep 7 08:33:13.432: INFO: >>> kubeConfig: /tmp/kubeconfig-1156948534 +STEP: Building a namespace api object, basename pods +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[BeforeEach] [sig-node] Pods + test/e2e/common/node/pods.go:191 +[It] should delete a collection of pods [Conformance] + test/e2e/framework/framework.go:652 +STEP: Create set of pods +Sep 7 08:33:13.489: INFO: created test-pod-1 +Sep 7 08:33:13.494: INFO: created test-pod-2 +Sep 7 08:33:13.498: INFO: created test-pod-3 +STEP: waiting for all 3 pods to be running +Sep 7 08:33:13.498: INFO: Waiting up to 5m0s for all pods (need at least 3) in namespace 'pods-2353' to be running and ready +Sep 7 08:33:13.557: INFO: The status of Pod test-pod-1 is Pending (Ready = false), waiting for it to be either Running (with Ready = true) or Failed +Sep 7 08:33:13.557: INFO: The status of Pod test-pod-2 is Pending (Ready = false), waiting for it to be either Running (with Ready = true) or Failed +Sep 7 08:33:13.557: INFO: The status of Pod test-pod-3 is Pending (Ready = false), waiting for it to be either Running (with Ready = true) or Failed +Sep 7 08:33:13.557: INFO: 0 / 3 pods in namespace 'pods-2353' are running and ready (0 seconds elapsed) +Sep 7 08:33:13.557: INFO: expected 0 pod 
replicas in namespace 'pods-2353', 0 are Running and Ready. +Sep 7 08:33:13.557: INFO: POD NODE PHASE GRACE CONDITIONS +Sep 7 08:33:13.557: INFO: test-pod-1 172.31.51.97 Pending [{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-09-07 08:33:13 +0000 UTC }] +Sep 7 08:33:13.557: INFO: test-pod-2 172.31.51.96 Pending [{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-09-07 08:33:13 +0000 UTC }] +Sep 7 08:33:13.557: INFO: test-pod-3 172.31.51.97 Pending [{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-09-07 08:33:13 +0000 UTC }] +Sep 7 08:33:13.557: INFO: +Sep 7 08:33:15.591: INFO: The status of Pod test-pod-2 is Pending (Ready = false), waiting for it to be either Running (with Ready = true) or Failed +Sep 7 08:33:15.591: INFO: 2 / 3 pods in namespace 'pods-2353' are running and ready (2 seconds elapsed) +Sep 7 08:33:15.591: INFO: expected 0 pod replicas in namespace 'pods-2353', 0 are Running and Ready. +Sep 7 08:33:15.591: INFO: POD NODE PHASE GRACE CONDITIONS +Sep 7 08:33:15.591: INFO: test-pod-2 172.31.51.96 Pending [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-09-07 08:33:13 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-09-07 08:33:13 +0000 UTC ContainersNotReady containers with unready status: [token-test]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-09-07 08:33:13 +0000 UTC ContainersNotReady containers with unready status: [token-test]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-09-07 08:33:13 +0000 UTC }] +Sep 7 08:33:15.591: INFO: +Sep 7 08:33:17.565: INFO: 3 / 3 pods in namespace 'pods-2353' are running and ready (4 seconds elapsed) +Sep 7 08:33:17.565: INFO: expected 0 pod replicas in namespace 'pods-2353', 0 are Running and Ready. 
+STEP: waiting for all pods to be deleted +Sep 7 08:33:17.596: INFO: Pod quantity 3 is different from expected quantity 0 +Sep 7 08:33:18.637: INFO: Pod quantity 3 is different from expected quantity 0 +[AfterEach] [sig-node] Pods + test/e2e/framework/framework.go:188 +Sep 7 08:33:19.615: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "pods-2353" for this suite. + +• [SLOW TEST:6.193 seconds] +[sig-node] Pods +test/e2e/common/node/framework.go:23 + should delete a collection of pods [Conformance] + test/e2e/framework/framework.go:652 +------------------------------ +{"msg":"PASSED [sig-node] Pods should delete a collection of pods [Conformance]","total":356,"completed":197,"skipped":4037,"failed":0} +SSSSSSSSSSSSS +------------------------------ +[sig-network] Proxy version v1 + A set of valid responses are returned for both pod and service ProxyWithPath [Conformance] + test/e2e/framework/framework.go:652 +[BeforeEach] version v1 + test/e2e/framework/framework.go:187 +STEP: Creating a kubernetes client +Sep 7 08:33:19.626: INFO: >>> kubeConfig: /tmp/kubeconfig-1156948534 +STEP: Building a namespace api object, basename proxy +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[It] A set of valid responses are returned for both pod and service ProxyWithPath [Conformance] + test/e2e/framework/framework.go:652 +Sep 7 08:33:19.690: INFO: Creating pod... +Sep 7 08:33:23.750: INFO: Creating service... 
+Sep 7 08:33:23.767: INFO: Starting http.Client for https://10.68.0.1:443/api/v1/namespaces/proxy-3690/pods/agnhost/proxy/some/path/with/DELETE +Sep 7 08:33:23.787: INFO: http.Client request:DELETE | StatusCode:200 | Response:foo | Method:DELETE +Sep 7 08:33:23.787: INFO: Starting http.Client for https://10.68.0.1:443/api/v1/namespaces/proxy-3690/pods/agnhost/proxy/some/path/with/GET +Sep 7 08:33:23.814: INFO: http.Client request:GET | StatusCode:200 | Response:foo | Method:GET +Sep 7 08:33:23.814: INFO: Starting http.Client for https://10.68.0.1:443/api/v1/namespaces/proxy-3690/pods/agnhost/proxy/some/path/with/HEAD +Sep 7 08:33:23.827: INFO: http.Client request:HEAD | StatusCode:200 +Sep 7 08:33:23.827: INFO: Starting http.Client for https://10.68.0.1:443/api/v1/namespaces/proxy-3690/pods/agnhost/proxy/some/path/with/OPTIONS +Sep 7 08:33:23.832: INFO: http.Client request:OPTIONS | StatusCode:200 | Response:foo | Method:OPTIONS +Sep 7 08:33:23.833: INFO: Starting http.Client for https://10.68.0.1:443/api/v1/namespaces/proxy-3690/pods/agnhost/proxy/some/path/with/PATCH +Sep 7 08:33:23.839: INFO: http.Client request:PATCH | StatusCode:200 | Response:foo | Method:PATCH +Sep 7 08:33:23.839: INFO: Starting http.Client for https://10.68.0.1:443/api/v1/namespaces/proxy-3690/pods/agnhost/proxy/some/path/with/POST +Sep 7 08:33:23.843: INFO: http.Client request:POST | StatusCode:200 | Response:foo | Method:POST +Sep 7 08:33:23.843: INFO: Starting http.Client for https://10.68.0.1:443/api/v1/namespaces/proxy-3690/pods/agnhost/proxy/some/path/with/PUT +Sep 7 08:33:23.846: INFO: http.Client request:PUT | StatusCode:200 | Response:foo | Method:PUT +Sep 7 08:33:23.846: INFO: Starting http.Client for https://10.68.0.1:443/api/v1/namespaces/proxy-3690/services/test-service/proxy/some/path/with/DELETE +Sep 7 08:33:23.850: INFO: http.Client request:DELETE | StatusCode:200 | Response:foo | Method:DELETE +Sep 7 08:33:23.850: INFO: Starting http.Client for 
https://10.68.0.1:443/api/v1/namespaces/proxy-3690/services/test-service/proxy/some/path/with/GET +Sep 7 08:33:23.854: INFO: http.Client request:GET | StatusCode:200 | Response:foo | Method:GET +Sep 7 08:33:23.854: INFO: Starting http.Client for https://10.68.0.1:443/api/v1/namespaces/proxy-3690/services/test-service/proxy/some/path/with/HEAD +Sep 7 08:33:23.857: INFO: http.Client request:HEAD | StatusCode:200 +Sep 7 08:33:23.857: INFO: Starting http.Client for https://10.68.0.1:443/api/v1/namespaces/proxy-3690/services/test-service/proxy/some/path/with/OPTIONS +Sep 7 08:33:23.861: INFO: http.Client request:OPTIONS | StatusCode:200 | Response:foo | Method:OPTIONS +Sep 7 08:33:23.861: INFO: Starting http.Client for https://10.68.0.1:443/api/v1/namespaces/proxy-3690/services/test-service/proxy/some/path/with/PATCH +Sep 7 08:33:23.865: INFO: http.Client request:PATCH | StatusCode:200 | Response:foo | Method:PATCH +Sep 7 08:33:23.865: INFO: Starting http.Client for https://10.68.0.1:443/api/v1/namespaces/proxy-3690/services/test-service/proxy/some/path/with/POST +Sep 7 08:33:23.868: INFO: http.Client request:POST | StatusCode:200 | Response:foo | Method:POST +Sep 7 08:33:23.868: INFO: Starting http.Client for https://10.68.0.1:443/api/v1/namespaces/proxy-3690/services/test-service/proxy/some/path/with/PUT +Sep 7 08:33:23.872: INFO: http.Client request:PUT | StatusCode:200 | Response:foo | Method:PUT +[AfterEach] version v1 + test/e2e/framework/framework.go:188 +Sep 7 08:33:23.872: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "proxy-3690" for this suite. 
+•{"msg":"PASSED [sig-network] Proxy version v1 A set of valid responses are returned for both pod and service ProxyWithPath [Conformance]","total":356,"completed":198,"skipped":4050,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-node] Probing container + with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] + test/e2e/framework/framework.go:652 +[BeforeEach] [sig-node] Probing container + test/e2e/framework/framework.go:187 +STEP: Creating a kubernetes client +Sep 7 08:33:23.883: INFO: >>> kubeConfig: /tmp/kubeconfig-1156948534 +STEP: Building a namespace api object, basename container-probe +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[BeforeEach] [sig-node] Probing container + test/e2e/common/node/container_probe.go:61 +[It] with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] + test/e2e/framework/framework.go:652 +[AfterEach] [sig-node] Probing container + test/e2e/framework/framework.go:188 +Sep 7 08:34:24.004: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "container-probe-1962" for this suite. 
+ +• [SLOW TEST:60.139 seconds] +[sig-node] Probing container +test/e2e/common/node/framework.go:23 + with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] + test/e2e/framework/framework.go:652 +------------------------------ +{"msg":"PASSED [sig-node] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]","total":356,"completed":199,"skipped":4080,"failed":0} +SSSSSSS +------------------------------ +[sig-storage] EmptyDir volumes + volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/framework/framework.go:652 +[BeforeEach] [sig-storage] EmptyDir volumes + test/e2e/framework/framework.go:187 +STEP: Creating a kubernetes client +Sep 7 08:34:24.022: INFO: >>> kubeConfig: /tmp/kubeconfig-1156948534 +STEP: Building a namespace api object, basename emptydir +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[It] volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/framework/framework.go:652 +STEP: Creating a pod to test emptydir volume type on node default medium +Sep 7 08:34:24.063: INFO: Waiting up to 5m0s for pod "pod-85f3d4cd-5d9f-4da1-87ea-f2f47a097faf" in namespace "emptydir-6740" to be "Succeeded or Failed" +Sep 7 08:34:24.078: INFO: Pod "pod-85f3d4cd-5d9f-4da1-87ea-f2f47a097faf": Phase="Pending", Reason="", readiness=false. Elapsed: 15.084684ms +Sep 7 08:34:26.081: INFO: Pod "pod-85f3d4cd-5d9f-4da1-87ea-f2f47a097faf": Phase="Pending", Reason="", readiness=false. Elapsed: 2.018203001s +Sep 7 08:34:28.094: INFO: Pod "pod-85f3d4cd-5d9f-4da1-87ea-f2f47a097faf": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.031088193s +STEP: Saw pod success +Sep 7 08:34:28.094: INFO: Pod "pod-85f3d4cd-5d9f-4da1-87ea-f2f47a097faf" satisfied condition "Succeeded or Failed" +Sep 7 08:34:28.099: INFO: Trying to get logs from node 172.31.51.96 pod pod-85f3d4cd-5d9f-4da1-87ea-f2f47a097faf container test-container: +STEP: delete the pod +Sep 7 08:34:28.128: INFO: Waiting for pod pod-85f3d4cd-5d9f-4da1-87ea-f2f47a097faf to disappear +Sep 7 08:34:28.136: INFO: Pod pod-85f3d4cd-5d9f-4da1-87ea-f2f47a097faf no longer exists +[AfterEach] [sig-storage] EmptyDir volumes + test/e2e/framework/framework.go:188 +Sep 7 08:34:28.136: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "emptydir-6740" for this suite. +•{"msg":"PASSED [sig-storage] EmptyDir volumes volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":356,"completed":200,"skipped":4087,"failed":0} +SSSS +------------------------------ +[sig-scheduling] SchedulerPredicates [Serial] + validates that NodeSelector is respected if not matching [Conformance] + test/e2e/framework/framework.go:652 +[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] + test/e2e/framework/framework.go:187 +STEP: Creating a kubernetes client +Sep 7 08:34:28.144: INFO: >>> kubeConfig: /tmp/kubeconfig-1156948534 +STEP: Building a namespace api object, basename sched-pred +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] + test/e2e/scheduling/predicates.go:92 +Sep 7 08:34:28.193: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready +Sep 7 08:34:28.206: INFO: Waiting for terminating namespaces to be deleted... 
+Sep 7 08:34:28.209: INFO: +Logging pods the apiserver thinks is on node 172.31.51.96 before test +Sep 7 08:34:28.214: INFO: test-webserver-81375cf3-537a-4828-855c-0c51e3709a34 from container-probe-1962 started at 2022-09-07 08:33:24 +0000 UTC (1 container statuses recorded) +Sep 7 08:34:28.214: INFO: Container test-webserver ready: false, restart count 0 +Sep 7 08:34:28.214: INFO: calico-node-g8tpr from kube-system started at 2022-09-07 07:27:16 +0000 UTC (1 container statuses recorded) +Sep 7 08:34:28.214: INFO: Container calico-node ready: true, restart count 0 +Sep 7 08:34:28.214: INFO: node-local-dns-8rwpt from kube-system started at 2022-09-07 07:27:42 +0000 UTC (1 container statuses recorded) +Sep 7 08:34:28.214: INFO: Container node-cache ready: true, restart count 0 +Sep 7 08:34:28.214: INFO: sonobuoy from sonobuoy started at 2022-09-07 07:39:19 +0000 UTC (1 container statuses recorded) +Sep 7 08:34:28.214: INFO: Container kube-sonobuoy ready: true, restart count 0 +Sep 7 08:34:28.214: INFO: sonobuoy-e2e-job-2f855b96e04a42ee from sonobuoy started at 2022-09-07 07:39:27 +0000 UTC (2 container statuses recorded) +Sep 7 08:34:28.214: INFO: Container e2e ready: true, restart count 0 +Sep 7 08:34:28.214: INFO: Container sonobuoy-worker ready: true, restart count 0 +Sep 7 08:34:28.214: INFO: sonobuoy-systemd-logs-daemon-set-1241b5e1ea9447a9-kstch from sonobuoy started at 2022-09-07 07:39:27 +0000 UTC (2 container statuses recorded) +Sep 7 08:34:28.214: INFO: Container sonobuoy-worker ready: true, restart count 0 +Sep 7 08:34:28.214: INFO: Container systemd-logs ready: true, restart count 0 +Sep 7 08:34:28.214: INFO: +Logging pods the apiserver thinks is on node 172.31.51.97 before test +Sep 7 08:34:28.229: INFO: calico-kube-controllers-5c8bb696bb-tvl2c from kube-system started at 2022-09-07 07:27:16 +0000 UTC (1 container statuses recorded) +Sep 7 08:34:28.229: INFO: Container calico-kube-controllers ready: true, restart count 0 +Sep 7 08:34:28.229: INFO: 
calico-node-d87kb from kube-system started at 2022-09-07 07:27:16 +0000 UTC (1 container statuses recorded) +Sep 7 08:34:28.229: INFO: Container calico-node ready: true, restart count 0 +Sep 7 08:34:28.229: INFO: coredns-84b58f6b4-xcj7z from kube-system started at 2022-09-07 07:27:41 +0000 UTC (1 container statuses recorded) +Sep 7 08:34:28.229: INFO: Container coredns ready: true, restart count 0 +Sep 7 08:34:28.229: INFO: dashboard-metrics-scraper-864d79d497-bchwd from kube-system started at 2022-09-07 07:27:46 +0000 UTC (1 container statuses recorded) +Sep 7 08:34:28.229: INFO: Container dashboard-metrics-scraper ready: true, restart count 0 +Sep 7 08:34:28.229: INFO: kubernetes-dashboard-5fc74cf5c6-bsp7p from kube-system started at 2022-09-07 07:27:46 +0000 UTC (1 container statuses recorded) +Sep 7 08:34:28.229: INFO: Container kubernetes-dashboard ready: true, restart count 0 +Sep 7 08:34:28.229: INFO: metrics-server-69797698d4-hndhm from kube-system started at 2022-09-07 07:27:43 +0000 UTC (1 container statuses recorded) +Sep 7 08:34:28.229: INFO: Container metrics-server ready: true, restart count 0 +Sep 7 08:34:28.229: INFO: node-local-dns-28994 from kube-system started at 2022-09-07 07:27:42 +0000 UTC (1 container statuses recorded) +Sep 7 08:34:28.229: INFO: Container node-cache ready: true, restart count 0 +Sep 7 08:34:28.229: INFO: sonobuoy-systemd-logs-daemon-set-1241b5e1ea9447a9-svvzn from sonobuoy started at 2022-09-07 07:39:27 +0000 UTC (2 container statuses recorded) +Sep 7 08:34:28.229: INFO: Container sonobuoy-worker ready: true, restart count 0 +Sep 7 08:34:28.229: INFO: Container systemd-logs ready: true, restart count 0 +[It] validates that NodeSelector is respected if not matching [Conformance] + test/e2e/framework/framework.go:652 +STEP: Trying to schedule Pod with nonempty NodeSelector. 
+STEP: Considering event: +Type = [Warning], Name = [restricted-pod.171287050ef9425e], Reason = [FailedScheduling], Message = [0/2 nodes are available: 2 node(s) didn't match Pod's node affinity/selector. preemption: 0/2 nodes are available: 2 Preemption is not helpful for scheduling.] +[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] + test/e2e/framework/framework.go:188 +Sep 7 08:34:29.255: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "sched-pred-5996" for this suite. +[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] + test/e2e/scheduling/predicates.go:83 +•{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance]","total":356,"completed":201,"skipped":4091,"failed":0} +S +------------------------------ +[sig-apps] Job + should apply changes to a job status [Conformance] + test/e2e/framework/framework.go:652 +[BeforeEach] [sig-apps] Job + test/e2e/framework/framework.go:187 +STEP: Creating a kubernetes client +Sep 7 08:34:29.270: INFO: >>> kubeConfig: /tmp/kubeconfig-1156948534 +STEP: Building a namespace api object, basename job +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[It] should apply changes to a job status [Conformance] + test/e2e/framework/framework.go:652 +STEP: Creating a job +STEP: Ensure pods equal to paralellism count is attached to the job +STEP: patching /status +STEP: updating /status +STEP: get /status +[AfterEach] [sig-apps] Job + test/e2e/framework/framework.go:188 +Sep 7 08:34:33.367: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "job-169" for this suite. 
+•{"msg":"PASSED [sig-apps] Job should apply changes to a job status [Conformance]","total":356,"completed":202,"skipped":4092,"failed":0} +SS +------------------------------ +[sig-network] Services + should be able to create a functioning NodePort service [Conformance] + test/e2e/framework/framework.go:652 +[BeforeEach] [sig-network] Services + test/e2e/framework/framework.go:187 +STEP: Creating a kubernetes client +Sep 7 08:34:33.385: INFO: >>> kubeConfig: /tmp/kubeconfig-1156948534 +STEP: Building a namespace api object, basename services +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[BeforeEach] [sig-network] Services + test/e2e/network/service.go:758 +[It] should be able to create a functioning NodePort service [Conformance] + test/e2e/framework/framework.go:652 +STEP: creating service nodeport-test with type=NodePort in namespace services-7337 +STEP: creating replication controller nodeport-test in namespace services-7337 +I0907 08:34:33.459685 19 runners.go:193] Created replication controller with name: nodeport-test, namespace: services-7337, replica count: 2 +Sep 7 08:34:36.511: INFO: Creating new exec pod +I0907 08:34:36.511569 19 runners.go:193] nodeport-test Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady +Sep 7 08:34:39.550: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1156948534 --namespace=services-7337 exec execpodmskwv -- /bin/sh -x -c echo hostName | nc -v -t -w 2 nodeport-test 80' +Sep 7 08:34:39.775: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 nodeport-test 80\nConnection to nodeport-test 80 port [tcp/http] succeeded!\n" +Sep 7 08:34:39.775: INFO: stdout: "" +Sep 7 08:34:40.776: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1156948534 --namespace=services-7337 exec execpodmskwv -- /bin/sh -x -c echo hostName | nc -v -t -w 2 
nodeport-test 80' +Sep 7 08:34:40.964: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 nodeport-test 80\nConnection to nodeport-test 80 port [tcp/http] succeeded!\n" +Sep 7 08:34:40.964: INFO: stdout: "nodeport-test-phx7f" +Sep 7 08:34:40.964: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1156948534 --namespace=services-7337 exec execpodmskwv -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.68.86.243 80' +Sep 7 08:34:41.138: INFO: stderr: "+ nc -v -t -w 2 10.68.86.243 80\nConnection to 10.68.86.243 80 port [tcp/http] succeeded!\n+ echo hostName\n" +Sep 7 08:34:41.138: INFO: stdout: "nodeport-test-qt4gt" +Sep 7 08:34:41.138: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1156948534 --namespace=services-7337 exec execpodmskwv -- /bin/sh -x -c echo hostName | nc -v -t -w 2 172.31.51.96 30047' +Sep 7 08:34:41.317: INFO: stderr: "+ nc -v -t -w 2 172.31.51.96 30047\nConnection to 172.31.51.96 30047 port [tcp/*] succeeded!\n+ echo hostName\n" +Sep 7 08:34:41.317: INFO: stdout: "nodeport-test-qt4gt" +Sep 7 08:34:41.317: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1156948534 --namespace=services-7337 exec execpodmskwv -- /bin/sh -x -c echo hostName | nc -v -t -w 2 172.31.51.97 30047' +Sep 7 08:34:41.532: INFO: stderr: "+ nc -v -t -w 2 172.31.51.97 30047\n+ echo hostName\nConnection to 172.31.51.97 30047 port [tcp/*] succeeded!\n" +Sep 7 08:34:41.532: INFO: stdout: "nodeport-test-qt4gt" +[AfterEach] [sig-network] Services + test/e2e/framework/framework.go:188 +Sep 7 08:34:41.532: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "services-7337" for this suite. 
+[AfterEach] [sig-network] Services + test/e2e/network/service.go:762 + +• [SLOW TEST:8.161 seconds] +[sig-network] Services +test/e2e/network/common/framework.go:23 + should be able to create a functioning NodePort service [Conformance] + test/e2e/framework/framework.go:652 +------------------------------ +{"msg":"PASSED [sig-network] Services should be able to create a functioning NodePort service [Conformance]","total":356,"completed":203,"skipped":4094,"failed":0} +SSSSSS +------------------------------ +[sig-apps] Daemon set [Serial] + should update pod when spec was updated and update strategy is RollingUpdate [Conformance] + test/e2e/framework/framework.go:652 +[BeforeEach] [sig-apps] Daemon set [Serial] + test/e2e/framework/framework.go:187 +STEP: Creating a kubernetes client +Sep 7 08:34:41.545: INFO: >>> kubeConfig: /tmp/kubeconfig-1156948534 +STEP: Building a namespace api object, basename daemonsets +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[BeforeEach] [sig-apps] Daemon set [Serial] + test/e2e/apps/daemon_set.go:145 +[It] should update pod when spec was updated and update strategy is RollingUpdate [Conformance] + test/e2e/framework/framework.go:652 +Sep 7 08:34:41.605: INFO: Creating simple daemon set daemon-set +STEP: Check that daemon pods launch on every node of the cluster. 
+Sep 7 08:34:41.627: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 +Sep 7 08:34:41.627: INFO: Node 172.31.51.96 is running 0 daemon pod, expected 1 +Sep 7 08:34:42.641: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 +Sep 7 08:34:42.641: INFO: Node 172.31.51.96 is running 0 daemon pod, expected 1 +Sep 7 08:34:43.638: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 1 +Sep 7 08:34:43.638: INFO: Node 172.31.51.97 is running 0 daemon pod, expected 1 +Sep 7 08:34:44.638: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 2 +Sep 7 08:34:44.638: INFO: Number of running nodes: 2, number of available pods: 2 in daemonset daemon-set +STEP: Update daemon pods image. +STEP: Check that daemon pods images are updated. +Sep 7 08:34:44.678: INFO: Wrong image for pod: daemon-set-9wx8c. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.39, got: k8s.gcr.io/e2e-test-images/httpd:2.4.38-2. +Sep 7 08:34:44.678: INFO: Wrong image for pod: daemon-set-vsrb6. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.39, got: k8s.gcr.io/e2e-test-images/httpd:2.4.38-2. +Sep 7 08:34:45.713: INFO: Wrong image for pod: daemon-set-9wx8c. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.39, got: k8s.gcr.io/e2e-test-images/httpd:2.4.38-2. +Sep 7 08:34:46.751: INFO: Wrong image for pod: daemon-set-9wx8c. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.39, got: k8s.gcr.io/e2e-test-images/httpd:2.4.38-2. +Sep 7 08:34:46.751: INFO: Pod daemon-set-plfq6 is not available +Sep 7 08:34:47.723: INFO: Wrong image for pod: daemon-set-9wx8c. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.39, got: k8s.gcr.io/e2e-test-images/httpd:2.4.38-2. +Sep 7 08:34:47.723: INFO: Pod daemon-set-plfq6 is not available +Sep 7 08:34:48.730: INFO: Wrong image for pod: daemon-set-9wx8c. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.39, got: k8s.gcr.io/e2e-test-images/httpd:2.4.38-2. 
+Sep 7 08:34:48.730: INFO: Pod daemon-set-plfq6 is not available +Sep 7 08:34:49.767: INFO: Wrong image for pod: daemon-set-9wx8c. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.39, got: k8s.gcr.io/e2e-test-images/httpd:2.4.38-2. +Sep 7 08:34:49.767: INFO: Pod daemon-set-plfq6 is not available +Sep 7 08:34:50.788: INFO: Wrong image for pod: daemon-set-9wx8c. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.39, got: k8s.gcr.io/e2e-test-images/httpd:2.4.38-2. +Sep 7 08:34:50.789: INFO: Pod daemon-set-plfq6 is not available +Sep 7 08:34:53.725: INFO: Pod daemon-set-6jf45 is not available +STEP: Check that daemon pods are still running on every node of the cluster. +Sep 7 08:34:53.742: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 1 +Sep 7 08:34:53.742: INFO: Node 172.31.51.97 is running 0 daemon pod, expected 1 +Sep 7 08:34:54.751: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 2 +Sep 7 08:34:54.751: INFO: Number of running nodes: 2, number of available pods: 2 in daemonset daemon-set +[AfterEach] [sig-apps] Daemon set [Serial] + test/e2e/apps/daemon_set.go:110 +STEP: Deleting DaemonSet "daemon-set" +STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-5332, will wait for the garbage collector to delete the pods +Sep 7 08:34:54.821: INFO: Deleting DaemonSet.extensions daemon-set took: 5.877466ms +Sep 7 08:34:54.922: INFO: Terminating DaemonSet.extensions daemon-set pods took: 101.214603ms +Sep 7 08:34:59.133: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 +Sep 7 08:34:59.133: INFO: Number of running nodes: 0, number of available pods: 0 in daemonset daemon-set +Sep 7 08:34:59.137: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"resourceVersion":"19514"},"items":null} + +Sep 7 08:34:59.139: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"19514"},"items":null} + +[AfterEach] [sig-apps] Daemon set 
[Serial] + test/e2e/framework/framework.go:188 +Sep 7 08:34:59.147: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "daemonsets-5332" for this suite. + +• [SLOW TEST:17.609 seconds] +[sig-apps] Daemon set [Serial] +test/e2e/apps/framework.go:23 + should update pod when spec was updated and update strategy is RollingUpdate [Conformance] + test/e2e/framework/framework.go:652 +------------------------------ +{"msg":"PASSED [sig-apps] Daemon set [Serial] should update pod when spec was updated and update strategy is RollingUpdate [Conformance]","total":356,"completed":204,"skipped":4100,"failed":0} +SSSS +------------------------------ +[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + should mutate pod and apply defaults after mutation [Conformance] + test/e2e/framework/framework.go:652 +[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/framework/framework.go:187 +STEP: Creating a kubernetes client +Sep 7 08:34:59.154: INFO: >>> kubeConfig: /tmp/kubeconfig-1156948534 +STEP: Building a namespace api object, basename webhook +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/apimachinery/webhook.go:89 +STEP: Setting up server cert +STEP: Create role binding to let webhook read extension-apiserver-authentication +STEP: Deploying the webhook pod +STEP: Wait for the deployment to be ready +Sep 7 08:35:00.062: INFO: new replicaset for deployment "sample-webhook-deployment" is yet to be created +STEP: Deploying the webhook service +STEP: Verifying the service has paired with the endpoint +Sep 7 08:35:03.083: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 +[It] should mutate pod and apply defaults after mutation [Conformance] + test/e2e/framework/framework.go:652 +STEP: Registering the 
mutating pod webhook via the AdmissionRegistration API +STEP: create a pod that should be updated by the webhook +[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/framework/framework.go:188 +Sep 7 08:35:03.138: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "webhook-883" for this suite. +STEP: Destroying namespace "webhook-883-markers" for this suite. +[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/apimachinery/webhook.go:104 +•{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","total":356,"completed":205,"skipped":4104,"failed":0} +SSSSSSSSSS +------------------------------ +[sig-node] Secrets + should be consumable from pods in env vars [NodeConformance] [Conformance] + test/e2e/framework/framework.go:652 +[BeforeEach] [sig-node] Secrets + test/e2e/framework/framework.go:187 +STEP: Creating a kubernetes client +Sep 7 08:35:03.331: INFO: >>> kubeConfig: /tmp/kubeconfig-1156948534 +STEP: Building a namespace api object, basename secrets +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[It] should be consumable from pods in env vars [NodeConformance] [Conformance] + test/e2e/framework/framework.go:652 +STEP: Creating secret with name secret-test-000fd5b8-e685-4804-994a-296436074012 +STEP: Creating a pod to test consume secrets +Sep 7 08:35:03.429: INFO: Waiting up to 5m0s for pod "pod-secrets-8fba3982-a4bc-4423-9222-b03275b2aa21" in namespace "secrets-9618" to be "Succeeded or Failed" +Sep 7 08:35:03.452: INFO: Pod "pod-secrets-8fba3982-a4bc-4423-9222-b03275b2aa21": Phase="Pending", Reason="", readiness=false. Elapsed: 22.423366ms +Sep 7 08:35:05.467: INFO: Pod "pod-secrets-8fba3982-a4bc-4423-9222-b03275b2aa21": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.038024459s +Sep 7 08:35:07.477: INFO: Pod "pod-secrets-8fba3982-a4bc-4423-9222-b03275b2aa21": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.048027478s +STEP: Saw pod success +Sep 7 08:35:07.477: INFO: Pod "pod-secrets-8fba3982-a4bc-4423-9222-b03275b2aa21" satisfied condition "Succeeded or Failed" +Sep 7 08:35:07.481: INFO: Trying to get logs from node 172.31.51.97 pod pod-secrets-8fba3982-a4bc-4423-9222-b03275b2aa21 container secret-env-test: +STEP: delete the pod +Sep 7 08:35:07.534: INFO: Waiting for pod pod-secrets-8fba3982-a4bc-4423-9222-b03275b2aa21 to disappear +Sep 7 08:35:07.538: INFO: Pod pod-secrets-8fba3982-a4bc-4423-9222-b03275b2aa21 no longer exists +[AfterEach] [sig-node] Secrets + test/e2e/framework/framework.go:188 +Sep 7 08:35:07.538: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "secrets-9618" for this suite. +•{"msg":"PASSED [sig-node] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance]","total":356,"completed":206,"skipped":4114,"failed":0} +SSSSSSS +------------------------------ +[sig-node] Containers + should be able to override the image's default command and arguments [NodeConformance] [Conformance] + test/e2e/framework/framework.go:652 +[BeforeEach] [sig-node] Containers + test/e2e/framework/framework.go:187 +STEP: Creating a kubernetes client +Sep 7 08:35:07.547: INFO: >>> kubeConfig: /tmp/kubeconfig-1156948534 +STEP: Building a namespace api object, basename containers +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[It] should be able to override the image's default command and arguments [NodeConformance] [Conformance] + test/e2e/framework/framework.go:652 +STEP: Creating a pod to test override all +Sep 7 08:35:07.604: INFO: Waiting up to 5m0s for pod "client-containers-0989920c-3779-4eb4-a743-0dc79d6b3a89" in namespace "containers-5352" to be 
"Succeeded or Failed" +Sep 7 08:35:07.607: INFO: Pod "client-containers-0989920c-3779-4eb4-a743-0dc79d6b3a89": Phase="Pending", Reason="", readiness=false. Elapsed: 2.921957ms +Sep 7 08:35:09.627: INFO: Pod "client-containers-0989920c-3779-4eb4-a743-0dc79d6b3a89": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022875781s +Sep 7 08:35:11.634: INFO: Pod "client-containers-0989920c-3779-4eb4-a743-0dc79d6b3a89": Phase="Pending", Reason="", readiness=false. Elapsed: 4.029640514s +Sep 7 08:35:13.647: INFO: Pod "client-containers-0989920c-3779-4eb4-a743-0dc79d6b3a89": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.042761342s +STEP: Saw pod success +Sep 7 08:35:13.647: INFO: Pod "client-containers-0989920c-3779-4eb4-a743-0dc79d6b3a89" satisfied condition "Succeeded or Failed" +Sep 7 08:35:13.660: INFO: Trying to get logs from node 172.31.51.96 pod client-containers-0989920c-3779-4eb4-a743-0dc79d6b3a89 container agnhost-container: +STEP: delete the pod +Sep 7 08:35:13.701: INFO: Waiting for pod client-containers-0989920c-3779-4eb4-a743-0dc79d6b3a89 to disappear +Sep 7 08:35:13.705: INFO: Pod client-containers-0989920c-3779-4eb4-a743-0dc79d6b3a89 no longer exists +[AfterEach] [sig-node] Containers + test/e2e/framework/framework.go:188 +Sep 7 08:35:13.705: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "containers-5352" for this suite. 
+ +• [SLOW TEST:6.177 seconds] +[sig-node] Containers +test/e2e/common/node/framework.go:23 + should be able to override the image's default command and arguments [NodeConformance] [Conformance] + test/e2e/framework/framework.go:652 +------------------------------ +{"msg":"PASSED [sig-node] Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance]","total":356,"completed":207,"skipped":4121,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-apps] Daemon set [Serial] + should rollback without unnecessary restarts [Conformance] + test/e2e/framework/framework.go:652 +[BeforeEach] [sig-apps] Daemon set [Serial] + test/e2e/framework/framework.go:187 +STEP: Creating a kubernetes client +Sep 7 08:35:13.725: INFO: >>> kubeConfig: /tmp/kubeconfig-1156948534 +STEP: Building a namespace api object, basename daemonsets +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[BeforeEach] [sig-apps] Daemon set [Serial] + test/e2e/apps/daemon_set.go:145 +[It] should rollback without unnecessary restarts [Conformance] + test/e2e/framework/framework.go:652 +Sep 7 08:35:13.792: INFO: Create a RollingUpdate DaemonSet +Sep 7 08:35:13.798: INFO: Check that daemon pods launch on every node of the cluster +Sep 7 08:35:13.813: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 +Sep 7 08:35:13.813: INFO: Node 172.31.51.96 is running 0 daemon pod, expected 1 +Sep 7 08:35:14.842: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 +Sep 7 08:35:14.842: INFO: Node 172.31.51.96 is running 0 daemon pod, expected 1 +Sep 7 08:35:15.871: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 2 +Sep 7 08:35:15.871: INFO: Number of running nodes: 2, number of available pods: 2 in daemonset daemon-set +Sep 7 08:35:15.871: INFO: Update the DaemonSet to 
trigger a rollout +Sep 7 08:35:15.893: INFO: Updating DaemonSet daemon-set +Sep 7 08:35:18.991: INFO: Roll back the DaemonSet before rollout is complete +Sep 7 08:35:19.014: INFO: Updating DaemonSet daemon-set +Sep 7 08:35:19.014: INFO: Make sure DaemonSet rollback is complete +Sep 7 08:35:19.024: INFO: Wrong image for pod: daemon-set-fx998. Expected: k8s.gcr.io/e2e-test-images/httpd:2.4.38-2, got: foo:non-existent. +Sep 7 08:35:19.024: INFO: Pod daemon-set-fx998 is not available +Sep 7 08:35:23.040: INFO: Pod daemon-set-glf68 is not available +[AfterEach] [sig-apps] Daemon set [Serial] + test/e2e/apps/daemon_set.go:110 +STEP: Deleting DaemonSet "daemon-set" +STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-8669, will wait for the garbage collector to delete the pods +Sep 7 08:35:23.110: INFO: Deleting DaemonSet.extensions daemon-set took: 6.377979ms +Sep 7 08:35:23.210: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.176137ms +Sep 7 08:35:25.917: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 +Sep 7 08:35:25.917: INFO: Number of running nodes: 0, number of available pods: 0 in daemonset daemon-set +Sep 7 08:35:25.919: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"resourceVersion":"19854"},"items":null} + +Sep 7 08:35:25.921: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"19854"},"items":null} + +[AfterEach] [sig-apps] Daemon set [Serial] + test/e2e/framework/framework.go:188 +Sep 7 08:35:25.928: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "daemonsets-8669" for this suite. 
+ +• [SLOW TEST:12.210 seconds] +[sig-apps] Daemon set [Serial] +test/e2e/apps/framework.go:23 + should rollback without unnecessary restarts [Conformance] + test/e2e/framework/framework.go:652 +------------------------------ +{"msg":"PASSED [sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]","total":356,"completed":208,"skipped":4146,"failed":0} +SSSSSSSSS +------------------------------ +[sig-node] Security Context When creating a container with runAsUser + should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/framework/framework.go:652 +[BeforeEach] [sig-node] Security Context + test/e2e/framework/framework.go:187 +STEP: Creating a kubernetes client +Sep 7 08:35:25.935: INFO: >>> kubeConfig: /tmp/kubeconfig-1156948534 +STEP: Building a namespace api object, basename security-context-test +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[BeforeEach] [sig-node] Security Context + test/e2e/common/node/security_context.go:48 +[It] should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/framework/framework.go:652 +Sep 7 08:35:25.974: INFO: Waiting up to 5m0s for pod "busybox-user-65534-9fa080ae-b382-498d-bd33-3c4b9a2ed881" in namespace "security-context-test-5031" to be "Succeeded or Failed" +Sep 7 08:35:25.983: INFO: Pod "busybox-user-65534-9fa080ae-b382-498d-bd33-3c4b9a2ed881": Phase="Pending", Reason="", readiness=false. Elapsed: 9.075486ms +Sep 7 08:35:27.996: INFO: Pod "busybox-user-65534-9fa080ae-b382-498d-bd33-3c4b9a2ed881": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022068066s +Sep 7 08:35:30.020: INFO: Pod "busybox-user-65534-9fa080ae-b382-498d-bd33-3c4b9a2ed881": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.046077207s +Sep 7 08:35:30.020: INFO: Pod "busybox-user-65534-9fa080ae-b382-498d-bd33-3c4b9a2ed881" satisfied condition "Succeeded or Failed" +[AfterEach] [sig-node] Security Context + test/e2e/framework/framework.go:188 +Sep 7 08:35:30.020: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "security-context-test-5031" for this suite. +•{"msg":"PASSED [sig-node] Security Context When creating a container with runAsUser should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]","total":356,"completed":209,"skipped":4155,"failed":0} + +------------------------------ +[sig-storage] ConfigMap + should be consumable from pods in volume [NodeConformance] [Conformance] + test/e2e/framework/framework.go:652 +[BeforeEach] [sig-storage] ConfigMap + test/e2e/framework/framework.go:187 +STEP: Creating a kubernetes client +Sep 7 08:35:30.030: INFO: >>> kubeConfig: /tmp/kubeconfig-1156948534 +STEP: Building a namespace api object, basename configmap +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[It] should be consumable from pods in volume [NodeConformance] [Conformance] + test/e2e/framework/framework.go:652 +STEP: Creating configMap with name configmap-test-volume-df9db426-30cb-4aff-9ef6-01ed253aedaf +STEP: Creating a pod to test consume configMaps +Sep 7 08:35:30.110: INFO: Waiting up to 5m0s for pod "pod-configmaps-92f0c51d-eb90-4083-b9f9-a1252c5d1f1a" in namespace "configmap-8491" to be "Succeeded or Failed" +Sep 7 08:35:30.141: INFO: Pod "pod-configmaps-92f0c51d-eb90-4083-b9f9-a1252c5d1f1a": Phase="Pending", Reason="", readiness=false. Elapsed: 30.848704ms +Sep 7 08:35:32.151: INFO: Pod "pod-configmaps-92f0c51d-eb90-4083-b9f9-a1252c5d1f1a": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.041229406s +Sep 7 08:35:34.164: INFO: Pod "pod-configmaps-92f0c51d-eb90-4083-b9f9-a1252c5d1f1a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.053886127s +Sep 7 08:35:36.169: INFO: Pod "pod-configmaps-92f0c51d-eb90-4083-b9f9-a1252c5d1f1a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.059209793s +STEP: Saw pod success +Sep 7 08:35:36.169: INFO: Pod "pod-configmaps-92f0c51d-eb90-4083-b9f9-a1252c5d1f1a" satisfied condition "Succeeded or Failed" +Sep 7 08:35:36.174: INFO: Trying to get logs from node 172.31.51.96 pod pod-configmaps-92f0c51d-eb90-4083-b9f9-a1252c5d1f1a container agnhost-container: +STEP: delete the pod +Sep 7 08:35:36.195: INFO: Waiting for pod pod-configmaps-92f0c51d-eb90-4083-b9f9-a1252c5d1f1a to disappear +Sep 7 08:35:36.200: INFO: Pod pod-configmaps-92f0c51d-eb90-4083-b9f9-a1252c5d1f1a no longer exists +[AfterEach] [sig-storage] ConfigMap + test/e2e/framework/framework.go:188 +Sep 7 08:35:36.200: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "configmap-8491" for this suite. 
+ +• [SLOW TEST:6.180 seconds] +[sig-storage] ConfigMap +test/e2e/common/storage/framework.go:23 + should be consumable from pods in volume [NodeConformance] [Conformance] + test/e2e/framework/framework.go:652 +------------------------------ +{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":356,"completed":210,"skipped":4155,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-cli] Kubectl client Kubectl logs + should be able to retrieve and filter logs [Conformance] + test/e2e/framework/framework.go:652 +[BeforeEach] [sig-cli] Kubectl client + test/e2e/framework/framework.go:187 +STEP: Creating a kubernetes client +Sep 7 08:35:36.210: INFO: >>> kubeConfig: /tmp/kubeconfig-1156948534 +STEP: Building a namespace api object, basename kubectl +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[BeforeEach] [sig-cli] Kubectl client + test/e2e/kubectl/kubectl.go:245 +[BeforeEach] Kubectl logs + test/e2e/kubectl/kubectl.go:1412 +STEP: creating an pod +Sep 7 08:35:36.265: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1156948534 --namespace=kubectl-934 run logs-generator --image=k8s.gcr.io/e2e-test-images/agnhost:2.39 --restart=Never --pod-running-timeout=2m0s -- logs-generator --log-lines-total 100 --run-duration 20s' +Sep 7 08:35:36.390: INFO: stderr: "" +Sep 7 08:35:36.390: INFO: stdout: "pod/logs-generator created\n" +[It] should be able to retrieve and filter logs [Conformance] + test/e2e/framework/framework.go:652 +STEP: Waiting for log generator to start. 
+Sep 7 08:35:36.390: INFO: Waiting up to 5m0s for 1 pods to be running and ready, or succeeded: [logs-generator] +Sep 7 08:35:36.390: INFO: Waiting up to 5m0s for pod "logs-generator" in namespace "kubectl-934" to be "running and ready, or succeeded" +Sep 7 08:35:36.404: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 13.906515ms +Sep 7 08:35:38.416: INFO: Pod "logs-generator": Phase="Running", Reason="", readiness=true. Elapsed: 2.02578887s +Sep 7 08:35:38.416: INFO: Pod "logs-generator" satisfied condition "running and ready, or succeeded" +Sep 7 08:35:38.416: INFO: Wanted all 1 pods to be running and ready, or succeeded. Result: true. Pods: [logs-generator] +STEP: checking for a matching strings +Sep 7 08:35:38.416: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1156948534 --namespace=kubectl-934 logs logs-generator logs-generator' +Sep 7 08:35:38.530: INFO: stderr: "" +Sep 7 08:35:38.530: INFO: stdout: "I0907 08:35:37.609181 1 logs_generator.go:76] 0 GET /api/v1/namespaces/kube-system/pods/26lp 249\nI0907 08:35:37.809215 1 logs_generator.go:76] 1 PUT /api/v1/namespaces/default/pods/bvl 489\nI0907 08:35:38.011563 1 logs_generator.go:76] 2 POST /api/v1/namespaces/ns/pods/67s 407\nI0907 08:35:38.209848 1 logs_generator.go:76] 3 GET /api/v1/namespaces/kube-system/pods/zft 263\nI0907 08:35:38.409564 1 logs_generator.go:76] 4 GET /api/v1/namespaces/ns/pods/qvn8 288\n" +STEP: limiting log lines +Sep 7 08:35:38.530: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1156948534 --namespace=kubectl-934 logs logs-generator logs-generator --tail=1' +Sep 7 08:35:38.639: INFO: stderr: "" +Sep 7 08:35:38.639: INFO: stdout: "I0907 08:35:38.613340 1 logs_generator.go:76] 5 PUT /api/v1/namespaces/kube-system/pods/pwz 334\n" +Sep 7 08:35:38.639: INFO: got output "I0907 08:35:38.613340 1 logs_generator.go:76] 5 PUT /api/v1/namespaces/kube-system/pods/pwz 334\n" +STEP: limiting log bytes +Sep 7 08:35:38.639: 
INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1156948534 --namespace=kubectl-934 logs logs-generator logs-generator --limit-bytes=1' +Sep 7 08:35:38.836: INFO: stderr: "" +Sep 7 08:35:38.836: INFO: stdout: "I" +Sep 7 08:35:38.836: INFO: got output "I" +STEP: exposing timestamps +Sep 7 08:35:38.836: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1156948534 --namespace=kubectl-934 logs logs-generator logs-generator --tail=1 --timestamps' +Sep 7 08:35:39.100: INFO: stderr: "" +Sep 7 08:35:39.100: INFO: stdout: "2022-09-07T16:35:39.020742583+08:00 I0907 08:35:39.020284 1 logs_generator.go:76] 7 PUT /api/v1/namespaces/default/pods/6dh 507\n" +Sep 7 08:35:39.100: INFO: got output "2022-09-07T16:35:39.020742583+08:00 I0907 08:35:39.020284 1 logs_generator.go:76] 7 PUT /api/v1/namespaces/default/pods/6dh 507\n" +STEP: restricting to a time range +Sep 7 08:35:41.601: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1156948534 --namespace=kubectl-934 logs logs-generator logs-generator --since=1s' +Sep 7 08:35:41.699: INFO: stderr: "" +Sep 7 08:35:41.699: INFO: stdout: "I0907 08:35:40.809937 1 logs_generator.go:76] 16 GET /api/v1/namespaces/default/pods/frb 410\nI0907 08:35:41.010141 1 logs_generator.go:76] 17 GET /api/v1/namespaces/ns/pods/hq78 403\nI0907 08:35:41.209424 1 logs_generator.go:76] 18 GET /api/v1/namespaces/ns/pods/k76 355\nI0907 08:35:41.410184 1 logs_generator.go:76] 19 POST /api/v1/namespaces/default/pods/2tl4 392\nI0907 08:35:41.609508 1 logs_generator.go:76] 20 GET /api/v1/namespaces/ns/pods/9c7d 540\n" +Sep 7 08:35:41.699: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1156948534 --namespace=kubectl-934 logs logs-generator logs-generator --since=24h' +Sep 7 08:35:41.803: INFO: stderr: "" +Sep 7 08:35:41.803: INFO: stdout: "I0907 08:35:37.609181 1 logs_generator.go:76] 0 GET /api/v1/namespaces/kube-system/pods/26lp 249\nI0907 08:35:37.809215 1 logs_generator.go:76] 1 PUT 
/api/v1/namespaces/default/pods/bvl 489\nI0907 08:35:38.011563 1 logs_generator.go:76] 2 POST /api/v1/namespaces/ns/pods/67s 407\nI0907 08:35:38.209848 1 logs_generator.go:76] 3 GET /api/v1/namespaces/kube-system/pods/zft 263\nI0907 08:35:38.409564 1 logs_generator.go:76] 4 GET /api/v1/namespaces/ns/pods/qvn8 288\nI0907 08:35:38.613340 1 logs_generator.go:76] 5 PUT /api/v1/namespaces/kube-system/pods/pwz 334\nI0907 08:35:38.827612 1 logs_generator.go:76] 6 PUT /api/v1/namespaces/default/pods/c7qt 248\nI0907 08:35:39.020284 1 logs_generator.go:76] 7 PUT /api/v1/namespaces/default/pods/6dh 507\nI0907 08:35:39.209609 1 logs_generator.go:76] 8 POST /api/v1/namespaces/ns/pods/9tx 250\nI0907 08:35:39.409964 1 logs_generator.go:76] 9 GET /api/v1/namespaces/ns/pods/l8l5 333\nI0907 08:35:39.609256 1 logs_generator.go:76] 10 PUT /api/v1/namespaces/default/pods/sdl 251\nI0907 08:35:39.809273 1 logs_generator.go:76] 11 POST /api/v1/namespaces/default/pods/96r 490\nI0907 08:35:40.009629 1 logs_generator.go:76] 12 PUT /api/v1/namespaces/kube-system/pods/pnk 244\nI0907 08:35:40.210003 1 logs_generator.go:76] 13 POST /api/v1/namespaces/default/pods/9942 318\nI0907 08:35:40.409275 1 logs_generator.go:76] 14 POST /api/v1/namespaces/default/pods/sc4 505\nI0907 08:35:40.609654 1 logs_generator.go:76] 15 POST /api/v1/namespaces/kube-system/pods/wnw6 517\nI0907 08:35:40.809937 1 logs_generator.go:76] 16 GET /api/v1/namespaces/default/pods/frb 410\nI0907 08:35:41.010141 1 logs_generator.go:76] 17 GET /api/v1/namespaces/ns/pods/hq78 403\nI0907 08:35:41.209424 1 logs_generator.go:76] 18 GET /api/v1/namespaces/ns/pods/k76 355\nI0907 08:35:41.410184 1 logs_generator.go:76] 19 POST /api/v1/namespaces/default/pods/2tl4 392\nI0907 08:35:41.609508 1 logs_generator.go:76] 20 GET /api/v1/namespaces/ns/pods/9c7d 540\n" +[AfterEach] Kubectl logs + test/e2e/kubectl/kubectl.go:1417 +Sep 7 08:35:41.804: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1156948534 
--namespace=kubectl-934 delete pod logs-generator' +Sep 7 08:35:42.999: INFO: stderr: "" +Sep 7 08:35:42.999: INFO: stdout: "pod \"logs-generator\" deleted\n" +[AfterEach] [sig-cli] Kubectl client + test/e2e/framework/framework.go:188 +Sep 7 08:35:42.999: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "kubectl-934" for this suite. + +• [SLOW TEST:6.802 seconds] +[sig-cli] Kubectl client +test/e2e/kubectl/framework.go:23 + Kubectl logs + test/e2e/kubectl/kubectl.go:1409 + should be able to retrieve and filter logs [Conformance] + test/e2e/framework/framework.go:652 +------------------------------ +{"msg":"PASSED [sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]","total":356,"completed":211,"skipped":4181,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-scheduling] SchedulerPreemption [Serial] PreemptionExecutionPath + runs ReplicaSets to verify preemption running path [Conformance] + test/e2e/framework/framework.go:652 +[BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] + test/e2e/framework/framework.go:187 +STEP: Creating a kubernetes client +Sep 7 08:35:43.013: INFO: >>> kubeConfig: /tmp/kubeconfig-1156948534 +STEP: Building a namespace api object, basename sched-preemption +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] + test/e2e/scheduling/preemption.go:92 +Sep 7 08:35:43.070: INFO: Waiting up to 1m0s for all nodes to be ready +Sep 7 08:36:43.152: INFO: Waiting for terminating namespaces to be deleted... 
+[BeforeEach] PreemptionExecutionPath + test/e2e/framework/framework.go:187 +STEP: Creating a kubernetes client +Sep 7 08:36:43.168: INFO: >>> kubeConfig: /tmp/kubeconfig-1156948534 +STEP: Building a namespace api object, basename sched-preemption-path +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[BeforeEach] PreemptionExecutionPath + test/e2e/scheduling/preemption.go:496 +STEP: Finding an available node +STEP: Trying to launch a pod without a label to get a node which can launch it. +STEP: Explicitly delete pod here to free the resource it takes. +Sep 7 08:36:45.250: INFO: found a healthy node: 172.31.51.96 +[It] runs ReplicaSets to verify preemption running path [Conformance] + test/e2e/framework/framework.go:652 +Sep 7 08:36:53.362: INFO: pods created so far: [1 1 1] +Sep 7 08:36:53.362: INFO: length of pods created so far: 3 +Sep 7 08:36:57.404: INFO: pods created so far: [2 2 1] +[AfterEach] PreemptionExecutionPath + test/e2e/framework/framework.go:188 +Sep 7 08:37:04.406: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "sched-preemption-path-9672" for this suite. +[AfterEach] PreemptionExecutionPath + test/e2e/scheduling/preemption.go:470 +[AfterEach] [sig-scheduling] SchedulerPreemption [Serial] + test/e2e/framework/framework.go:188 +Sep 7 08:37:04.462: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "sched-preemption-535" for this suite. 
+[AfterEach] [sig-scheduling] SchedulerPreemption [Serial] + test/e2e/scheduling/preemption.go:80 + +• [SLOW TEST:81.507 seconds] +[sig-scheduling] SchedulerPreemption [Serial] +test/e2e/scheduling/framework.go:40 + PreemptionExecutionPath + test/e2e/scheduling/preemption.go:458 + runs ReplicaSets to verify preemption running path [Conformance] + test/e2e/framework/framework.go:652 +------------------------------ +{"msg":"PASSED [sig-scheduling] SchedulerPreemption [Serial] PreemptionExecutionPath runs ReplicaSets to verify preemption running path [Conformance]","total":356,"completed":212,"skipped":4262,"failed":0} +[sig-node] Sysctls [LinuxOnly] [NodeConformance] + should support sysctls [MinimumKubeletVersion:1.21] [Conformance] + test/e2e/framework/framework.go:652 +[BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeConformance] + test/e2e/common/node/sysctl.go:37 +[BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeConformance] + test/e2e/framework/framework.go:187 +STEP: Creating a kubernetes client +Sep 7 08:37:04.520: INFO: >>> kubeConfig: /tmp/kubeconfig-1156948534 +STEP: Building a namespace api object, basename sysctl +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeConformance] + test/e2e/common/node/sysctl.go:67 +[It] should support sysctls [MinimumKubeletVersion:1.21] [Conformance] + test/e2e/framework/framework.go:652 +STEP: Creating a pod with the kernel.shm_rmid_forced sysctl +STEP: Watching for error events or started pod +STEP: Waiting for pod completion +STEP: Checking that the pod succeeded +STEP: Getting logs from the pod +STEP: Checking that the sysctl is actually updated +[AfterEach] [sig-node] Sysctls [LinuxOnly] [NodeConformance] + test/e2e/framework/framework.go:188 +Sep 7 08:37:08.648: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "sysctl-475" for this suite. 
+•{"msg":"PASSED [sig-node] Sysctls [LinuxOnly] [NodeConformance] should support sysctls [MinimumKubeletVersion:1.21] [Conformance]","total":356,"completed":213,"skipped":4262,"failed":0} +SSSSSSSS +------------------------------ +[sig-apps] ReplicaSet + Replace and Patch tests [Conformance] + test/e2e/framework/framework.go:652 +[BeforeEach] [sig-apps] ReplicaSet + test/e2e/framework/framework.go:187 +STEP: Creating a kubernetes client +Sep 7 08:37:08.658: INFO: >>> kubeConfig: /tmp/kubeconfig-1156948534 +STEP: Building a namespace api object, basename replicaset +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[It] Replace and Patch tests [Conformance] + test/e2e/framework/framework.go:652 +Sep 7 08:37:08.721: INFO: Pod name sample-pod: Found 0 pods out of 1 +Sep 7 08:37:13.730: INFO: Pod name sample-pod: Found 1 pods out of 1 +STEP: ensuring each pod is running +STEP: Scaling up "test-rs" replicaset +Sep 7 08:37:13.751: INFO: Updating replica set "test-rs" +STEP: patching the ReplicaSet +Sep 7 08:37:13.762: INFO: observed ReplicaSet test-rs in namespace replicaset-7603 with ReadyReplicas 1, AvailableReplicas 1 +Sep 7 08:37:13.810: INFO: observed ReplicaSet test-rs in namespace replicaset-7603 with ReadyReplicas 1, AvailableReplicas 1 +Sep 7 08:37:13.827: INFO: observed ReplicaSet test-rs in namespace replicaset-7603 with ReadyReplicas 1, AvailableReplicas 1 +Sep 7 08:37:13.850: INFO: observed ReplicaSet test-rs in namespace replicaset-7603 with ReadyReplicas 1, AvailableReplicas 1 +Sep 7 08:37:15.407: INFO: observed ReplicaSet test-rs in namespace replicaset-7603 with ReadyReplicas 2, AvailableReplicas 2 +Sep 7 08:37:15.525: INFO: observed Replicaset test-rs in namespace replicaset-7603 with ReadyReplicas 3 found true +[AfterEach] [sig-apps] ReplicaSet + test/e2e/framework/framework.go:188 +Sep 7 08:37:15.525: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready 
+STEP: Destroying namespace "replicaset-7603" for this suite. + +• [SLOW TEST:6.891 seconds] +[sig-apps] ReplicaSet +test/e2e/apps/framework.go:23 + Replace and Patch tests [Conformance] + test/e2e/framework/framework.go:652 +------------------------------ +{"msg":"PASSED [sig-apps] ReplicaSet Replace and Patch tests [Conformance]","total":356,"completed":214,"skipped":4270,"failed":0} +SSSS +------------------------------ +[sig-auth] ServiceAccounts + should guarantee kube-root-ca.crt exist in any namespace [Conformance] + test/e2e/framework/framework.go:652 +[BeforeEach] [sig-auth] ServiceAccounts + test/e2e/framework/framework.go:187 +STEP: Creating a kubernetes client +Sep 7 08:37:15.549: INFO: >>> kubeConfig: /tmp/kubeconfig-1156948534 +STEP: Building a namespace api object, basename svcaccounts +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[It] should guarantee kube-root-ca.crt exist in any namespace [Conformance] + test/e2e/framework/framework.go:652 +Sep 7 08:37:15.598: INFO: Got root ca configmap in namespace "svcaccounts-2485" +Sep 7 08:37:15.604: INFO: Deleted root ca configmap in namespace "svcaccounts-2485" +STEP: waiting for a new root ca configmap created +Sep 7 08:37:16.109: INFO: Recreated root ca configmap in namespace "svcaccounts-2485" +Sep 7 08:37:16.118: INFO: Updated root ca configmap in namespace "svcaccounts-2485" +STEP: waiting for the root ca configmap reconciled +Sep 7 08:37:16.625: INFO: Reconciled root ca configmap in namespace "svcaccounts-2485" +[AfterEach] [sig-auth] ServiceAccounts + test/e2e/framework/framework.go:188 +Sep 7 08:37:16.625: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "svcaccounts-2485" for this suite. 
+•{"msg":"PASSED [sig-auth] ServiceAccounts should guarantee kube-root-ca.crt exist in any namespace [Conformance]","total":356,"completed":215,"skipped":4274,"failed":0} +SSSSSSSSSSSS +------------------------------ +[sig-network] Services + should be able to change the type from ExternalName to ClusterIP [Conformance] + test/e2e/framework/framework.go:652 +[BeforeEach] [sig-network] Services + test/e2e/framework/framework.go:187 +STEP: Creating a kubernetes client +Sep 7 08:37:16.652: INFO: >>> kubeConfig: /tmp/kubeconfig-1156948534 +STEP: Building a namespace api object, basename services +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[BeforeEach] [sig-network] Services + test/e2e/network/service.go:758 +[It] should be able to change the type from ExternalName to ClusterIP [Conformance] + test/e2e/framework/framework.go:652 +STEP: creating a service externalname-service with the type=ExternalName in namespace services-5400 +STEP: changing the ExternalName service to type=ClusterIP +STEP: creating replication controller externalname-service in namespace services-5400 +I0907 08:37:16.775652 19 runners.go:193] Created replication controller with name: externalname-service, namespace: services-5400, replica count: 2 +Sep 7 08:37:19.828: INFO: Creating new exec pod +I0907 08:37:19.828627 19 runners.go:193] externalname-service Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady +Sep 7 08:37:24.866: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1156948534 --namespace=services-5400 exec execpodhjkrv -- /bin/sh -x -c echo hostName | nc -v -t -w 2 externalname-service 80' +Sep 7 08:37:25.102: INFO: stderr: "+ nc -v -t -w 2 externalname-service 80\n+ echo hostName\nConnection to externalname-service 80 port [tcp/http] succeeded!\n" +Sep 7 08:37:25.102: INFO: stdout: 
"externalname-service-wx9jr" +Sep 7 08:37:25.102: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1156948534 --namespace=services-5400 exec execpodhjkrv -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.68.12.33 80' +Sep 7 08:37:25.285: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 10.68.12.33 80\nConnection to 10.68.12.33 80 port [tcp/http] succeeded!\n" +Sep 7 08:37:25.285: INFO: stdout: "externalname-service-vvsx2" +Sep 7 08:37:25.285: INFO: Cleaning up the ExternalName to ClusterIP test service +[AfterEach] [sig-network] Services + test/e2e/framework/framework.go:188 +Sep 7 08:37:25.317: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "services-5400" for this suite. +[AfterEach] [sig-network] Services + test/e2e/network/service.go:762 + +• [SLOW TEST:8.719 seconds] +[sig-network] Services +test/e2e/network/common/framework.go:23 + should be able to change the type from ExternalName to ClusterIP [Conformance] + test/e2e/framework/framework.go:652 +------------------------------ +{"msg":"PASSED [sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","total":356,"completed":216,"skipped":4286,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-storage] Projected configMap + should be consumable from pods in volume as non-root [NodeConformance] [Conformance] + test/e2e/framework/framework.go:652 +[BeforeEach] [sig-storage] Projected configMap + test/e2e/framework/framework.go:187 +STEP: Creating a kubernetes client +Sep 7 08:37:25.371: INFO: >>> kubeConfig: /tmp/kubeconfig-1156948534 +STEP: Building a namespace api object, basename projected +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance] + test/e2e/framework/framework.go:652 +STEP: Creating configMap 
with name projected-configmap-test-volume-54bce66a-753e-4f18-8967-c787b2ae20dc +STEP: Creating a pod to test consume configMaps +Sep 7 08:37:25.492: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-e8b5b3e0-10e3-4fdc-9992-68a7c6785111" in namespace "projected-7517" to be "Succeeded or Failed" +Sep 7 08:37:25.500: INFO: Pod "pod-projected-configmaps-e8b5b3e0-10e3-4fdc-9992-68a7c6785111": Phase="Pending", Reason="", readiness=false. Elapsed: 7.951258ms +Sep 7 08:37:27.509: INFO: Pod "pod-projected-configmaps-e8b5b3e0-10e3-4fdc-9992-68a7c6785111": Phase="Pending", Reason="", readiness=false. Elapsed: 2.016380423s +Sep 7 08:37:29.519: INFO: Pod "pod-projected-configmaps-e8b5b3e0-10e3-4fdc-9992-68a7c6785111": Phase="Pending", Reason="", readiness=false. Elapsed: 4.027055021s +Sep 7 08:37:31.538: INFO: Pod "pod-projected-configmaps-e8b5b3e0-10e3-4fdc-9992-68a7c6785111": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.046101361s +STEP: Saw pod success +Sep 7 08:37:31.538: INFO: Pod "pod-projected-configmaps-e8b5b3e0-10e3-4fdc-9992-68a7c6785111" satisfied condition "Succeeded or Failed" +Sep 7 08:37:31.546: INFO: Trying to get logs from node 172.31.51.96 pod pod-projected-configmaps-e8b5b3e0-10e3-4fdc-9992-68a7c6785111 container agnhost-container: +STEP: delete the pod +Sep 7 08:37:31.610: INFO: Waiting for pod pod-projected-configmaps-e8b5b3e0-10e3-4fdc-9992-68a7c6785111 to disappear +Sep 7 08:37:31.621: INFO: Pod pod-projected-configmaps-e8b5b3e0-10e3-4fdc-9992-68a7c6785111 no longer exists +[AfterEach] [sig-storage] Projected configMap + test/e2e/framework/framework.go:188 +Sep 7 08:37:31.621: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "projected-7517" for this suite. 
+ +• [SLOW TEST:6.274 seconds] +[sig-storage] Projected configMap +test/e2e/common/storage/framework.go:23 + should be consumable from pods in volume as non-root [NodeConformance] [Conformance] + test/e2e/framework/framework.go:652 +------------------------------ +{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]","total":356,"completed":217,"skipped":4313,"failed":0} +SS +------------------------------ +[sig-api-machinery] Watchers + should be able to restart watching from the last resource version observed by the previous watch [Conformance] + test/e2e/framework/framework.go:652 +[BeforeEach] [sig-api-machinery] Watchers + test/e2e/framework/framework.go:187 +STEP: Creating a kubernetes client +Sep 7 08:37:31.645: INFO: >>> kubeConfig: /tmp/kubeconfig-1156948534 +STEP: Building a namespace api object, basename watch +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[It] should be able to restart watching from the last resource version observed by the previous watch [Conformance] + test/e2e/framework/framework.go:652 +STEP: creating a watch on configmaps +STEP: creating a new configmap +STEP: modifying the configmap once +STEP: closing the watch once it receives two notifications +Sep 7 08:37:31.719: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-2270 b55c4ee6-440f-4ce3-9172-990f448a2ac0 20653 0 2022-09-07 08:37:31 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] [{e2e.test Update v1 2022-09-07 08:37:31 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} +Sep 7 08:37:31.719: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-2270 b55c4ee6-440f-4ce3-9172-990f448a2ac0 20654 0 2022-09-07 08:37:31 +0000 UTC 
map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] [{e2e.test Update v1 2022-09-07 08:37:31 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} +STEP: modifying the configmap a second time, while the watch is closed +STEP: creating a new watch on configmaps from the last resource version observed by the first watch +STEP: deleting the configmap +STEP: Expecting to observe notifications for all changes to the configmap since the first watch closed +Sep 7 08:37:31.732: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-2270 b55c4ee6-440f-4ce3-9172-990f448a2ac0 20655 0 2022-09-07 08:37:31 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] [{e2e.test Update v1 2022-09-07 08:37:31 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} +Sep 7 08:37:31.732: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-2270 b55c4ee6-440f-4ce3-9172-990f448a2ac0 20656 0 2022-09-07 08:37:31 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] [{e2e.test Update v1 2022-09-07 08:37:31 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} +[AfterEach] [sig-api-machinery] Watchers + test/e2e/framework/framework.go:188 +Sep 7 08:37:31.732: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "watch-2270" for this suite. 
+•{"msg":"PASSED [sig-api-machinery] Watchers should be able to restart watching from the last resource version observed by the previous watch [Conformance]","total":356,"completed":218,"skipped":4315,"failed":0} +SSS +------------------------------ +[sig-network] Services + should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance] + test/e2e/framework/framework.go:652 +[BeforeEach] [sig-network] Services + test/e2e/framework/framework.go:187 +STEP: Creating a kubernetes client +Sep 7 08:37:31.739: INFO: >>> kubeConfig: /tmp/kubeconfig-1156948534 +STEP: Building a namespace api object, basename services +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[BeforeEach] [sig-network] Services + test/e2e/network/service.go:758 +[It] should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance] + test/e2e/framework/framework.go:652 +STEP: creating service in namespace services-2146 +STEP: creating service affinity-clusterip in namespace services-2146 +STEP: creating replication controller affinity-clusterip in namespace services-2146 +I0907 08:37:31.791718 19 runners.go:193] Created replication controller with name: affinity-clusterip, namespace: services-2146, replica count: 3 +I0907 08:37:34.851635 19 runners.go:193] affinity-clusterip Pods: 3 out of 3 created, 1 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady +I0907 08:37:37.852066 19 runners.go:193] affinity-clusterip Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady +Sep 7 08:37:37.863: INFO: Creating new exec pod +Sep 7 08:37:40.893: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1156948534 --namespace=services-2146 exec execpod-affinity2m5nv -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80' +Sep 7 08:37:41.101: 
INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 affinity-clusterip 80\nConnection to affinity-clusterip 80 port [tcp/http] succeeded!\n" +Sep 7 08:37:41.101: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request" +Sep 7 08:37:41.101: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1156948534 --namespace=services-2146 exec execpod-affinity2m5nv -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.68.206.43 80' +Sep 7 08:37:41.338: INFO: stderr: "+ nc -v -t -w 2 10.68.206.43 80\n+ echo hostName\nConnection to 10.68.206.43 80 port [tcp/http] succeeded!\n" +Sep 7 08:37:41.338: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request" +Sep 7 08:37:41.338: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1156948534 --namespace=services-2146 exec execpod-affinity2m5nv -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://10.68.206.43:80/ ; done' +Sep 7 08:37:41.611: INFO: stderr: "+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.68.206.43:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.68.206.43:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.68.206.43:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.68.206.43:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.68.206.43:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.68.206.43:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.68.206.43:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.68.206.43:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.68.206.43:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.68.206.43:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.68.206.43:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.68.206.43:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.68.206.43:80/\n+ echo\n+ curl -q -s 
--connect-timeout 2 http://10.68.206.43:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.68.206.43:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.68.206.43:80/\n" +Sep 7 08:37:41.611: INFO: stdout: "\naffinity-clusterip-wsb45\naffinity-clusterip-wsb45\naffinity-clusterip-wsb45\naffinity-clusterip-wsb45\naffinity-clusterip-wsb45\naffinity-clusterip-wsb45\naffinity-clusterip-wsb45\naffinity-clusterip-wsb45\naffinity-clusterip-wsb45\naffinity-clusterip-wsb45\naffinity-clusterip-wsb45\naffinity-clusterip-wsb45\naffinity-clusterip-wsb45\naffinity-clusterip-wsb45\naffinity-clusterip-wsb45\naffinity-clusterip-wsb45" +Sep 7 08:37:41.611: INFO: Received response from host: affinity-clusterip-wsb45 +Sep 7 08:37:41.611: INFO: Received response from host: affinity-clusterip-wsb45 +Sep 7 08:37:41.611: INFO: Received response from host: affinity-clusterip-wsb45 +Sep 7 08:37:41.611: INFO: Received response from host: affinity-clusterip-wsb45 +Sep 7 08:37:41.611: INFO: Received response from host: affinity-clusterip-wsb45 +Sep 7 08:37:41.611: INFO: Received response from host: affinity-clusterip-wsb45 +Sep 7 08:37:41.611: INFO: Received response from host: affinity-clusterip-wsb45 +Sep 7 08:37:41.611: INFO: Received response from host: affinity-clusterip-wsb45 +Sep 7 08:37:41.611: INFO: Received response from host: affinity-clusterip-wsb45 +Sep 7 08:37:41.611: INFO: Received response from host: affinity-clusterip-wsb45 +Sep 7 08:37:41.611: INFO: Received response from host: affinity-clusterip-wsb45 +Sep 7 08:37:41.611: INFO: Received response from host: affinity-clusterip-wsb45 +Sep 7 08:37:41.611: INFO: Received response from host: affinity-clusterip-wsb45 +Sep 7 08:37:41.611: INFO: Received response from host: affinity-clusterip-wsb45 +Sep 7 08:37:41.611: INFO: Received response from host: affinity-clusterip-wsb45 +Sep 7 08:37:41.611: INFO: Received response from host: affinity-clusterip-wsb45 +Sep 7 08:37:41.611: INFO: Cleaning up the exec pod +STEP: deleting 
ReplicationController affinity-clusterip in namespace services-2146, will wait for the garbage collector to delete the pods +Sep 7 08:37:41.691: INFO: Deleting ReplicationController affinity-clusterip took: 7.495178ms +Sep 7 08:37:41.793: INFO: Terminating ReplicationController affinity-clusterip pods took: 101.919077ms +[AfterEach] [sig-network] Services + test/e2e/framework/framework.go:188 +Sep 7 08:37:45.138: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "services-2146" for this suite. +[AfterEach] [sig-network] Services + test/e2e/network/service.go:762 + +• [SLOW TEST:13.419 seconds] +[sig-network] Services +test/e2e/network/common/framework.go:23 + should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance] + test/e2e/framework/framework.go:652 +------------------------------ +{"msg":"PASSED [sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","total":356,"completed":219,"skipped":4318,"failed":0} +SSSSSS +------------------------------ +[sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] + should validate Statefulset Status endpoints [Conformance] + test/e2e/framework/framework.go:652 +[BeforeEach] [sig-apps] StatefulSet + test/e2e/framework/framework.go:187 +STEP: Creating a kubernetes client +Sep 7 08:37:45.159: INFO: >>> kubeConfig: /tmp/kubeconfig-1156948534 +STEP: Building a namespace api object, basename statefulset +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[BeforeEach] [sig-apps] StatefulSet + test/e2e/apps/statefulset.go:96 +[BeforeEach] Basic StatefulSet functionality [StatefulSetBasic] + test/e2e/apps/statefulset.go:111 +STEP: Creating service test in namespace statefulset-8186 +[It] should validate Statefulset Status endpoints [Conformance] + test/e2e/framework/framework.go:652 +STEP: 
Creating statefulset ss in namespace statefulset-8186 +Sep 7 08:37:45.246: INFO: Found 0 stateful pods, waiting for 1 +Sep 7 08:37:55.261: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true +STEP: Patch Statefulset to include a label +STEP: Getting /status +Sep 7 08:37:55.287: INFO: StatefulSet ss has Conditions: []v1.StatefulSetCondition(nil) +STEP: updating the StatefulSet Status +Sep 7 08:37:55.298: INFO: updatedStatus.Conditions: []v1.StatefulSetCondition{v1.StatefulSetCondition{Type:"StatusUpdate", Status:"True", LastTransitionTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Reason:"E2E", Message:"Set from e2e test"}} +STEP: watching for the statefulset status to be updated +Sep 7 08:37:55.301: INFO: Observed &StatefulSet event: ADDED +Sep 7 08:37:55.301: INFO: Found Statefulset ss in namespace statefulset-8186 with labels: map[e2e:testing] annotations: map[] & Conditions: {StatusUpdate True 0001-01-01 00:00:00 +0000 UTC E2E Set from e2e test} +Sep 7 08:37:55.301: INFO: Statefulset ss has an updated status +STEP: patching the Statefulset Status +Sep 7 08:37:55.301: INFO: Patch payload: {"status":{"conditions":[{"type":"StatusPatched","status":"True"}]}} +Sep 7 08:37:55.310: INFO: Patched status conditions: []v1.StatefulSetCondition{v1.StatefulSetCondition{Type:"StatusPatched", Status:"True", LastTransitionTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Reason:"", Message:""}} +STEP: watching for the Statefulset status to be patched +Sep 7 08:37:55.314: INFO: Observed &StatefulSet event: ADDED +[AfterEach] Basic StatefulSet functionality [StatefulSetBasic] + test/e2e/apps/statefulset.go:122 +Sep 7 08:37:55.314: INFO: Deleting all statefulset in ns statefulset-8186 +Sep 7 08:37:55.317: INFO: Scaling statefulset ss to 0 +Sep 7 08:38:05.345: INFO: Waiting for statefulset status.replicas updated to 0 +Sep 7 08:38:05.349: INFO: Deleting statefulset ss +[AfterEach] [sig-apps] StatefulSet + 
test/e2e/framework/framework.go:188 +Sep 7 08:38:05.385: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "statefulset-8186" for this suite. + +• [SLOW TEST:20.255 seconds] +[sig-apps] StatefulSet +test/e2e/apps/framework.go:23 + Basic StatefulSet functionality [StatefulSetBasic] + test/e2e/apps/statefulset.go:101 + should validate Statefulset Status endpoints [Conformance] + test/e2e/framework/framework.go:652 +------------------------------ +{"msg":"PASSED [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should validate Statefulset Status endpoints [Conformance]","total":356,"completed":220,"skipped":4324,"failed":0} +SSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + listing validating webhooks should work [Conformance] + test/e2e/framework/framework.go:652 +[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/framework/framework.go:187 +STEP: Creating a kubernetes client +Sep 7 08:38:05.414: INFO: >>> kubeConfig: /tmp/kubeconfig-1156948534 +STEP: Building a namespace api object, basename webhook +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/apimachinery/webhook.go:89 +STEP: Setting up server cert +STEP: Create role binding to let webhook read extension-apiserver-authentication +STEP: Deploying the webhook pod +STEP: Wait for the deployment to be ready +Sep 7 08:38:06.390: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:0, Replicas:0, UpdatedReplicas:0, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:0, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2022, time.September, 7, 8, 38, 6, 0, time.Local), 
LastTransitionTime:time.Date(2022, time.September, 7, 8, 38, 6, 0, time.Local), Reason:"NewReplicaSetCreated", Message:"Created new replica set \"sample-webhook-deployment-68c7bd4684\""}}, CollisionCount:(*int32)(nil)} +STEP: Deploying the webhook service +STEP: Verifying the service has paired with the endpoint +Sep 7 08:38:09.420: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 +[It] listing validating webhooks should work [Conformance] + test/e2e/framework/framework.go:652 +STEP: Listing all of the created validation webhooks +STEP: Creating a configMap that does not comply to the validation webhook rules +STEP: Deleting the collection of validation webhooks +STEP: Creating a configMap that does not comply to the validation webhook rules +[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/framework/framework.go:188 +Sep 7 08:38:09.816: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "webhook-538" for this suite. +STEP: Destroying namespace "webhook-538-markers" for this suite. 
+[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/apimachinery/webhook.go:104 +•{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance]","total":356,"completed":221,"skipped":4344,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + should be able to deny pod and configmap creation [Conformance] + test/e2e/framework/framework.go:652 +[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/framework/framework.go:187 +STEP: Creating a kubernetes client +Sep 7 08:38:09.974: INFO: >>> kubeConfig: /tmp/kubeconfig-1156948534 +STEP: Building a namespace api object, basename webhook +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/apimachinery/webhook.go:89 +STEP: Setting up server cert +STEP: Create role binding to let webhook read extension-apiserver-authentication +STEP: Deploying the webhook pod +STEP: Wait for the deployment to be ready +Sep 7 08:38:10.904: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set +Sep 7 08:38:12.955: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2022, time.September, 7, 8, 38, 10, 0, time.Local), LastTransitionTime:time.Date(2022, time.September, 7, 8, 38, 10, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", 
LastUpdateTime:time.Date(2022, time.September, 7, 8, 38, 11, 0, time.Local), LastTransitionTime:time.Date(2022, time.September, 7, 8, 38, 10, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-68c7bd4684\" is progressing."}}, CollisionCount:(*int32)(nil)} +STEP: Deploying the webhook service +STEP: Verifying the service has paired with the endpoint +Sep 7 08:38:16.016: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 +[It] should be able to deny pod and configmap creation [Conformance] + test/e2e/framework/framework.go:652 +STEP: Registering the webhook via the AdmissionRegistration API +STEP: create a pod that should be denied by the webhook +STEP: create a pod that causes the webhook to hang +STEP: create a configmap that should be denied by the webhook +STEP: create a configmap that should be admitted by the webhook +STEP: update (PUT) the admitted configmap to a non-compliant one should be rejected by the webhook +STEP: update (PATCH) the admitted configmap to a non-compliant one should be rejected by the webhook +STEP: create a namespace that bypass the webhook +STEP: create a configmap that violates the webhook policy but is in a whitelisted namespace +[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/framework/framework.go:188 +Sep 7 08:38:26.261: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "webhook-6892" for this suite. +STEP: Destroying namespace "webhook-6892-markers" for this suite. 
+[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/apimachinery/webhook.go:104 + +• [SLOW TEST:16.364 seconds] +[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] +test/e2e/apimachinery/framework.go:23 + should be able to deny pod and configmap creation [Conformance] + test/e2e/framework/framework.go:652 +------------------------------ +{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","total":356,"completed":222,"skipped":4406,"failed":0} +SSSSSSSSSSSSSSSSS +------------------------------ +[sig-storage] Subpath Atomic writer volumes + should support subpaths with secret pod [Conformance] + test/e2e/framework/framework.go:652 +[BeforeEach] [sig-storage] Subpath + test/e2e/framework/framework.go:187 +STEP: Creating a kubernetes client +Sep 7 08:38:26.338: INFO: >>> kubeConfig: /tmp/kubeconfig-1156948534 +STEP: Building a namespace api object, basename subpath +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[BeforeEach] Atomic writer volumes + test/e2e/storage/subpath.go:40 +STEP: Setting up data +[It] should support subpaths with secret pod [Conformance] + test/e2e/framework/framework.go:652 +STEP: Creating pod pod-subpath-test-secret-kh2s +STEP: Creating a pod to test atomic-volume-subpath +Sep 7 08:38:26.503: INFO: Waiting up to 5m0s for pod "pod-subpath-test-secret-kh2s" in namespace "subpath-1383" to be "Succeeded or Failed" +Sep 7 08:38:26.512: INFO: Pod "pod-subpath-test-secret-kh2s": Phase="Pending", Reason="", readiness=false. Elapsed: 8.835067ms +Sep 7 08:38:28.530: INFO: Pod "pod-subpath-test-secret-kh2s": Phase="Pending", Reason="", readiness=false. Elapsed: 2.026177538s +Sep 7 08:38:30.541: INFO: Pod "pod-subpath-test-secret-kh2s": Phase="Running", Reason="", readiness=true. 
Elapsed: 4.037695189s +Sep 7 08:38:32.547: INFO: Pod "pod-subpath-test-secret-kh2s": Phase="Running", Reason="", readiness=true. Elapsed: 6.043371328s +Sep 7 08:38:34.559: INFO: Pod "pod-subpath-test-secret-kh2s": Phase="Running", Reason="", readiness=true. Elapsed: 8.055740163s +Sep 7 08:38:36.567: INFO: Pod "pod-subpath-test-secret-kh2s": Phase="Running", Reason="", readiness=true. Elapsed: 10.063697816s +Sep 7 08:38:38.575: INFO: Pod "pod-subpath-test-secret-kh2s": Phase="Running", Reason="", readiness=true. Elapsed: 12.07184123s +Sep 7 08:38:40.589: INFO: Pod "pod-subpath-test-secret-kh2s": Phase="Running", Reason="", readiness=true. Elapsed: 14.08537046s +Sep 7 08:38:42.596: INFO: Pod "pod-subpath-test-secret-kh2s": Phase="Running", Reason="", readiness=true. Elapsed: 16.092271724s +Sep 7 08:38:44.618: INFO: Pod "pod-subpath-test-secret-kh2s": Phase="Running", Reason="", readiness=true. Elapsed: 18.114927074s +Sep 7 08:38:46.626: INFO: Pod "pod-subpath-test-secret-kh2s": Phase="Running", Reason="", readiness=true. Elapsed: 20.122860477s +Sep 7 08:38:48.636: INFO: Pod "pod-subpath-test-secret-kh2s": Phase="Running", Reason="", readiness=true. Elapsed: 22.132298212s +Sep 7 08:38:50.649: INFO: Pod "pod-subpath-test-secret-kh2s": Phase="Running", Reason="", readiness=false. Elapsed: 24.145959638s +Sep 7 08:38:52.654: INFO: Pod "pod-subpath-test-secret-kh2s": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 26.150995536s +STEP: Saw pod success +Sep 7 08:38:52.654: INFO: Pod "pod-subpath-test-secret-kh2s" satisfied condition "Succeeded or Failed" +Sep 7 08:38:52.658: INFO: Trying to get logs from node 172.31.51.96 pod pod-subpath-test-secret-kh2s container test-container-subpath-secret-kh2s: +STEP: delete the pod +Sep 7 08:38:52.677: INFO: Waiting for pod pod-subpath-test-secret-kh2s to disappear +Sep 7 08:38:52.682: INFO: Pod pod-subpath-test-secret-kh2s no longer exists +STEP: Deleting pod pod-subpath-test-secret-kh2s +Sep 7 08:38:52.682: INFO: Deleting pod "pod-subpath-test-secret-kh2s" in namespace "subpath-1383" +[AfterEach] [sig-storage] Subpath + test/e2e/framework/framework.go:188 +Sep 7 08:38:52.685: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "subpath-1383" for this suite. + +• [SLOW TEST:26.359 seconds] +[sig-storage] Subpath +test/e2e/storage/utils/framework.go:23 + Atomic writer volumes + test/e2e/storage/subpath.go:36 + should support subpaths with secret pod [Conformance] + test/e2e/framework/framework.go:652 +------------------------------ +{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with secret pod [Conformance]","total":356,"completed":223,"skipped":4423,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-auth] ServiceAccounts + ServiceAccountIssuerDiscovery should support OIDC discovery of service account issuer [Conformance] + test/e2e/framework/framework.go:652 +[BeforeEach] [sig-auth] ServiceAccounts + test/e2e/framework/framework.go:187 +STEP: Creating a kubernetes client +Sep 7 08:38:52.697: INFO: >>> kubeConfig: /tmp/kubeconfig-1156948534 +STEP: Building a namespace api object, basename svcaccounts +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[It] ServiceAccountIssuerDiscovery should support OIDC discovery of service account issuer 
[Conformance] + test/e2e/framework/framework.go:652 +Sep 7 08:38:52.747: INFO: created pod +Sep 7 08:38:52.747: INFO: Waiting up to 5m0s for pod "oidc-discovery-validator" in namespace "svcaccounts-8133" to be "Succeeded or Failed" +Sep 7 08:38:52.763: INFO: Pod "oidc-discovery-validator": Phase="Pending", Reason="", readiness=false. Elapsed: 15.56369ms +Sep 7 08:38:54.770: INFO: Pod "oidc-discovery-validator": Phase="Pending", Reason="", readiness=false. Elapsed: 2.02293755s +Sep 7 08:38:56.777: INFO: Pod "oidc-discovery-validator": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.03014844s +STEP: Saw pod success +Sep 7 08:38:56.778: INFO: Pod "oidc-discovery-validator" satisfied condition "Succeeded or Failed" +Sep 7 08:39:26.779: INFO: polling logs +Sep 7 08:39:26.787: INFO: Pod logs: +I0907 08:38:53.794861 1 log.go:195] OK: Got token +I0907 08:38:53.794899 1 log.go:195] validating with in-cluster discovery +I0907 08:38:53.795406 1 log.go:195] OK: got issuer https://kubernetes.default.svc +I0907 08:38:53.795433 1 log.go:195] Full, not-validated claims: +openidmetadata.claims{Claims:jwt.Claims{Issuer:"https://kubernetes.default.svc", Subject:"system:serviceaccount:svcaccounts-8133:default", Audience:jwt.Audience{"oidc-discovery-test"}, Expiry:1662540532, NotBefore:1662539932, IssuedAt:1662539932, ID:""}, Kubernetes:openidmetadata.kubeClaims{Namespace:"svcaccounts-8133", ServiceAccount:openidmetadata.kubeName{Name:"default", UID:"5d68f38c-3f98-4aa4-9d34-585e39eb742e"}}} +I0907 08:38:53.822702 1 log.go:195] OK: Constructed OIDC provider for issuer https://kubernetes.default.svc +I0907 08:38:53.829627 1 log.go:195] OK: Validated signature on JWT +I0907 08:38:53.829722 1 log.go:195] OK: Got valid claims from token! 
+I0907 08:38:53.829762 1 log.go:195] Full, validated claims: +&openidmetadata.claims{Claims:jwt.Claims{Issuer:"https://kubernetes.default.svc", Subject:"system:serviceaccount:svcaccounts-8133:default", Audience:jwt.Audience{"oidc-discovery-test"}, Expiry:1662540532, NotBefore:1662539932, IssuedAt:1662539932, ID:""}, Kubernetes:openidmetadata.kubeClaims{Namespace:"svcaccounts-8133", ServiceAccount:openidmetadata.kubeName{Name:"default", UID:"5d68f38c-3f98-4aa4-9d34-585e39eb742e"}}} + +Sep 7 08:39:26.787: INFO: completed pod +[AfterEach] [sig-auth] ServiceAccounts + test/e2e/framework/framework.go:188 +Sep 7 08:39:26.797: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "svcaccounts-8133" for this suite. + +• [SLOW TEST:34.108 seconds] +[sig-auth] ServiceAccounts +test/e2e/auth/framework.go:23 + ServiceAccountIssuerDiscovery should support OIDC discovery of service account issuer [Conformance] + test/e2e/framework/framework.go:652 +------------------------------ +{"msg":"PASSED [sig-auth] ServiceAccounts ServiceAccountIssuerDiscovery should support OIDC discovery of service account issuer [Conformance]","total":356,"completed":224,"skipped":4449,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-apps] Daemon set [Serial] + should run and stop complex daemon [Conformance] + test/e2e/framework/framework.go:652 +[BeforeEach] [sig-apps] Daemon set [Serial] + test/e2e/framework/framework.go:187 +STEP: Creating a kubernetes client +Sep 7 08:39:26.806: INFO: >>> kubeConfig: /tmp/kubeconfig-1156948534 +STEP: Building a namespace api object, basename daemonsets +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[BeforeEach] [sig-apps] Daemon set [Serial] + test/e2e/apps/daemon_set.go:145 +[It] should run and stop complex daemon [Conformance] + test/e2e/framework/framework.go:652 +Sep 7 08:39:26.880: INFO: Creating 
daemon "daemon-set" with a node selector +STEP: Initially, daemon pods should not be running on any nodes. +Sep 7 08:39:26.890: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 +Sep 7 08:39:26.890: INFO: Number of running nodes: 0, number of available pods: 0 in daemonset daemon-set +STEP: Change node label to blue, check that daemon pod is launched. +Sep 7 08:39:26.930: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 +Sep 7 08:39:26.930: INFO: Node 172.31.51.97 is running 0 daemon pod, expected 1 +Sep 7 08:39:27.970: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 +Sep 7 08:39:27.970: INFO: Node 172.31.51.97 is running 0 daemon pod, expected 1 +Sep 7 08:39:28.937: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 1 +Sep 7 08:39:28.937: INFO: Number of running nodes: 1, number of available pods: 1 in daemonset daemon-set +STEP: Update the node label to green, and wait for daemons to be unscheduled +Sep 7 08:39:28.974: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 +Sep 7 08:39:28.974: INFO: Number of running nodes: 0, number of available pods: 0 in daemonset daemon-set +STEP: Update DaemonSet node selector to green, and change its update strategy to RollingUpdate +Sep 7 08:39:28.989: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 +Sep 7 08:39:28.989: INFO: Node 172.31.51.97 is running 0 daemon pod, expected 1 +Sep 7 08:39:30.000: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 +Sep 7 08:39:30.000: INFO: Node 172.31.51.97 is running 0 daemon pod, expected 1 +Sep 7 08:39:30.997: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 +Sep 7 08:39:30.997: INFO: Node 172.31.51.97 is running 0 daemon pod, expected 1 +Sep 7 08:39:32.002: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 +Sep 7 
08:39:32.002: INFO: Node 172.31.51.97 is running 0 daemon pod, expected 1 +Sep 7 08:39:32.996: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 1 +Sep 7 08:39:32.996: INFO: Number of running nodes: 1, number of available pods: 1 in daemonset daemon-set +[AfterEach] [sig-apps] Daemon set [Serial] + test/e2e/apps/daemon_set.go:110 +STEP: Deleting DaemonSet "daemon-set" +STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-1465, will wait for the garbage collector to delete the pods +Sep 7 08:39:33.068: INFO: Deleting DaemonSet.extensions daemon-set took: 10.538813ms +Sep 7 08:39:33.169: INFO: Terminating DaemonSet.extensions daemon-set pods took: 101.052043ms +Sep 7 08:39:35.779: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 +Sep 7 08:39:35.779: INFO: Number of running nodes: 0, number of available pods: 0 in daemonset daemon-set +Sep 7 08:39:35.783: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"resourceVersion":"21408"},"items":null} + +Sep 7 08:39:35.786: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"21408"},"items":null} + +[AfterEach] [sig-apps] Daemon set [Serial] + test/e2e/framework/framework.go:188 +Sep 7 08:39:35.803: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "daemonsets-1465" for this suite. 
+ +• [SLOW TEST:9.010 seconds] +[sig-apps] Daemon set [Serial] +test/e2e/apps/framework.go:23 + should run and stop complex daemon [Conformance] + test/e2e/framework/framework.go:652 +------------------------------ +{"msg":"PASSED [sig-apps] Daemon set [Serial] should run and stop complex daemon [Conformance]","total":356,"completed":225,"skipped":4477,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition + listing custom resource definition objects works [Conformance] + test/e2e/framework/framework.go:652 +[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] + test/e2e/framework/framework.go:187 +STEP: Creating a kubernetes client +Sep 7 08:39:35.816: INFO: >>> kubeConfig: /tmp/kubeconfig-1156948534 +STEP: Building a namespace api object, basename custom-resource-definition +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[It] listing custom resource definition objects works [Conformance] + test/e2e/framework/framework.go:652 +Sep 7 08:39:35.859: INFO: >>> kubeConfig: /tmp/kubeconfig-1156948534 +[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] + test/e2e/framework/framework.go:188 +Sep 7 08:39:41.890: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "custom-resource-definition-9300" for this suite. 
+ +• [SLOW TEST:6.083 seconds] +[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] +test/e2e/apimachinery/framework.go:23 + Simple CustomResourceDefinition + test/e2e/apimachinery/custom_resource_definition.go:50 + listing custom resource definition objects works [Conformance] + test/e2e/framework/framework.go:652 +------------------------------ +{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition listing custom resource definition objects works [Conformance]","total":356,"completed":226,"skipped":4513,"failed":0} +SSSSSSSSSSSS +------------------------------ +[sig-node] Kubelet when scheduling a busybox command that always fails in a pod + should be possible to delete [NodeConformance] [Conformance] + test/e2e/framework/framework.go:652 +[BeforeEach] [sig-node] Kubelet + test/e2e/framework/framework.go:187 +STEP: Creating a kubernetes client +Sep 7 08:39:41.899: INFO: >>> kubeConfig: /tmp/kubeconfig-1156948534 +STEP: Building a namespace api object, basename kubelet-test +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[BeforeEach] [sig-node] Kubelet + test/e2e/common/node/kubelet.go:40 +[BeforeEach] when scheduling a busybox command that always fails in a pod + test/e2e/common/node/kubelet.go:84 +[It] should be possible to delete [NodeConformance] [Conformance] + test/e2e/framework/framework.go:652 +[AfterEach] [sig-node] Kubelet + test/e2e/framework/framework.go:188 +Sep 7 08:39:41.978: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "kubelet-test-4294" for this suite. 
+•{"msg":"PASSED [sig-node] Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete [NodeConformance] [Conformance]","total":356,"completed":227,"skipped":4525,"failed":0} +S +------------------------------ +[sig-storage] Projected secret + should be consumable from pods in volume [NodeConformance] [Conformance] + test/e2e/framework/framework.go:652 +[BeforeEach] [sig-storage] Projected secret + test/e2e/framework/framework.go:187 +STEP: Creating a kubernetes client +Sep 7 08:39:42.045: INFO: >>> kubeConfig: /tmp/kubeconfig-1156948534 +STEP: Building a namespace api object, basename projected +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[It] should be consumable from pods in volume [NodeConformance] [Conformance] + test/e2e/framework/framework.go:652 +STEP: Creating projection with secret that has name projected-secret-test-bd8e36e8-1abb-4331-9f2c-e278299a2b6c +STEP: Creating a pod to test consume secrets +Sep 7 08:39:42.167: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-1346f589-0b9f-4b89-adb4-6e829656ea8d" in namespace "projected-3154" to be "Succeeded or Failed" +Sep 7 08:39:42.203: INFO: Pod "pod-projected-secrets-1346f589-0b9f-4b89-adb4-6e829656ea8d": Phase="Pending", Reason="", readiness=false. Elapsed: 35.88364ms +Sep 7 08:39:44.214: INFO: Pod "pod-projected-secrets-1346f589-0b9f-4b89-adb4-6e829656ea8d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.046873947s +Sep 7 08:39:46.219: INFO: Pod "pod-projected-secrets-1346f589-0b9f-4b89-adb4-6e829656ea8d": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.051728034s +STEP: Saw pod success +Sep 7 08:39:46.219: INFO: Pod "pod-projected-secrets-1346f589-0b9f-4b89-adb4-6e829656ea8d" satisfied condition "Succeeded or Failed" +Sep 7 08:39:46.221: INFO: Trying to get logs from node 172.31.51.96 pod pod-projected-secrets-1346f589-0b9f-4b89-adb4-6e829656ea8d container projected-secret-volume-test: +STEP: delete the pod +Sep 7 08:39:46.238: INFO: Waiting for pod pod-projected-secrets-1346f589-0b9f-4b89-adb4-6e829656ea8d to disappear +Sep 7 08:39:46.245: INFO: Pod pod-projected-secrets-1346f589-0b9f-4b89-adb4-6e829656ea8d no longer exists +[AfterEach] [sig-storage] Projected secret + test/e2e/framework/framework.go:188 +Sep 7 08:39:46.245: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "projected-3154" for this suite. +•{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance]","total":356,"completed":228,"skipped":4526,"failed":0} +SSSSSS +------------------------------ +[sig-network] EndpointSlice + should support creating EndpointSlice API operations [Conformance] + test/e2e/framework/framework.go:652 +[BeforeEach] [sig-network] EndpointSlice + test/e2e/framework/framework.go:187 +STEP: Creating a kubernetes client +Sep 7 08:39:46.252: INFO: >>> kubeConfig: /tmp/kubeconfig-1156948534 +STEP: Building a namespace api object, basename endpointslice +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[BeforeEach] [sig-network] EndpointSlice + test/e2e/network/endpointslice.go:51 +[It] should support creating EndpointSlice API operations [Conformance] + test/e2e/framework/framework.go:652 +STEP: getting /apis +STEP: getting /apis/discovery.k8s.io +STEP: getting /apis/discovery.k8s.iov1 +STEP: creating +STEP: getting +STEP: listing +STEP: watching +Sep 7 08:39:46.306: INFO: starting watch +STEP: cluster-wide listing +STEP: 
cluster-wide watching +Sep 7 08:39:46.309: INFO: starting watch +STEP: patching +STEP: updating +Sep 7 08:39:46.321: INFO: waiting for watch events with expected annotations +Sep 7 08:39:46.321: INFO: saw patched and updated annotations +STEP: deleting +STEP: deleting a collection +[AfterEach] [sig-network] EndpointSlice + test/e2e/framework/framework.go:188 +Sep 7 08:39:46.338: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "endpointslice-1520" for this suite. +•{"msg":"PASSED [sig-network] EndpointSlice should support creating EndpointSlice API operations [Conformance]","total":356,"completed":229,"skipped":4532,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-api-machinery] Discovery + should validate PreferredVersion for each APIGroup [Conformance] + test/e2e/framework/framework.go:652 +[BeforeEach] [sig-api-machinery] Discovery + test/e2e/framework/framework.go:187 +STEP: Creating a kubernetes client +Sep 7 08:39:46.346: INFO: >>> kubeConfig: /tmp/kubeconfig-1156948534 +STEP: Building a namespace api object, basename discovery +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[BeforeEach] [sig-api-machinery] Discovery + test/e2e/apimachinery/discovery.go:43 +STEP: Setting up server cert +[It] should validate PreferredVersion for each APIGroup [Conformance] + test/e2e/framework/framework.go:652 +Sep 7 08:39:47.763: INFO: Checking APIGroup: apiregistration.k8s.io +Sep 7 08:39:47.765: INFO: PreferredVersion.GroupVersion: apiregistration.k8s.io/v1 +Sep 7 08:39:47.765: INFO: Versions found [{apiregistration.k8s.io/v1 v1}] +Sep 7 08:39:47.765: INFO: apiregistration.k8s.io/v1 matches apiregistration.k8s.io/v1 +Sep 7 08:39:47.765: INFO: Checking APIGroup: apps +Sep 7 08:39:47.770: INFO: PreferredVersion.GroupVersion: apps/v1 +Sep 7 08:39:47.770: INFO: Versions found [{apps/v1 v1}] 
+Sep 7 08:39:47.770: INFO: apps/v1 matches apps/v1 +Sep 7 08:39:47.770: INFO: Checking APIGroup: events.k8s.io +Sep 7 08:39:47.771: INFO: PreferredVersion.GroupVersion: events.k8s.io/v1 +Sep 7 08:39:47.771: INFO: Versions found [{events.k8s.io/v1 v1} {events.k8s.io/v1beta1 v1beta1}] +Sep 7 08:39:47.771: INFO: events.k8s.io/v1 matches events.k8s.io/v1 +Sep 7 08:39:47.771: INFO: Checking APIGroup: authentication.k8s.io +Sep 7 08:39:47.773: INFO: PreferredVersion.GroupVersion: authentication.k8s.io/v1 +Sep 7 08:39:47.773: INFO: Versions found [{authentication.k8s.io/v1 v1}] +Sep 7 08:39:47.773: INFO: authentication.k8s.io/v1 matches authentication.k8s.io/v1 +Sep 7 08:39:47.773: INFO: Checking APIGroup: authorization.k8s.io +Sep 7 08:39:47.780: INFO: PreferredVersion.GroupVersion: authorization.k8s.io/v1 +Sep 7 08:39:47.780: INFO: Versions found [{authorization.k8s.io/v1 v1}] +Sep 7 08:39:47.780: INFO: authorization.k8s.io/v1 matches authorization.k8s.io/v1 +Sep 7 08:39:47.780: INFO: Checking APIGroup: autoscaling +Sep 7 08:39:47.783: INFO: PreferredVersion.GroupVersion: autoscaling/v2 +Sep 7 08:39:47.783: INFO: Versions found [{autoscaling/v2 v2} {autoscaling/v1 v1} {autoscaling/v2beta1 v2beta1} {autoscaling/v2beta2 v2beta2}] +Sep 7 08:39:47.783: INFO: autoscaling/v2 matches autoscaling/v2 +Sep 7 08:39:47.783: INFO: Checking APIGroup: batch +Sep 7 08:39:47.786: INFO: PreferredVersion.GroupVersion: batch/v1 +Sep 7 08:39:47.786: INFO: Versions found [{batch/v1 v1} {batch/v1beta1 v1beta1}] +Sep 7 08:39:47.786: INFO: batch/v1 matches batch/v1 +Sep 7 08:39:47.786: INFO: Checking APIGroup: certificates.k8s.io +Sep 7 08:39:47.789: INFO: PreferredVersion.GroupVersion: certificates.k8s.io/v1 +Sep 7 08:39:47.789: INFO: Versions found [{certificates.k8s.io/v1 v1}] +Sep 7 08:39:47.789: INFO: certificates.k8s.io/v1 matches certificates.k8s.io/v1 +Sep 7 08:39:47.789: INFO: Checking APIGroup: networking.k8s.io +Sep 7 08:39:47.791: INFO: PreferredVersion.GroupVersion: 
networking.k8s.io/v1 +Sep 7 08:39:47.791: INFO: Versions found [{networking.k8s.io/v1 v1}] +Sep 7 08:39:47.791: INFO: networking.k8s.io/v1 matches networking.k8s.io/v1 +Sep 7 08:39:47.791: INFO: Checking APIGroup: policy +Sep 7 08:39:47.795: INFO: PreferredVersion.GroupVersion: policy/v1 +Sep 7 08:39:47.795: INFO: Versions found [{policy/v1 v1} {policy/v1beta1 v1beta1}] +Sep 7 08:39:47.795: INFO: policy/v1 matches policy/v1 +Sep 7 08:39:47.795: INFO: Checking APIGroup: rbac.authorization.k8s.io +Sep 7 08:39:47.806: INFO: PreferredVersion.GroupVersion: rbac.authorization.k8s.io/v1 +Sep 7 08:39:47.806: INFO: Versions found [{rbac.authorization.k8s.io/v1 v1}] +Sep 7 08:39:47.806: INFO: rbac.authorization.k8s.io/v1 matches rbac.authorization.k8s.io/v1 +Sep 7 08:39:47.806: INFO: Checking APIGroup: storage.k8s.io +Sep 7 08:39:47.810: INFO: PreferredVersion.GroupVersion: storage.k8s.io/v1 +Sep 7 08:39:47.810: INFO: Versions found [{storage.k8s.io/v1 v1} {storage.k8s.io/v1beta1 v1beta1}] +Sep 7 08:39:47.810: INFO: storage.k8s.io/v1 matches storage.k8s.io/v1 +Sep 7 08:39:47.810: INFO: Checking APIGroup: admissionregistration.k8s.io +Sep 7 08:39:47.813: INFO: PreferredVersion.GroupVersion: admissionregistration.k8s.io/v1 +Sep 7 08:39:47.813: INFO: Versions found [{admissionregistration.k8s.io/v1 v1}] +Sep 7 08:39:47.813: INFO: admissionregistration.k8s.io/v1 matches admissionregistration.k8s.io/v1 +Sep 7 08:39:47.813: INFO: Checking APIGroup: apiextensions.k8s.io +Sep 7 08:39:47.818: INFO: PreferredVersion.GroupVersion: apiextensions.k8s.io/v1 +Sep 7 08:39:47.818: INFO: Versions found [{apiextensions.k8s.io/v1 v1}] +Sep 7 08:39:47.818: INFO: apiextensions.k8s.io/v1 matches apiextensions.k8s.io/v1 +Sep 7 08:39:47.818: INFO: Checking APIGroup: scheduling.k8s.io +Sep 7 08:39:47.819: INFO: PreferredVersion.GroupVersion: scheduling.k8s.io/v1 +Sep 7 08:39:47.819: INFO: Versions found [{scheduling.k8s.io/v1 v1}] +Sep 7 08:39:47.819: INFO: scheduling.k8s.io/v1 matches 
scheduling.k8s.io/v1 +Sep 7 08:39:47.819: INFO: Checking APIGroup: coordination.k8s.io +Sep 7 08:39:47.820: INFO: PreferredVersion.GroupVersion: coordination.k8s.io/v1 +Sep 7 08:39:47.820: INFO: Versions found [{coordination.k8s.io/v1 v1}] +Sep 7 08:39:47.820: INFO: coordination.k8s.io/v1 matches coordination.k8s.io/v1 +Sep 7 08:39:47.820: INFO: Checking APIGroup: node.k8s.io +Sep 7 08:39:47.821: INFO: PreferredVersion.GroupVersion: node.k8s.io/v1 +Sep 7 08:39:47.821: INFO: Versions found [{node.k8s.io/v1 v1} {node.k8s.io/v1beta1 v1beta1}] +Sep 7 08:39:47.821: INFO: node.k8s.io/v1 matches node.k8s.io/v1 +Sep 7 08:39:47.821: INFO: Checking APIGroup: discovery.k8s.io +Sep 7 08:39:47.821: INFO: PreferredVersion.GroupVersion: discovery.k8s.io/v1 +Sep 7 08:39:47.821: INFO: Versions found [{discovery.k8s.io/v1 v1} {discovery.k8s.io/v1beta1 v1beta1}] +Sep 7 08:39:47.821: INFO: discovery.k8s.io/v1 matches discovery.k8s.io/v1 +Sep 7 08:39:47.821: INFO: Checking APIGroup: flowcontrol.apiserver.k8s.io +Sep 7 08:39:47.822: INFO: PreferredVersion.GroupVersion: flowcontrol.apiserver.k8s.io/v1beta2 +Sep 7 08:39:47.822: INFO: Versions found [{flowcontrol.apiserver.k8s.io/v1beta2 v1beta2} {flowcontrol.apiserver.k8s.io/v1beta1 v1beta1}] +Sep 7 08:39:47.822: INFO: flowcontrol.apiserver.k8s.io/v1beta2 matches flowcontrol.apiserver.k8s.io/v1beta2 +Sep 7 08:39:47.822: INFO: Checking APIGroup: metrics.k8s.io +Sep 7 08:39:47.823: INFO: PreferredVersion.GroupVersion: metrics.k8s.io/v1beta1 +Sep 7 08:39:47.823: INFO: Versions found [{metrics.k8s.io/v1beta1 v1beta1}] +Sep 7 08:39:47.823: INFO: metrics.k8s.io/v1beta1 matches metrics.k8s.io/v1beta1 +[AfterEach] [sig-api-machinery] Discovery + test/e2e/framework/framework.go:188 +Sep 7 08:39:47.823: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "discovery-5273" for this suite. 
+•{"msg":"PASSED [sig-api-machinery] Discovery should validate PreferredVersion for each APIGroup [Conformance]","total":356,"completed":230,"skipped":4584,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-network] DNS + should provide /etc/hosts entries for the cluster [Conformance] + test/e2e/framework/framework.go:652 +[BeforeEach] [sig-network] DNS + test/e2e/framework/framework.go:187 +STEP: Creating a kubernetes client +Sep 7 08:39:47.829: INFO: >>> kubeConfig: /tmp/kubeconfig-1156948534 +STEP: Building a namespace api object, basename dns +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[It] should provide /etc/hosts entries for the cluster [Conformance] + test/e2e/framework/framework.go:652 +STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-5251.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-1.dns-test-service.dns-5251.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/wheezy_hosts@dns-querier-1;sleep 1; done + +STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-5251.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-1.dns-test-service.dns-5251.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/jessie_hosts@dns-querier-1;sleep 1; done + +STEP: creating a pod to probe /etc/hosts +STEP: submitting the pod to kubernetes +STEP: retrieving the pod +STEP: looking for the results for each expected name from probers +Sep 7 08:39:49.944: INFO: DNS probes using dns-5251/dns-test-c8b60e98-a75e-43ca-b090-0121fd2e7ca3 succeeded + +STEP: deleting the pod +[AfterEach] [sig-network] DNS + test/e2e/framework/framework.go:188 +Sep 7 08:39:49.959: INFO: Waiting up to 3m0s for all 
(but 0) nodes to be ready +STEP: Destroying namespace "dns-5251" for this suite. +•{"msg":"PASSED [sig-network] DNS should provide /etc/hosts entries for the cluster [Conformance]","total":356,"completed":231,"skipped":4645,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-instrumentation] Events + should ensure that an event can be fetched, patched, deleted, and listed [Conformance] + test/e2e/framework/framework.go:652 +[BeforeEach] [sig-instrumentation] Events + test/e2e/framework/framework.go:187 +STEP: Creating a kubernetes client +Sep 7 08:39:49.968: INFO: >>> kubeConfig: /tmp/kubeconfig-1156948534 +STEP: Building a namespace api object, basename events +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[It] should ensure that an event can be fetched, patched, deleted, and listed [Conformance] + test/e2e/framework/framework.go:652 +STEP: creating a test event +STEP: listing all events in all namespaces +STEP: patching the test event +STEP: fetching the test event +STEP: deleting the test event +STEP: listing all events in all namespaces +[AfterEach] [sig-instrumentation] Events + test/e2e/framework/framework.go:188 +Sep 7 08:39:50.044: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "events-6091" for this suite. 
+•{"msg":"PASSED [sig-instrumentation] Events should ensure that an event can be fetched, patched, deleted, and listed [Conformance]","total":356,"completed":232,"skipped":4673,"failed":0} +SSSSSSSSSS +------------------------------ +[sig-api-machinery] Namespaces [Serial] + should ensure that all services are removed when a namespace is deleted [Conformance] + test/e2e/framework/framework.go:652 +[BeforeEach] [sig-api-machinery] Namespaces [Serial] + test/e2e/framework/framework.go:187 +STEP: Creating a kubernetes client +Sep 7 08:39:50.053: INFO: >>> kubeConfig: /tmp/kubeconfig-1156948534 +STEP: Building a namespace api object, basename namespaces +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[It] should ensure that all services are removed when a namespace is deleted [Conformance] + test/e2e/framework/framework.go:652 +STEP: Creating a test namespace +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Creating a service in the namespace +STEP: Deleting the namespace +STEP: Waiting for the namespace to be removed. +STEP: Recreating the namespace +STEP: Verifying there is no service in the namespace +[AfterEach] [sig-api-machinery] Namespaces [Serial] + test/e2e/framework/framework.go:188 +Sep 7 08:39:56.188: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "namespaces-5903" for this suite. +STEP: Destroying namespace "nsdeletetest-3734" for this suite. +Sep 7 08:39:56.206: INFO: Namespace nsdeletetest-3734 was already deleted +STEP: Destroying namespace "nsdeletetest-383" for this suite. 
+ +• [SLOW TEST:6.158 seconds] +[sig-api-machinery] Namespaces [Serial] +test/e2e/apimachinery/framework.go:23 + should ensure that all services are removed when a namespace is deleted [Conformance] + test/e2e/framework/framework.go:652 +------------------------------ +{"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance]","total":356,"completed":233,"skipped":4683,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-storage] Projected downwardAPI + should update annotations on modification [NodeConformance] [Conformance] + test/e2e/framework/framework.go:652 +[BeforeEach] [sig-storage] Projected downwardAPI + test/e2e/framework/framework.go:187 +STEP: Creating a kubernetes client +Sep 7 08:39:56.211: INFO: >>> kubeConfig: /tmp/kubeconfig-1156948534 +STEP: Building a namespace api object, basename projected +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[BeforeEach] [sig-storage] Projected downwardAPI + test/e2e/common/storage/projected_downwardapi.go:43 +[It] should update annotations on modification [NodeConformance] [Conformance] + test/e2e/framework/framework.go:652 +STEP: Creating the pod +Sep 7 08:39:56.265: INFO: The status of Pod annotationupdated3d6e966-ec16-47de-95a4-d4175a9f7536 is Pending, waiting for it to be Running (with Ready = true) +Sep 7 08:39:58.272: INFO: The status of Pod annotationupdated3d6e966-ec16-47de-95a4-d4175a9f7536 is Running (Ready = true) +Sep 7 08:39:58.797: INFO: Successfully updated pod "annotationupdated3d6e966-ec16-47de-95a4-d4175a9f7536" +[AfterEach] [sig-storage] Projected downwardAPI + test/e2e/framework/framework.go:188 +Sep 7 08:40:02.819: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "projected-7003" for this suite. 
+ +• [SLOW TEST:6.621 seconds] +[sig-storage] Projected downwardAPI +test/e2e/common/storage/framework.go:23 + should update annotations on modification [NodeConformance] [Conformance] + test/e2e/framework/framework.go:652 +------------------------------ +{"msg":"PASSED [sig-storage] Projected downwardAPI should update annotations on modification [NodeConformance] [Conformance]","total":356,"completed":234,"skipped":4706,"failed":0} +SSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-api-machinery] Servers with support for Table transformation + should return a 406 for a backend which does not implement metadata [Conformance] + test/e2e/framework/framework.go:652 +[BeforeEach] [sig-api-machinery] Servers with support for Table transformation + test/e2e/framework/framework.go:187 +STEP: Creating a kubernetes client +Sep 7 08:40:02.832: INFO: >>> kubeConfig: /tmp/kubeconfig-1156948534 +STEP: Building a namespace api object, basename tables +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[BeforeEach] [sig-api-machinery] Servers with support for Table transformation + test/e2e/apimachinery/table_conversion.go:49 +[It] should return a 406 for a backend which does not implement metadata [Conformance] + test/e2e/framework/framework.go:652 +[AfterEach] [sig-api-machinery] Servers with support for Table transformation + test/e2e/framework/framework.go:188 +Sep 7 08:40:02.886: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "tables-4949" for this suite. 
+•{"msg":"PASSED [sig-api-machinery] Servers with support for Table transformation should return a 406 for a backend which does not implement metadata [Conformance]","total":356,"completed":235,"skipped":4725,"failed":0} +SSSSSSSSSS +------------------------------ +[sig-storage] Secrets + should be consumable from pods in volume [NodeConformance] [Conformance] + test/e2e/framework/framework.go:652 +[BeforeEach] [sig-storage] Secrets + test/e2e/framework/framework.go:187 +STEP: Creating a kubernetes client +Sep 7 08:40:02.900: INFO: >>> kubeConfig: /tmp/kubeconfig-1156948534 +STEP: Building a namespace api object, basename secrets +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[It] should be consumable from pods in volume [NodeConformance] [Conformance] + test/e2e/framework/framework.go:652 +STEP: Creating secret with name secret-test-00cbc581-2d1f-430a-b796-1f638471f5c8 +STEP: Creating a pod to test consume secrets +Sep 7 08:40:02.949: INFO: Waiting up to 5m0s for pod "pod-secrets-fce59044-db22-425f-bcb8-2730668817c9" in namespace "secrets-194" to be "Succeeded or Failed" +Sep 7 08:40:02.952: INFO: Pod "pod-secrets-fce59044-db22-425f-bcb8-2730668817c9": Phase="Pending", Reason="", readiness=false. Elapsed: 3.049114ms +Sep 7 08:40:04.964: INFO: Pod "pod-secrets-fce59044-db22-425f-bcb8-2730668817c9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015307223s +Sep 7 08:40:06.973: INFO: Pod "pod-secrets-fce59044-db22-425f-bcb8-2730668817c9": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.023494695s +STEP: Saw pod success +Sep 7 08:40:06.973: INFO: Pod "pod-secrets-fce59044-db22-425f-bcb8-2730668817c9" satisfied condition "Succeeded or Failed" +Sep 7 08:40:06.977: INFO: Trying to get logs from node 172.31.51.96 pod pod-secrets-fce59044-db22-425f-bcb8-2730668817c9 container secret-volume-test: +STEP: delete the pod +Sep 7 08:40:07.010: INFO: Waiting for pod pod-secrets-fce59044-db22-425f-bcb8-2730668817c9 to disappear +Sep 7 08:40:07.014: INFO: Pod pod-secrets-fce59044-db22-425f-bcb8-2730668817c9 no longer exists +[AfterEach] [sig-storage] Secrets + test/e2e/framework/framework.go:188 +Sep 7 08:40:07.014: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "secrets-194" for this suite. +•{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance]","total":356,"completed":236,"skipped":4735,"failed":0} +SSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-storage] Projected configMap + should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/framework/framework.go:652 +[BeforeEach] [sig-storage] Projected configMap + test/e2e/framework/framework.go:187 +STEP: Creating a kubernetes client +Sep 7 08:40:07.027: INFO: >>> kubeConfig: /tmp/kubeconfig-1156948534 +STEP: Building a namespace api object, basename projected +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/framework/framework.go:652 +STEP: Creating configMap with name projected-configmap-test-volume-e709cad2-6ba1-4203-b51b-5791b17573f3 +STEP: Creating a pod to test consume configMaps +Sep 7 08:40:07.128: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-af728c7b-5dc5-455d-9f89-1aa9513c4bdb" in namespace 
"projected-1606" to be "Succeeded or Failed" +Sep 7 08:40:07.143: INFO: Pod "pod-projected-configmaps-af728c7b-5dc5-455d-9f89-1aa9513c4bdb": Phase="Pending", Reason="", readiness=false. Elapsed: 15.659544ms +Sep 7 08:40:09.168: INFO: Pod "pod-projected-configmaps-af728c7b-5dc5-455d-9f89-1aa9513c4bdb": Phase="Running", Reason="", readiness=true. Elapsed: 2.04046116s +Sep 7 08:40:11.216: INFO: Pod "pod-projected-configmaps-af728c7b-5dc5-455d-9f89-1aa9513c4bdb": Phase="Running", Reason="", readiness=false. Elapsed: 4.088788592s +Sep 7 08:40:13.224: INFO: Pod "pod-projected-configmaps-af728c7b-5dc5-455d-9f89-1aa9513c4bdb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.096691368s +STEP: Saw pod success +Sep 7 08:40:13.224: INFO: Pod "pod-projected-configmaps-af728c7b-5dc5-455d-9f89-1aa9513c4bdb" satisfied condition "Succeeded or Failed" +Sep 7 08:40:13.228: INFO: Trying to get logs from node 172.31.51.96 pod pod-projected-configmaps-af728c7b-5dc5-455d-9f89-1aa9513c4bdb container agnhost-container: +STEP: delete the pod +Sep 7 08:40:13.246: INFO: Waiting for pod pod-projected-configmaps-af728c7b-5dc5-455d-9f89-1aa9513c4bdb to disappear +Sep 7 08:40:13.249: INFO: Pod pod-projected-configmaps-af728c7b-5dc5-455d-9f89-1aa9513c4bdb no longer exists +[AfterEach] [sig-storage] Projected configMap + test/e2e/framework/framework.go:188 +Sep 7 08:40:13.249: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "projected-1606" for this suite. 
+ +• [SLOW TEST:6.230 seconds] +[sig-storage] Projected configMap +test/e2e/common/storage/framework.go:23 + should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/framework/framework.go:652 +------------------------------ +{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":356,"completed":237,"skipped":4756,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] + Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance] + test/e2e/framework/framework.go:652 +[BeforeEach] [sig-apps] StatefulSet + test/e2e/framework/framework.go:187 +STEP: Creating a kubernetes client +Sep 7 08:40:13.258: INFO: >>> kubeConfig: /tmp/kubeconfig-1156948534 +STEP: Building a namespace api object, basename statefulset +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[BeforeEach] [sig-apps] StatefulSet + test/e2e/apps/statefulset.go:96 +[BeforeEach] Basic StatefulSet functionality [StatefulSetBasic] + test/e2e/apps/statefulset.go:111 +STEP: Creating service test in namespace statefulset-3681 +[It] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance] + test/e2e/framework/framework.go:652 +STEP: Initializing watcher for selector baz=blah,foo=bar +STEP: Creating stateful set ss in namespace statefulset-3681 +STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-3681 +Sep 7 08:40:13.317: INFO: Found 0 stateful pods, waiting for 1 +Sep 7 08:40:23.332: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true +STEP: Confirming that stateful set scale up will halt 
with unhealthy stateful pod +Sep 7 08:40:23.338: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1156948534 --namespace=statefulset-3681 exec ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' +Sep 7 08:40:23.530: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n" +Sep 7 08:40:23.530: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" +Sep 7 08:40:23.530: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' + +Sep 7 08:40:23.534: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true +Sep 7 08:40:33.542: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false +Sep 7 08:40:33.542: INFO: Waiting for statefulset status.replicas updated to 0 +Sep 7 08:40:33.569: INFO: Verifying statefulset ss doesn't scale past 1 for another 9.999999741s +Sep 7 08:40:34.579: INFO: Verifying statefulset ss doesn't scale past 1 for another 8.981460877s +Sep 7 08:40:35.592: INFO: Verifying statefulset ss doesn't scale past 1 for another 7.972819808s +Sep 7 08:40:36.600: INFO: Verifying statefulset ss doesn't scale past 1 for another 6.959210541s +Sep 7 08:40:37.608: INFO: Verifying statefulset ss doesn't scale past 1 for another 5.951048808s +Sep 7 08:40:38.619: INFO: Verifying statefulset ss doesn't scale past 1 for another 4.942354184s +Sep 7 08:40:39.627: INFO: Verifying statefulset ss doesn't scale past 1 for another 3.930841042s +Sep 7 08:40:40.636: INFO: Verifying statefulset ss doesn't scale past 1 for another 2.923453066s +Sep 7 08:40:41.643: INFO: Verifying statefulset ss doesn't scale past 1 for another 1.914460629s +Sep 7 08:40:42.649: INFO: Verifying statefulset ss doesn't scale past 1 for another 906.940342ms +STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace 
statefulset-3681 +Sep 7 08:40:43.660: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1156948534 --namespace=statefulset-3681 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' +Sep 7 08:40:43.879: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\n" +Sep 7 08:40:43.879: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" +Sep 7 08:40:43.879: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' + +Sep 7 08:40:43.885: INFO: Found 1 stateful pods, waiting for 3 +Sep 7 08:40:53.895: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true +Sep 7 08:40:53.895: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true +Sep 7 08:40:53.895: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true +STEP: Verifying that stateful set ss was scaled up in order +STEP: Scale down will halt with unhealthy stateful pod +Sep 7 08:40:53.904: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1156948534 --namespace=statefulset-3681 exec ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' +Sep 7 08:40:54.103: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n" +Sep 7 08:40:54.103: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" +Sep 7 08:40:54.103: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' + +Sep 7 08:40:54.103: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1156948534 --namespace=statefulset-3681 exec ss-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' +Sep 7 08:40:54.269: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n" +Sep 7 08:40:54.269: INFO: stdout: 
"'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" +Sep 7 08:40:54.269: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' + +Sep 7 08:40:54.269: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1156948534 --namespace=statefulset-3681 exec ss-2 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' +Sep 7 08:40:54.463: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n" +Sep 7 08:40:54.463: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" +Sep 7 08:40:54.463: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-2: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' + +Sep 7 08:40:54.463: INFO: Waiting for statefulset status.replicas updated to 0 +Sep 7 08:40:54.467: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 3 +Sep 7 08:41:04.476: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false +Sep 7 08:41:04.476: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false +Sep 7 08:41:04.476: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false +Sep 7 08:41:04.508: INFO: Verifying statefulset ss doesn't scale past 3 for another 9.999999745s +Sep 7 08:41:05.520: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.983086015s +Sep 7 08:41:06.527: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.972410928s +Sep 7 08:41:07.538: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.96460858s +Sep 7 08:41:08.548: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.95386474s +Sep 7 08:41:09.559: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.942920445s +Sep 7 08:41:10.568: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.933019923s +Sep 
7 08:41:11.574: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.924023796s +Sep 7 08:41:12.585: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.91658449s +Sep 7 08:41:13.594: INFO: Verifying statefulset ss doesn't scale past 3 for another 905.995083ms +STEP: Scaling down stateful set ss to 0 replicas and waiting until none of pods will run in namespacestatefulset-3681 +Sep 7 08:41:14.605: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1156948534 --namespace=statefulset-3681 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' +Sep 7 08:41:14.784: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\n" +Sep 7 08:41:14.784: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" +Sep 7 08:41:14.784: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' + +Sep 7 08:41:14.784: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1156948534 --namespace=statefulset-3681 exec ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' +Sep 7 08:41:14.965: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\n" +Sep 7 08:41:14.965: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" +Sep 7 08:41:14.965: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' + +Sep 7 08:41:14.965: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1156948534 --namespace=statefulset-3681 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' +Sep 7 08:41:15.199: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\n" +Sep 7 08:41:15.199: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" +Sep 7 08:41:15.199: INFO: stdout of mv -v 
/tmp/index.html /usr/local/apache2/htdocs/ || true on ss-2: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' + +Sep 7 08:41:15.199: INFO: Scaling statefulset ss to 0 +STEP: Verifying that stateful set ss was scaled down in reverse order +[AfterEach] Basic StatefulSet functionality [StatefulSetBasic] + test/e2e/apps/statefulset.go:122 +Sep 7 08:41:25.231: INFO: Deleting all statefulset in ns statefulset-3681 +Sep 7 08:41:25.235: INFO: Scaling statefulset ss to 0 +Sep 7 08:41:25.246: INFO: Waiting for statefulset status.replicas updated to 0 +Sep 7 08:41:25.249: INFO: Deleting statefulset ss +[AfterEach] [sig-apps] StatefulSet + test/e2e/framework/framework.go:188 +Sep 7 08:41:25.263: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "statefulset-3681" for this suite. + +• [SLOW TEST:72.018 seconds] +[sig-apps] StatefulSet +test/e2e/apps/framework.go:23 + Basic StatefulSet functionality [StatefulSetBasic] + test/e2e/apps/statefulset.go:101 + Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance] + test/e2e/framework/framework.go:652 +------------------------------ +{"msg":"PASSED [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance]","total":356,"completed":238,"skipped":4793,"failed":0} +SSSSSSSSSSSSSSSS +------------------------------ +[sig-storage] CSIStorageCapacity + should support CSIStorageCapacities API operations [Conformance] + test/e2e/framework/framework.go:652 +[BeforeEach] [sig-storage] CSIStorageCapacity + test/e2e/framework/framework.go:187 +STEP: Creating a kubernetes client +Sep 7 08:41:25.276: INFO: >>> kubeConfig: /tmp/kubeconfig-1156948534 +STEP: Building a namespace api object, basename csistoragecapacity +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be 
provisioned in namespace +[It] should support CSIStorageCapacities API operations [Conformance] + test/e2e/framework/framework.go:652 +STEP: getting /apis +STEP: getting /apis/storage.k8s.io +STEP: getting /apis/storage.k8s.io/v1 +STEP: creating +STEP: watching +Sep 7 08:41:25.401: INFO: starting watch +STEP: getting +STEP: listing in namespace +STEP: listing across namespaces +STEP: patching +STEP: updating +Sep 7 08:41:25.425: INFO: waiting for watch events with expected annotations in namespace +Sep 7 08:41:25.425: INFO: waiting for watch events with expected annotations across namespace +STEP: deleting +STEP: deleting a collection +[AfterEach] [sig-storage] CSIStorageCapacity + test/e2e/framework/framework.go:188 +Sep 7 08:41:25.443: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "csistoragecapacity-7835" for this suite. +•{"msg":"PASSED [sig-storage] CSIStorageCapacity should support CSIStorageCapacities API operations [Conformance]","total":356,"completed":239,"skipped":4809,"failed":0} +SSSSSSSSSSSSSS +------------------------------ +[sig-storage] Subpath Atomic writer volumes + should support subpaths with downward pod [Conformance] + test/e2e/framework/framework.go:652 +[BeforeEach] [sig-storage] Subpath + test/e2e/framework/framework.go:187 +STEP: Creating a kubernetes client +Sep 7 08:41:25.452: INFO: >>> kubeConfig: /tmp/kubeconfig-1156948534 +STEP: Building a namespace api object, basename subpath +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[BeforeEach] Atomic writer volumes + test/e2e/storage/subpath.go:40 +STEP: Setting up data +[It] should support subpaths with downward pod [Conformance] + test/e2e/framework/framework.go:652 +STEP: Creating pod pod-subpath-test-downwardapi-d668 +STEP: Creating a pod to test atomic-volume-subpath +Sep 7 08:41:25.504: INFO: Waiting up to 5m0s for pod 
"pod-subpath-test-downwardapi-d668" in namespace "subpath-8209" to be "Succeeded or Failed" +Sep 7 08:41:25.534: INFO: Pod "pod-subpath-test-downwardapi-d668": Phase="Pending", Reason="", readiness=false. Elapsed: 30.210933ms +Sep 7 08:41:27.544: INFO: Pod "pod-subpath-test-downwardapi-d668": Phase="Running", Reason="", readiness=true. Elapsed: 2.039960026s +Sep 7 08:41:29.550: INFO: Pod "pod-subpath-test-downwardapi-d668": Phase="Running", Reason="", readiness=true. Elapsed: 4.046455638s +Sep 7 08:41:31.558: INFO: Pod "pod-subpath-test-downwardapi-d668": Phase="Running", Reason="", readiness=true. Elapsed: 6.054432189s +Sep 7 08:41:33.581: INFO: Pod "pod-subpath-test-downwardapi-d668": Phase="Running", Reason="", readiness=true. Elapsed: 8.077138939s +Sep 7 08:41:35.586: INFO: Pod "pod-subpath-test-downwardapi-d668": Phase="Running", Reason="", readiness=true. Elapsed: 10.08250829s +Sep 7 08:41:37.599: INFO: Pod "pod-subpath-test-downwardapi-d668": Phase="Running", Reason="", readiness=true. Elapsed: 12.09465748s +Sep 7 08:41:39.605: INFO: Pod "pod-subpath-test-downwardapi-d668": Phase="Running", Reason="", readiness=true. Elapsed: 14.101331893s +Sep 7 08:41:41.613: INFO: Pod "pod-subpath-test-downwardapi-d668": Phase="Running", Reason="", readiness=true. Elapsed: 16.109111176s +Sep 7 08:41:43.625: INFO: Pod "pod-subpath-test-downwardapi-d668": Phase="Running", Reason="", readiness=true. Elapsed: 18.121309763s +Sep 7 08:41:45.651: INFO: Pod "pod-subpath-test-downwardapi-d668": Phase="Running", Reason="", readiness=true. Elapsed: 20.147311424s +Sep 7 08:41:47.662: INFO: Pod "pod-subpath-test-downwardapi-d668": Phase="Running", Reason="", readiness=false. Elapsed: 22.157900694s +Sep 7 08:41:49.669: INFO: Pod "pod-subpath-test-downwardapi-d668": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 24.164635707s +STEP: Saw pod success +Sep 7 08:41:49.669: INFO: Pod "pod-subpath-test-downwardapi-d668" satisfied condition "Succeeded or Failed" +Sep 7 08:41:49.671: INFO: Trying to get logs from node 172.31.51.96 pod pod-subpath-test-downwardapi-d668 container test-container-subpath-downwardapi-d668: +STEP: delete the pod +Sep 7 08:41:49.699: INFO: Waiting for pod pod-subpath-test-downwardapi-d668 to disappear +Sep 7 08:41:49.704: INFO: Pod pod-subpath-test-downwardapi-d668 no longer exists +STEP: Deleting pod pod-subpath-test-downwardapi-d668 +Sep 7 08:41:49.704: INFO: Deleting pod "pod-subpath-test-downwardapi-d668" in namespace "subpath-8209" +[AfterEach] [sig-storage] Subpath + test/e2e/framework/framework.go:188 +Sep 7 08:41:49.706: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "subpath-8209" for this suite. + +• [SLOW TEST:24.261 seconds] +[sig-storage] Subpath +test/e2e/storage/utils/framework.go:23 + Atomic writer volumes + test/e2e/storage/subpath.go:36 + should support subpaths with downward pod [Conformance] + test/e2e/framework/framework.go:652 +------------------------------ +{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [Conformance]","total":356,"completed":240,"skipped":4823,"failed":0} +SSSSSS +------------------------------ +[sig-network] Services + should be able to change the type from ExternalName to NodePort [Conformance] + test/e2e/framework/framework.go:652 +[BeforeEach] [sig-network] Services + test/e2e/framework/framework.go:187 +STEP: Creating a kubernetes client +Sep 7 08:41:49.714: INFO: >>> kubeConfig: /tmp/kubeconfig-1156948534 +STEP: Building a namespace api object, basename services +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[BeforeEach] [sig-network] Services + test/e2e/network/service.go:758 +[It] should be able to change the 
type from ExternalName to NodePort [Conformance] + test/e2e/framework/framework.go:652 +STEP: creating a service externalname-service with the type=ExternalName in namespace services-5612 +STEP: changing the ExternalName service to type=NodePort +STEP: creating replication controller externalname-service in namespace services-5612 +I0907 08:41:49.799502 19 runners.go:193] Created replication controller with name: externalname-service, namespace: services-5612, replica count: 2 +Sep 7 08:41:52.850: INFO: Creating new exec pod +I0907 08:41:52.850694 19 runners.go:193] externalname-service Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady +Sep 7 08:41:55.896: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1156948534 --namespace=services-5612 exec execpod2mt8w -- /bin/sh -x -c echo hostName | nc -v -t -w 2 externalname-service 80' +Sep 7 08:41:56.088: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 externalname-service 80\nConnection to externalname-service 80 port [tcp/http] succeeded!\n" +Sep 7 08:41:56.088: INFO: stdout: "" +Sep 7 08:41:57.089: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1156948534 --namespace=services-5612 exec execpod2mt8w -- /bin/sh -x -c echo hostName | nc -v -t -w 2 externalname-service 80' +Sep 7 08:41:57.392: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 externalname-service 80\nConnection to externalname-service 80 port [tcp/http] succeeded!\n" +Sep 7 08:41:57.392: INFO: stdout: "externalname-service-7vhnp" +Sep 7 08:41:57.392: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1156948534 --namespace=services-5612 exec execpod2mt8w -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.68.223.0 80' +Sep 7 08:41:57.573: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 10.68.223.0 80\nConnection to 10.68.223.0 80 port [tcp/http] succeeded!\n" +Sep 7 08:41:57.573: INFO: stdout: "" +Sep 7 08:41:58.574: INFO: Running 
'/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1156948534 --namespace=services-5612 exec execpod2mt8w -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.68.223.0 80' +Sep 7 08:41:58.762: INFO: stderr: "+ nc -v -t -w 2 10.68.223.0 80\n+ echo hostName\nConnection to 10.68.223.0 80 port [tcp/http] succeeded!\n" +Sep 7 08:41:58.762: INFO: stdout: "externalname-service-7vhnp" +Sep 7 08:41:58.762: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1156948534 --namespace=services-5612 exec execpod2mt8w -- /bin/sh -x -c echo hostName | nc -v -t -w 2 172.31.51.96 31504' +Sep 7 08:41:58.940: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 172.31.51.96 31504\nConnection to 172.31.51.96 31504 port [tcp/*] succeeded!\n" +Sep 7 08:41:58.940: INFO: stdout: "" +Sep 7 08:41:59.940: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1156948534 --namespace=services-5612 exec execpod2mt8w -- /bin/sh -x -c echo hostName | nc -v -t -w 2 172.31.51.96 31504' +Sep 7 08:42:00.125: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 172.31.51.96 31504\nConnection to 172.31.51.96 31504 port [tcp/*] succeeded!\n" +Sep 7 08:42:00.125: INFO: stdout: "" +Sep 7 08:42:00.941: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1156948534 --namespace=services-5612 exec execpod2mt8w -- /bin/sh -x -c echo hostName | nc -v -t -w 2 172.31.51.96 31504' +Sep 7 08:42:01.132: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 172.31.51.96 31504\nConnection to 172.31.51.96 31504 port [tcp/*] succeeded!\n" +Sep 7 08:42:01.132: INFO: stdout: "" +Sep 7 08:42:01.941: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1156948534 --namespace=services-5612 exec execpod2mt8w -- /bin/sh -x -c echo hostName | nc -v -t -w 2 172.31.51.96 31504' +Sep 7 08:42:02.147: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 172.31.51.96 31504\nConnection to 172.31.51.96 31504 port [tcp/*] succeeded!\n" +Sep 7 08:42:02.147: INFO: stdout: "" +Sep 7 08:42:02.940: INFO: 
Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1156948534 --namespace=services-5612 exec execpod2mt8w -- /bin/sh -x -c echo hostName | nc -v -t -w 2 172.31.51.96 31504' +Sep 7 08:42:03.104: INFO: stderr: "+ nc -v -t -w 2 172.31.51.96 31504\n+ echo hostName\nConnection to 172.31.51.96 31504 port [tcp/*] succeeded!\n" +Sep 7 08:42:03.104: INFO: stdout: "externalname-service-vz9v5" +Sep 7 08:42:03.104: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1156948534 --namespace=services-5612 exec execpod2mt8w -- /bin/sh -x -c echo hostName | nc -v -t -w 2 172.31.51.97 31504' +Sep 7 08:42:03.292: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 172.31.51.97 31504\nConnection to 172.31.51.97 31504 port [tcp/*] succeeded!\n" +Sep 7 08:42:03.293: INFO: stdout: "" +Sep 7 08:42:04.293: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1156948534 --namespace=services-5612 exec execpod2mt8w -- /bin/sh -x -c echo hostName | nc -v -t -w 2 172.31.51.97 31504' +Sep 7 08:42:04.479: INFO: stderr: "+ nc -v -t -w 2 172.31.51.97 31504\n+ echo hostName\nConnection to 172.31.51.97 31504 port [tcp/*] succeeded!\n" +Sep 7 08:42:04.479: INFO: stdout: "externalname-service-7vhnp" +Sep 7 08:42:04.479: INFO: Cleaning up the ExternalName to NodePort test service +[AfterEach] [sig-network] Services + test/e2e/framework/framework.go:188 +Sep 7 08:42:04.523: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "services-5612" for this suite. 
+[AfterEach] [sig-network] Services + test/e2e/network/service.go:762 + +• [SLOW TEST:14.821 seconds] +[sig-network] Services +test/e2e/network/common/framework.go:23 + should be able to change the type from ExternalName to NodePort [Conformance] + test/e2e/framework/framework.go:652 +------------------------------ +{"msg":"PASSED [sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]","total":356,"completed":241,"skipped":4829,"failed":0} +[sig-node] Container Runtime blackbox test on terminated container + should report termination message from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] + test/e2e/framework/framework.go:652 +[BeforeEach] [sig-node] Container Runtime + test/e2e/framework/framework.go:187 +STEP: Creating a kubernetes client +Sep 7 08:42:04.535: INFO: >>> kubeConfig: /tmp/kubeconfig-1156948534 +STEP: Building a namespace api object, basename container-runtime +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[It] should report termination message from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] + test/e2e/framework/framework.go:652 +STEP: create the container +STEP: wait for the container to reach Failed +STEP: get the container status +STEP: the container should be terminated +STEP: the termination message should be set +Sep 7 08:42:08.709: INFO: Expected: &{DONE} to match Container's Termination Message: DONE -- +STEP: delete the container +[AfterEach] [sig-node] Container Runtime + test/e2e/framework/framework.go:188 +Sep 7 08:42:08.738: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "container-runtime-299" for this suite. 
+•{"msg":"PASSED [sig-node] Container Runtime blackbox test on terminated container should report termination message from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":356,"completed":242,"skipped":4829,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-network] DNS + should provide DNS for ExternalName services [Conformance] + test/e2e/framework/framework.go:652 +[BeforeEach] [sig-network] DNS + test/e2e/framework/framework.go:187 +STEP: Creating a kubernetes client +Sep 7 08:42:08.747: INFO: >>> kubeConfig: /tmp/kubeconfig-1156948534 +STEP: Building a namespace api object, basename dns +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[It] should provide DNS for ExternalName services [Conformance] + test/e2e/framework/framework.go:652 +STEP: Creating a test externalName service +STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-8366.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-8366.svc.cluster.local; sleep 1; done + +STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-8366.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-8366.svc.cluster.local; sleep 1; done + +STEP: creating a pod to probe DNS +STEP: submitting the pod to kubernetes +STEP: retrieving the pod +STEP: looking for the results for each expected name from probers +Sep 7 08:42:12.891: INFO: DNS probes using dns-test-0d05ec00-36bd-44b6-ae4a-cf3f9c8b3fa1 succeeded + +STEP: deleting the pod +STEP: changing the externalName to bar.example.com +STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-8366.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-8366.svc.cluster.local; sleep 1; done + +STEP: Running these 
commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-8366.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-8366.svc.cluster.local; sleep 1; done + +STEP: creating a second pod to probe DNS +STEP: submitting the pod to kubernetes +STEP: retrieving the pod +STEP: looking for the results for each expected name from probers +Sep 7 08:42:16.946: INFO: File wheezy_udp@dns-test-service-3.dns-8366.svc.cluster.local from pod dns-8366/dns-test-94663fd2-76c2-4e3a-a0dd-55baed1a07e2 contains 'foo.example.com. +' instead of 'bar.example.com.' +Sep 7 08:42:16.949: INFO: File jessie_udp@dns-test-service-3.dns-8366.svc.cluster.local from pod dns-8366/dns-test-94663fd2-76c2-4e3a-a0dd-55baed1a07e2 contains 'foo.example.com. +' instead of 'bar.example.com.' +Sep 7 08:42:16.949: INFO: Lookups using dns-8366/dns-test-94663fd2-76c2-4e3a-a0dd-55baed1a07e2 failed for: [wheezy_udp@dns-test-service-3.dns-8366.svc.cluster.local jessie_udp@dns-test-service-3.dns-8366.svc.cluster.local] + +Sep 7 08:42:21.953: INFO: File wheezy_udp@dns-test-service-3.dns-8366.svc.cluster.local from pod dns-8366/dns-test-94663fd2-76c2-4e3a-a0dd-55baed1a07e2 contains 'foo.example.com. +' instead of 'bar.example.com.' +Sep 7 08:42:21.956: INFO: File jessie_udp@dns-test-service-3.dns-8366.svc.cluster.local from pod dns-8366/dns-test-94663fd2-76c2-4e3a-a0dd-55baed1a07e2 contains 'foo.example.com. +' instead of 'bar.example.com.' +Sep 7 08:42:21.956: INFO: Lookups using dns-8366/dns-test-94663fd2-76c2-4e3a-a0dd-55baed1a07e2 failed for: [wheezy_udp@dns-test-service-3.dns-8366.svc.cluster.local jessie_udp@dns-test-service-3.dns-8366.svc.cluster.local] + +Sep 7 08:42:26.953: INFO: File wheezy_udp@dns-test-service-3.dns-8366.svc.cluster.local from pod dns-8366/dns-test-94663fd2-76c2-4e3a-a0dd-55baed1a07e2 contains 'foo.example.com. +' instead of 'bar.example.com.' 
+Sep 7 08:42:26.958: INFO: File jessie_udp@dns-test-service-3.dns-8366.svc.cluster.local from pod dns-8366/dns-test-94663fd2-76c2-4e3a-a0dd-55baed1a07e2 contains 'foo.example.com. +' instead of 'bar.example.com.' +Sep 7 08:42:26.958: INFO: Lookups using dns-8366/dns-test-94663fd2-76c2-4e3a-a0dd-55baed1a07e2 failed for: [wheezy_udp@dns-test-service-3.dns-8366.svc.cluster.local jessie_udp@dns-test-service-3.dns-8366.svc.cluster.local] + +Sep 7 08:42:31.955: INFO: File wheezy_udp@dns-test-service-3.dns-8366.svc.cluster.local from pod dns-8366/dns-test-94663fd2-76c2-4e3a-a0dd-55baed1a07e2 contains 'foo.example.com. +' instead of 'bar.example.com.' +Sep 7 08:42:31.959: INFO: File jessie_udp@dns-test-service-3.dns-8366.svc.cluster.local from pod dns-8366/dns-test-94663fd2-76c2-4e3a-a0dd-55baed1a07e2 contains 'foo.example.com. +' instead of 'bar.example.com.' +Sep 7 08:42:31.959: INFO: Lookups using dns-8366/dns-test-94663fd2-76c2-4e3a-a0dd-55baed1a07e2 failed for: [wheezy_udp@dns-test-service-3.dns-8366.svc.cluster.local jessie_udp@dns-test-service-3.dns-8366.svc.cluster.local] + +Sep 7 08:42:36.953: INFO: File wheezy_udp@dns-test-service-3.dns-8366.svc.cluster.local from pod dns-8366/dns-test-94663fd2-76c2-4e3a-a0dd-55baed1a07e2 contains 'foo.example.com. +' instead of 'bar.example.com.' +Sep 7 08:42:36.957: INFO: File jessie_udp@dns-test-service-3.dns-8366.svc.cluster.local from pod dns-8366/dns-test-94663fd2-76c2-4e3a-a0dd-55baed1a07e2 contains 'foo.example.com. +' instead of 'bar.example.com.' 
+Sep 7 08:42:36.957: INFO: Lookups using dns-8366/dns-test-94663fd2-76c2-4e3a-a0dd-55baed1a07e2 failed for: [wheezy_udp@dns-test-service-3.dns-8366.svc.cluster.local jessie_udp@dns-test-service-3.dns-8366.svc.cluster.local] + +Sep 7 08:42:41.955: INFO: DNS probes using dns-test-94663fd2-76c2-4e3a-a0dd-55baed1a07e2 succeeded + +STEP: deleting the pod +STEP: changing the service to type=ClusterIP +STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-8366.svc.cluster.local A > /results/wheezy_udp@dns-test-service-3.dns-8366.svc.cluster.local; sleep 1; done + +STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-8366.svc.cluster.local A > /results/jessie_udp@dns-test-service-3.dns-8366.svc.cluster.local; sleep 1; done + +STEP: creating a third pod to probe DNS +STEP: submitting the pod to kubernetes +STEP: retrieving the pod +STEP: looking for the results for each expected name from probers +Sep 7 08:42:46.088: INFO: DNS probes using dns-test-11d55abe-7ae9-4900-9d8b-e57df3485a8a succeeded + +STEP: deleting the pod +STEP: deleting the test externalName service +[AfterEach] [sig-network] DNS + test/e2e/framework/framework.go:188 +Sep 7 08:42:46.116: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "dns-8366" for this suite. + +• [SLOW TEST:37.382 seconds] +[sig-network] DNS +test/e2e/network/common/framework.go:23 + should provide DNS for ExternalName services [Conformance] + test/e2e/framework/framework.go:652 +------------------------------ +{"msg":"PASSED [sig-network] DNS should provide DNS for ExternalName services [Conformance]","total":356,"completed":243,"skipped":4870,"failed":0} +SSSSSSSSSSSSSS +------------------------------ +[sig-api-machinery] ResourceQuota + should create a ResourceQuota and capture the life of a replication controller. 
[Conformance] + test/e2e/framework/framework.go:652 +[BeforeEach] [sig-api-machinery] ResourceQuota + test/e2e/framework/framework.go:187 +STEP: Creating a kubernetes client +Sep 7 08:42:46.129: INFO: >>> kubeConfig: /tmp/kubeconfig-1156948534 +STEP: Building a namespace api object, basename resourcequota +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[It] should create a ResourceQuota and capture the life of a replication controller. [Conformance] + test/e2e/framework/framework.go:652 +STEP: Counting existing ResourceQuota +STEP: Creating a ResourceQuota +STEP: Ensuring resource quota status is calculated +STEP: Creating a ReplicationController +STEP: Ensuring resource quota status captures replication controller creation +STEP: Deleting a ReplicationController +STEP: Ensuring resource quota status released usage +[AfterEach] [sig-api-machinery] ResourceQuota + test/e2e/framework/framework.go:188 +Sep 7 08:42:57.261: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "resourcequota-1516" for this suite. + +• [SLOW TEST:11.146 seconds] +[sig-api-machinery] ResourceQuota +test/e2e/apimachinery/framework.go:23 + should create a ResourceQuota and capture the life of a replication controller. [Conformance] + test/e2e/framework/framework.go:652 +------------------------------ +{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replication controller. 
[Conformance]","total":356,"completed":244,"skipped":4884,"failed":0} +SSSSSSSSSSSSSS +------------------------------ +[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + should mutate custom resource with different stored version [Conformance] + test/e2e/framework/framework.go:652 +[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/framework/framework.go:187 +STEP: Creating a kubernetes client +Sep 7 08:42:57.275: INFO: >>> kubeConfig: /tmp/kubeconfig-1156948534 +STEP: Building a namespace api object, basename webhook +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/apimachinery/webhook.go:89 +STEP: Setting up server cert +STEP: Create role binding to let webhook read extension-apiserver-authentication +STEP: Deploying the webhook pod +STEP: Wait for the deployment to be ready +Sep 7 08:42:58.058: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set +STEP: Deploying the webhook service +STEP: Verifying the service has paired with the endpoint +Sep 7 08:43:01.092: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 +[It] should mutate custom resource with different stored version [Conformance] + test/e2e/framework/framework.go:652 +Sep 7 08:43:01.096: INFO: >>> kubeConfig: /tmp/kubeconfig-1156948534 +STEP: Registering the mutating webhook for custom resource e2e-test-webhook-7120-crds.webhook.example.com via the AdmissionRegistration API +STEP: Creating a custom resource while v1 is storage version +STEP: Patching Custom Resource Definition to set v2 as storage +STEP: Patching the custom resource while v2 is storage version +[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/framework/framework.go:188 +Sep 7 08:43:04.280: INFO: Waiting up to 3m0s for all 
(but 0) nodes to be ready +STEP: Destroying namespace "webhook-4095" for this suite. +STEP: Destroying namespace "webhook-4095-markers" for this suite. +[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/apimachinery/webhook.go:104 + +• [SLOW TEST:7.099 seconds] +[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] +test/e2e/apimachinery/framework.go:23 + should mutate custom resource with different stored version [Conformance] + test/e2e/framework/framework.go:652 +------------------------------ +{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","total":356,"completed":245,"skipped":4898,"failed":0} +SSSSSSSS +------------------------------ +[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + should be able to deny custom resource creation, update and deletion [Conformance] + test/e2e/framework/framework.go:652 +[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/framework/framework.go:187 +STEP: Creating a kubernetes client +Sep 7 08:43:04.374: INFO: >>> kubeConfig: /tmp/kubeconfig-1156948534 +STEP: Building a namespace api object, basename webhook +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/apimachinery/webhook.go:89 +STEP: Setting up server cert +STEP: Create role binding to let webhook read extension-apiserver-authentication +STEP: Deploying the webhook pod +STEP: Wait for the deployment to be ready +Sep 7 08:43:05.899: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set +STEP: Deploying the webhook service +STEP: Verifying the service has paired with the endpoint +Sep 7 08:43:08.927: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 
1 +[It] should be able to deny custom resource creation, update and deletion [Conformance] + test/e2e/framework/framework.go:652 +Sep 7 08:43:08.936: INFO: >>> kubeConfig: /tmp/kubeconfig-1156948534 +STEP: Registering the custom resource webhook via the AdmissionRegistration API +STEP: Creating a custom resource that should be denied by the webhook +STEP: Creating a custom resource whose deletion would be denied by the webhook +STEP: Updating the custom resource with disallowed data should be denied +STEP: Deleting the custom resource should be denied +STEP: Remove the offending key and value from the custom resource data +STEP: Deleting the updated custom resource should be successful +[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/framework/framework.go:188 +Sep 7 08:43:12.164: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "webhook-5294" for this suite. +STEP: Destroying namespace "webhook-5294-markers" for this suite. 
+[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/apimachinery/webhook.go:104 + +• [SLOW TEST:7.909 seconds] +[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] +test/e2e/apimachinery/framework.go:23 + should be able to deny custom resource creation, update and deletion [Conformance] + test/e2e/framework/framework.go:652 +------------------------------ +{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]","total":356,"completed":246,"skipped":4906,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] CustomResourceDefinition Watch + watch on custom resource definition objects [Conformance] + test/e2e/framework/framework.go:652 +[BeforeEach] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] + test/e2e/framework/framework.go:187 +STEP: Creating a kubernetes client +Sep 7 08:43:12.284: INFO: >>> kubeConfig: /tmp/kubeconfig-1156948534 +STEP: Building a namespace api object, basename crd-watch +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[It] watch on custom resource definition objects [Conformance] + test/e2e/framework/framework.go:652 +Sep 7 08:43:12.352: INFO: >>> kubeConfig: /tmp/kubeconfig-1156948534 +STEP: Creating first CR +Sep 7 08:43:14.988: INFO: Got : ADDED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2022-09-07T08:43:14Z generation:1 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2022-09-07T08:43:14Z]] name:name1 resourceVersion:22866 
uid:03c7239d-de76-44c2-a1d1-8dbd21ff1ce8] num:map[num1:9223372036854775807 num2:1000000]]} +STEP: Creating second CR +Sep 7 08:43:25.005: INFO: Got : ADDED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2022-09-07T08:43:25Z generation:1 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2022-09-07T08:43:25Z]] name:name2 resourceVersion:22899 uid:2a319b5a-7e56-4c80-ac4b-cc4b28610c88] num:map[num1:9223372036854775807 num2:1000000]]} +STEP: Modifying first CR +Sep 7 08:43:35.018: INFO: Got : MODIFIED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2022-09-07T08:43:14Z generation:2 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:dummy:map[] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2022-09-07T08:43:35Z]] name:name1 resourceVersion:22913 uid:03c7239d-de76-44c2-a1d1-8dbd21ff1ce8] num:map[num1:9223372036854775807 num2:1000000]]} +STEP: Modifying second CR +Sep 7 08:43:45.029: INFO: Got : MODIFIED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2022-09-07T08:43:25Z generation:2 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:dummy:map[] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2022-09-07T08:43:45Z]] name:name2 resourceVersion:22927 uid:2a319b5a-7e56-4c80-ac4b-cc4b28610c88] num:map[num1:9223372036854775807 num2:1000000]]} +STEP: Deleting first CR +Sep 7 08:43:55.043: INFO: Got : DELETED &{map[apiVersion:mygroup.example.com/v1beta1 
content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2022-09-07T08:43:14Z generation:2 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:dummy:map[] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2022-09-07T08:43:35Z]] name:name1 resourceVersion:22942 uid:03c7239d-de76-44c2-a1d1-8dbd21ff1ce8] num:map[num1:9223372036854775807 num2:1000000]]} +STEP: Deleting second CR +Sep 7 08:44:05.054: INFO: Got : DELETED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2022-09-07T08:43:25Z generation:2 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:dummy:map[] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2022-09-07T08:43:45Z]] name:name2 resourceVersion:22957 uid:2a319b5a-7e56-4c80-ac4b-cc4b28610c88] num:map[num1:9223372036854775807 num2:1000000]]} +[AfterEach] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] + test/e2e/framework/framework.go:188 +Sep 7 08:44:15.631: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "crd-watch-209" for this suite. 
+ +• [SLOW TEST:63.359 seconds] +[sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] +test/e2e/apimachinery/framework.go:23 + CustomResourceDefinition Watch + test/e2e/apimachinery/crd_watch.go:44 + watch on custom resource definition objects [Conformance] + test/e2e/framework/framework.go:652 +------------------------------ +{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] CustomResourceDefinition Watch watch on custom resource definition objects [Conformance]","total":356,"completed":247,"skipped":4938,"failed":0} +SSSSSSSSS +------------------------------ +[sig-network] Services + should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance] + test/e2e/framework/framework.go:652 +[BeforeEach] [sig-network] Services + test/e2e/framework/framework.go:187 +STEP: Creating a kubernetes client +Sep 7 08:44:15.643: INFO: >>> kubeConfig: /tmp/kubeconfig-1156948534 +STEP: Building a namespace api object, basename services +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[BeforeEach] [sig-network] Services + test/e2e/network/service.go:758 +[It] should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance] + test/e2e/framework/framework.go:652 +STEP: creating service in namespace services-2265 +Sep 7 08:44:15.718: INFO: The status of Pod kube-proxy-mode-detector is Pending, waiting for it to be Running (with Ready = true) +Sep 7 08:44:17.730: INFO: The status of Pod kube-proxy-mode-detector is Running (Ready = true) +Sep 7 08:44:17.735: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1156948534 --namespace=services-2265 exec kube-proxy-mode-detector -- /bin/sh -x -c curl -q -s --connect-timeout 1 http://localhost:10249/proxyMode' +Sep 7 08:44:17.938: INFO: stderr: "+ curl -q -s --connect-timeout 1 
http://localhost:10249/proxyMode\n" +Sep 7 08:44:17.938: INFO: stdout: "ipvs" +Sep 7 08:44:17.938: INFO: proxyMode: ipvs +Sep 7 08:44:17.952: INFO: Waiting for pod kube-proxy-mode-detector to disappear +Sep 7 08:44:17.955: INFO: Pod kube-proxy-mode-detector no longer exists +STEP: creating service affinity-clusterip-timeout in namespace services-2265 +STEP: creating replication controller affinity-clusterip-timeout in namespace services-2265 +I0907 08:44:17.972655 19 runners.go:193] Created replication controller with name: affinity-clusterip-timeout, namespace: services-2265, replica count: 3 +I0907 08:44:21.023862 19 runners.go:193] affinity-clusterip-timeout Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady +Sep 7 08:44:21.063: INFO: Creating new exec pod +Sep 7 08:44:24.095: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1156948534 --namespace=services-2265 exec execpod-affinitynjz9h -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip-timeout 80' +Sep 7 08:44:24.310: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 affinity-clusterip-timeout 80\nConnection to affinity-clusterip-timeout 80 port [tcp/http] succeeded!\n" +Sep 7 08:44:24.310: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request" +Sep 7 08:44:24.310: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1156948534 --namespace=services-2265 exec execpod-affinitynjz9h -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.68.226.148 80' +Sep 7 08:44:24.486: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 10.68.226.148 80\nConnection to 10.68.226.148 80 port [tcp/http] succeeded!\n" +Sep 7 08:44:24.486: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request" +Sep 7 08:44:24.486: INFO: Running '/usr/local/bin/kubectl 
--kubeconfig=/tmp/kubeconfig-1156948534 --namespace=services-2265 exec execpod-affinitynjz9h -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://10.68.226.148:80/ ; done' +Sep 7 08:44:24.812: INFO: stderr: "+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.68.226.148:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.68.226.148:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.68.226.148:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.68.226.148:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.68.226.148:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.68.226.148:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.68.226.148:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.68.226.148:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.68.226.148:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.68.226.148:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.68.226.148:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.68.226.148:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.68.226.148:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.68.226.148:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.68.226.148:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.68.226.148:80/\n" +Sep 7 08:44:24.812: INFO: stdout: "\naffinity-clusterip-timeout-fdgh9\naffinity-clusterip-timeout-fdgh9\naffinity-clusterip-timeout-fdgh9\naffinity-clusterip-timeout-fdgh9\naffinity-clusterip-timeout-fdgh9\naffinity-clusterip-timeout-fdgh9\naffinity-clusterip-timeout-fdgh9\naffinity-clusterip-timeout-fdgh9\naffinity-clusterip-timeout-fdgh9\naffinity-clusterip-timeout-fdgh9\naffinity-clusterip-timeout-fdgh9\naffinity-clusterip-timeout-fdgh9\naffinity-clusterip-timeout-fdgh9\naffinity-clusterip-timeout-fdgh9\naffinity-clusterip-timeout-fdgh9\naffinity-clusterip-timeout-fdgh9" +Sep 7 08:44:24.812: INFO: Received response from host: affinity-clusterip-timeout-fdgh9 
+Sep 7 08:44:24.812: INFO: Received response from host: affinity-clusterip-timeout-fdgh9 +Sep 7 08:44:24.812: INFO: Received response from host: affinity-clusterip-timeout-fdgh9 +Sep 7 08:44:24.812: INFO: Received response from host: affinity-clusterip-timeout-fdgh9 +Sep 7 08:44:24.812: INFO: Received response from host: affinity-clusterip-timeout-fdgh9 +Sep 7 08:44:24.812: INFO: Received response from host: affinity-clusterip-timeout-fdgh9 +Sep 7 08:44:24.812: INFO: Received response from host: affinity-clusterip-timeout-fdgh9 +Sep 7 08:44:24.812: INFO: Received response from host: affinity-clusterip-timeout-fdgh9 +Sep 7 08:44:24.812: INFO: Received response from host: affinity-clusterip-timeout-fdgh9 +Sep 7 08:44:24.812: INFO: Received response from host: affinity-clusterip-timeout-fdgh9 +Sep 7 08:44:24.812: INFO: Received response from host: affinity-clusterip-timeout-fdgh9 +Sep 7 08:44:24.812: INFO: Received response from host: affinity-clusterip-timeout-fdgh9 +Sep 7 08:44:24.812: INFO: Received response from host: affinity-clusterip-timeout-fdgh9 +Sep 7 08:44:24.812: INFO: Received response from host: affinity-clusterip-timeout-fdgh9 +Sep 7 08:44:24.812: INFO: Received response from host: affinity-clusterip-timeout-fdgh9 +Sep 7 08:44:24.812: INFO: Received response from host: affinity-clusterip-timeout-fdgh9 +Sep 7 08:44:24.812: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1156948534 --namespace=services-2265 exec execpod-affinitynjz9h -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://10.68.226.148:80/' +Sep 7 08:44:24.998: INFO: stderr: "+ curl -q -s --connect-timeout 2 http://10.68.226.148:80/\n" +Sep 7 08:44:24.998: INFO: stdout: "affinity-clusterip-timeout-fdgh9" +Sep 7 08:46:35.000: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1156948534 --namespace=services-2265 exec execpod-affinitynjz9h -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://10.68.226.148:80/' +Sep 7 08:46:35.184: INFO: stderr: "+ curl -q 
-s --connect-timeout 2 http://10.68.226.148:80/\n" +Sep 7 08:46:35.184: INFO: stdout: "affinity-clusterip-timeout-2w68l" +Sep 7 08:46:35.184: INFO: Cleaning up the exec pod +STEP: deleting ReplicationController affinity-clusterip-timeout in namespace services-2265, will wait for the garbage collector to delete the pods +Sep 7 08:46:35.277: INFO: Deleting ReplicationController affinity-clusterip-timeout took: 6.317477ms +Sep 7 08:46:35.490: INFO: Terminating ReplicationController affinity-clusterip-timeout pods took: 212.667347ms +[AfterEach] [sig-network] Services + test/e2e/framework/framework.go:188 +Sep 7 08:46:38.335: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "services-2265" for this suite. +[AfterEach] [sig-network] Services + test/e2e/network/service.go:762 + +• [SLOW TEST:142.759 seconds] +[sig-network] Services +test/e2e/network/common/framework.go:23 + should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance] + test/e2e/framework/framework.go:652 +------------------------------ +{"msg":"PASSED [sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","total":356,"completed":248,"skipped":4947,"failed":0} +[sig-node] Variable Expansion + should succeed in writing subpaths in container [Slow] [Conformance] + test/e2e/framework/framework.go:652 +[BeforeEach] [sig-node] Variable Expansion + test/e2e/framework/framework.go:187 +STEP: Creating a kubernetes client +Sep 7 08:46:38.402: INFO: >>> kubeConfig: /tmp/kubeconfig-1156948534 +STEP: Building a namespace api object, basename var-expansion +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[It] should succeed in writing subpaths in container [Slow] [Conformance] + test/e2e/framework/framework.go:652 +STEP: creating the pod +STEP: waiting for pod running +STEP: 
creating a file in subpath +Sep 7 08:46:40.460: INFO: ExecWithOptions {Command:[/bin/sh -c touch /volume_mount/mypath/foo/test.log] Namespace:var-expansion-7906 PodName:var-expansion-e270db0c-f99b-417d-99e6-fbba21271d57 ContainerName:dapi-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} +Sep 7 08:46:40.460: INFO: >>> kubeConfig: /tmp/kubeconfig-1156948534 +Sep 7 08:46:40.461: INFO: ExecWithOptions: Clientset creation +Sep 7 08:46:40.461: INFO: ExecWithOptions: execute(POST https://10.68.0.1:443/api/v1/namespaces/var-expansion-7906/pods/var-expansion-e270db0c-f99b-417d-99e6-fbba21271d57/exec?command=%2Fbin%2Fsh&command=-c&command=touch+%2Fvolume_mount%2Fmypath%2Ffoo%2Ftest.log&container=dapi-container&container=dapi-container&stderr=true&stdout=true) +STEP: test for file in mounted path +Sep 7 08:46:40.546: INFO: ExecWithOptions {Command:[/bin/sh -c test -f /subpath_mount/test.log] Namespace:var-expansion-7906 PodName:var-expansion-e270db0c-f99b-417d-99e6-fbba21271d57 ContainerName:dapi-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} +Sep 7 08:46:40.546: INFO: >>> kubeConfig: /tmp/kubeconfig-1156948534 +Sep 7 08:46:40.547: INFO: ExecWithOptions: Clientset creation +Sep 7 08:46:40.547: INFO: ExecWithOptions: execute(POST https://10.68.0.1:443/api/v1/namespaces/var-expansion-7906/pods/var-expansion-e270db0c-f99b-417d-99e6-fbba21271d57/exec?command=%2Fbin%2Fsh&command=-c&command=test+-f+%2Fsubpath_mount%2Ftest.log&container=dapi-container&container=dapi-container&stderr=true&stdout=true) +STEP: updating the annotation value +Sep 7 08:46:41.126: INFO: Successfully updated pod "var-expansion-e270db0c-f99b-417d-99e6-fbba21271d57" +STEP: waiting for annotated pod running +STEP: deleting the pod gracefully +Sep 7 08:46:41.132: INFO: Deleting pod "var-expansion-e270db0c-f99b-417d-99e6-fbba21271d57" in namespace "var-expansion-7906" +Sep 7 08:46:41.142: INFO: Wait up to 5m0s for pod 
"var-expansion-e270db0c-f99b-417d-99e6-fbba21271d57" to be fully deleted +[AfterEach] [sig-node] Variable Expansion + test/e2e/framework/framework.go:188 +Sep 7 08:47:15.158: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "var-expansion-7906" for this suite. + +• [SLOW TEST:36.767 seconds] +[sig-node] Variable Expansion +test/e2e/common/node/framework.go:23 + should succeed in writing subpaths in container [Slow] [Conformance] + test/e2e/framework/framework.go:652 +------------------------------ +{"msg":"PASSED [sig-node] Variable Expansion should succeed in writing subpaths in container [Slow] [Conformance]","total":356,"completed":249,"skipped":4947,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-storage] Projected downwardAPI + should provide container's cpu limit [NodeConformance] [Conformance] + test/e2e/framework/framework.go:652 +[BeforeEach] [sig-storage] Projected downwardAPI + test/e2e/framework/framework.go:187 +STEP: Creating a kubernetes client +Sep 7 08:47:15.170: INFO: >>> kubeConfig: /tmp/kubeconfig-1156948534 +STEP: Building a namespace api object, basename projected +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[BeforeEach] [sig-storage] Projected downwardAPI + test/e2e/common/storage/projected_downwardapi.go:43 +[It] should provide container's cpu limit [NodeConformance] [Conformance] + test/e2e/framework/framework.go:652 +STEP: Creating a pod to test downward API volume plugin +Sep 7 08:47:15.216: INFO: Waiting up to 5m0s for pod "downwardapi-volume-30aeaccc-d399-41c8-868d-547c7be17e44" in namespace "projected-6794" to be "Succeeded or Failed" +Sep 7 08:47:15.222: INFO: Pod "downwardapi-volume-30aeaccc-d399-41c8-868d-547c7be17e44": Phase="Pending", Reason="", readiness=false. 
Elapsed: 5.923023ms +Sep 7 08:47:17.233: INFO: Pod "downwardapi-volume-30aeaccc-d399-41c8-868d-547c7be17e44": Phase="Running", Reason="", readiness=true. Elapsed: 2.016372949s +Sep 7 08:47:19.246: INFO: Pod "downwardapi-volume-30aeaccc-d399-41c8-868d-547c7be17e44": Phase="Running", Reason="", readiness=false. Elapsed: 4.029470567s +Sep 7 08:47:21.253: INFO: Pod "downwardapi-volume-30aeaccc-d399-41c8-868d-547c7be17e44": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.036734534s +STEP: Saw pod success +Sep 7 08:47:21.253: INFO: Pod "downwardapi-volume-30aeaccc-d399-41c8-868d-547c7be17e44" satisfied condition "Succeeded or Failed" +Sep 7 08:47:21.256: INFO: Trying to get logs from node 172.31.51.96 pod downwardapi-volume-30aeaccc-d399-41c8-868d-547c7be17e44 container client-container: +STEP: delete the pod +Sep 7 08:47:21.295: INFO: Waiting for pod downwardapi-volume-30aeaccc-d399-41c8-868d-547c7be17e44 to disappear +Sep 7 08:47:21.307: INFO: Pod downwardapi-volume-30aeaccc-d399-41c8-868d-547c7be17e44 no longer exists +[AfterEach] [sig-storage] Projected downwardAPI + test/e2e/framework/framework.go:188 +Sep 7 08:47:21.307: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "projected-6794" for this suite. 
+ +• [SLOW TEST:6.152 seconds] +[sig-storage] Projected downwardAPI +test/e2e/common/storage/framework.go:23 + should provide container's cpu limit [NodeConformance] [Conformance] + test/e2e/framework/framework.go:652 +------------------------------ +{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu limit [NodeConformance] [Conformance]","total":356,"completed":250,"skipped":4986,"failed":0} +SSSSSSSSSSSSS +------------------------------ +[sig-node] Variable Expansion + should allow composing env vars into new env vars [NodeConformance] [Conformance] + test/e2e/framework/framework.go:652 +[BeforeEach] [sig-node] Variable Expansion + test/e2e/framework/framework.go:187 +STEP: Creating a kubernetes client +Sep 7 08:47:21.322: INFO: >>> kubeConfig: /tmp/kubeconfig-1156948534 +STEP: Building a namespace api object, basename var-expansion +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[It] should allow composing env vars into new env vars [NodeConformance] [Conformance] + test/e2e/framework/framework.go:652 +STEP: Creating a pod to test env composition +Sep 7 08:47:21.386: INFO: Waiting up to 5m0s for pod "var-expansion-9872b15b-4cd4-4363-9fe7-442c8dfe0277" in namespace "var-expansion-5453" to be "Succeeded or Failed" +Sep 7 08:47:21.401: INFO: Pod "var-expansion-9872b15b-4cd4-4363-9fe7-442c8dfe0277": Phase="Pending", Reason="", readiness=false. Elapsed: 14.667822ms +Sep 7 08:47:23.404: INFO: Pod "var-expansion-9872b15b-4cd4-4363-9fe7-442c8dfe0277": Phase="Pending", Reason="", readiness=false. Elapsed: 2.01833951s +Sep 7 08:47:25.416: INFO: Pod "var-expansion-9872b15b-4cd4-4363-9fe7-442c8dfe0277": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.030368163s +STEP: Saw pod success +Sep 7 08:47:25.416: INFO: Pod "var-expansion-9872b15b-4cd4-4363-9fe7-442c8dfe0277" satisfied condition "Succeeded or Failed" +Sep 7 08:47:25.419: INFO: Trying to get logs from node 172.31.51.96 pod var-expansion-9872b15b-4cd4-4363-9fe7-442c8dfe0277 container dapi-container: +STEP: delete the pod +Sep 7 08:47:25.442: INFO: Waiting for pod var-expansion-9872b15b-4cd4-4363-9fe7-442c8dfe0277 to disappear +Sep 7 08:47:25.449: INFO: Pod var-expansion-9872b15b-4cd4-4363-9fe7-442c8dfe0277 no longer exists +[AfterEach] [sig-node] Variable Expansion + test/e2e/framework/framework.go:188 +Sep 7 08:47:25.449: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "var-expansion-5453" for this suite. +•{"msg":"PASSED [sig-node] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance]","total":356,"completed":251,"skipped":4999,"failed":0} +SSSSSSSSS +------------------------------ +[sig-api-machinery] Aggregator + Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance] + test/e2e/framework/framework.go:652 +[BeforeEach] [sig-api-machinery] Aggregator + test/e2e/framework/framework.go:187 +STEP: Creating a kubernetes client +Sep 7 08:47:25.458: INFO: >>> kubeConfig: /tmp/kubeconfig-1156948534 +STEP: Building a namespace api object, basename aggregator +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[BeforeEach] [sig-api-machinery] Aggregator + test/e2e/apimachinery/aggregator.go:79 +Sep 7 08:47:25.489: INFO: >>> kubeConfig: /tmp/kubeconfig-1156948534 +[It] Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance] + test/e2e/framework/framework.go:652 +STEP: Registering the sample API server. 
+Sep 7 08:47:26.274: INFO: deployment "sample-apiserver-deployment" doesn't have the required revision set +Sep 7 08:47:28.353: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2022, time.September, 7, 8, 47, 26, 0, time.Local), LastTransitionTime:time.Date(2022, time.September, 7, 8, 47, 26, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2022, time.September, 7, 8, 47, 26, 0, time.Local), LastTransitionTime:time.Date(2022, time.September, 7, 8, 47, 26, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-d9646c97b\" is progressing."}}, CollisionCount:(*int32)(nil)} +Sep 7 08:47:30.387: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2022, time.September, 7, 8, 47, 26, 0, time.Local), LastTransitionTime:time.Date(2022, time.September, 7, 8, 47, 26, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2022, time.September, 7, 8, 47, 26, 0, time.Local), LastTransitionTime:time.Date(2022, time.September, 7, 8, 47, 26, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-d9646c97b\" is progressing."}}, CollisionCount:(*int32)(nil)} +Sep 7 08:47:32.369: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, 
ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2022, time.September, 7, 8, 47, 26, 0, time.Local), LastTransitionTime:time.Date(2022, time.September, 7, 8, 47, 26, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2022, time.September, 7, 8, 47, 26, 0, time.Local), LastTransitionTime:time.Date(2022, time.September, 7, 8, 47, 26, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-d9646c97b\" is progressing."}}, CollisionCount:(*int32)(nil)} +Sep 7 08:47:34.385: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2022, time.September, 7, 8, 47, 26, 0, time.Local), LastTransitionTime:time.Date(2022, time.September, 7, 8, 47, 26, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2022, time.September, 7, 8, 47, 26, 0, time.Local), LastTransitionTime:time.Date(2022, time.September, 7, 8, 47, 26, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-d9646c97b\" is progressing."}}, CollisionCount:(*int32)(nil)} +Sep 7 08:47:36.358: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2022, time.September, 7, 8, 47, 26, 0, time.Local), 
LastTransitionTime:time.Date(2022, time.September, 7, 8, 47, 26, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2022, time.September, 7, 8, 47, 26, 0, time.Local), LastTransitionTime:time.Date(2022, time.September, 7, 8, 47, 26, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-d9646c97b\" is progressing."}}, CollisionCount:(*int32)(nil)} +Sep 7 08:47:38.374: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2022, time.September, 7, 8, 47, 26, 0, time.Local), LastTransitionTime:time.Date(2022, time.September, 7, 8, 47, 26, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2022, time.September, 7, 8, 47, 26, 0, time.Local), LastTransitionTime:time.Date(2022, time.September, 7, 8, 47, 26, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-d9646c97b\" is progressing."}}, CollisionCount:(*int32)(nil)} +Sep 7 08:47:40.382: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2022, time.September, 7, 8, 47, 26, 0, time.Local), LastTransitionTime:time.Date(2022, time.September, 7, 8, 47, 26, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", 
Status:"True", LastUpdateTime:time.Date(2022, time.September, 7, 8, 47, 26, 0, time.Local), LastTransitionTime:time.Date(2022, time.September, 7, 8, 47, 26, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-d9646c97b\" is progressing."}}, CollisionCount:(*int32)(nil)} +Sep 7 08:47:42.391: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2022, time.September, 7, 8, 47, 26, 0, time.Local), LastTransitionTime:time.Date(2022, time.September, 7, 8, 47, 26, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2022, time.September, 7, 8, 47, 26, 0, time.Local), LastTransitionTime:time.Date(2022, time.September, 7, 8, 47, 26, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-d9646c97b\" is progressing."}}, CollisionCount:(*int32)(nil)} +Sep 7 08:47:44.376: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2022, time.September, 7, 8, 47, 26, 0, time.Local), LastTransitionTime:time.Date(2022, time.September, 7, 8, 47, 26, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2022, time.September, 7, 8, 47, 26, 0, time.Local), LastTransitionTime:time.Date(2022, time.September, 7, 8, 47, 26, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet 
\"sample-apiserver-deployment-d9646c97b\" is progressing."}}, CollisionCount:(*int32)(nil)} +Sep 7 08:47:46.359: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2022, time.September, 7, 8, 47, 26, 0, time.Local), LastTransitionTime:time.Date(2022, time.September, 7, 8, 47, 26, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2022, time.September, 7, 8, 47, 26, 0, time.Local), LastTransitionTime:time.Date(2022, time.September, 7, 8, 47, 26, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-d9646c97b\" is progressing."}}, CollisionCount:(*int32)(nil)} +Sep 7 08:47:48.500: INFO: Waited 127.517421ms for the sample-apiserver to be ready to handle requests. +STEP: Read Status for v1alpha1.wardle.example.com +STEP: kubectl patch apiservice v1alpha1.wardle.example.com -p '{"spec":{"versionPriority": 400}}' +STEP: List APIServices +Sep 7 08:47:48.588: INFO: Found v1alpha1.wardle.example.com in APIServiceList +[AfterEach] [sig-api-machinery] Aggregator + test/e2e/apimachinery/aggregator.go:69 +[AfterEach] [sig-api-machinery] Aggregator + test/e2e/framework/framework.go:188 +Sep 7 08:47:49.106: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "aggregator-9718" for this suite. 
+ +• [SLOW TEST:23.660 seconds] +[sig-api-machinery] Aggregator +test/e2e/apimachinery/framework.go:23 + Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance] + test/e2e/framework/framework.go:652 +------------------------------ +{"msg":"PASSED [sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","total":356,"completed":252,"skipped":5008,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-node] NoExecuteTaintManager Multiple Pods [Serial] + evicts pods with minTolerationSeconds [Disruptive] [Conformance] + test/e2e/framework/framework.go:652 +[BeforeEach] [sig-node] NoExecuteTaintManager Multiple Pods [Serial] + test/e2e/framework/framework.go:187 +STEP: Creating a kubernetes client +Sep 7 08:47:49.119: INFO: >>> kubeConfig: /tmp/kubeconfig-1156948534 +STEP: Building a namespace api object, basename taint-multiple-pods +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[BeforeEach] [sig-node] NoExecuteTaintManager Multiple Pods [Serial] + test/e2e/node/taints.go:348 +Sep 7 08:47:49.204: INFO: Waiting up to 1m0s for all nodes to be ready +Sep 7 08:48:49.229: INFO: Waiting for terminating namespaces to be deleted... +[It] evicts pods with minTolerationSeconds [Disruptive] [Conformance] + test/e2e/framework/framework.go:652 +Sep 7 08:48:49.231: INFO: Starting informer... +STEP: Starting pods... +Sep 7 08:48:49.262: INFO: Pod1 is running on 172.31.51.96. Tainting Node +Sep 7 08:48:53.500: INFO: Pod2 is running on 172.31.51.96. Tainting Node +STEP: Trying to apply a taint on the Node +STEP: verifying the node has the taint kubernetes.io/e2e-evict-taint-key=evictTaintVal:NoExecute +STEP: Waiting for Pod1 and Pod2 to be deleted +Sep 7 08:48:59.760: INFO: Noticed Pod "taint-eviction-b1" gets evicted. 
+Sep 7 08:49:19.819: INFO: Noticed Pod "taint-eviction-b2" gets evicted. +STEP: verifying the node doesn't have the taint kubernetes.io/e2e-evict-taint-key=evictTaintVal:NoExecute +[AfterEach] [sig-node] NoExecuteTaintManager Multiple Pods [Serial] + test/e2e/framework/framework.go:188 +Sep 7 08:49:19.852: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "taint-multiple-pods-7843" for this suite. + +• [SLOW TEST:90.741 seconds] +[sig-node] NoExecuteTaintManager Multiple Pods [Serial] +test/e2e/node/framework.go:23 + evicts pods with minTolerationSeconds [Disruptive] [Conformance] + test/e2e/framework/framework.go:652 +------------------------------ +{"msg":"PASSED [sig-node] NoExecuteTaintManager Multiple Pods [Serial] evicts pods with minTolerationSeconds [Disruptive] [Conformance]","total":356,"completed":253,"skipped":5054,"failed":0} +SSS +------------------------------ +[sig-api-machinery] Garbage collector + should delete pods created by rc when not orphaning [Conformance] + test/e2e/framework/framework.go:652 +[BeforeEach] [sig-api-machinery] Garbage collector + test/e2e/framework/framework.go:187 +STEP: Creating a kubernetes client +Sep 7 08:49:19.861: INFO: >>> kubeConfig: /tmp/kubeconfig-1156948534 +STEP: Building a namespace api object, basename gc +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[It] should delete pods created by rc when not orphaning [Conformance] + test/e2e/framework/framework.go:652 +STEP: create the rc +STEP: delete the rc +STEP: wait for all pods to be garbage collected +STEP: Gathering metrics +Sep 7 08:49:29.965: INFO: For apiserver_request_total: +For apiserver_request_latency_seconds: +For apiserver_init_events_total: +For garbage_collector_attempt_to_delete_queue_latency: +For garbage_collector_attempt_to_delete_work_duration: +For garbage_collector_attempt_to_orphan_queue_latency: +For 
garbage_collector_attempt_to_orphan_work_duration: +For garbage_collector_dirty_processing_latency_microseconds: +For garbage_collector_event_processing_latency_microseconds: +For garbage_collector_graph_changes_queue_latency: +For garbage_collector_graph_changes_work_duration: +For garbage_collector_orphan_processing_latency_microseconds: +For namespace_queue_latency: +For namespace_queue_latency_sum: +For namespace_queue_latency_count: +For namespace_retries: +For namespace_work_duration: +For namespace_work_duration_sum: +For namespace_work_duration_count: +For function_duration_seconds: +For errors_total: +For evicted_pods_total: + +[AfterEach] [sig-api-machinery] Garbage collector + test/e2e/framework/framework.go:188 +Sep 7 08:49:29.965: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +W0907 08:49:29.965464 19 metrics_grabber.go:151] Can't find kube-controller-manager pod. Grabbing metrics from kube-controller-manager is disabled. +STEP: Destroying namespace "gc-3011" for this suite. 
+ +• [SLOW TEST:10.116 seconds] +[sig-api-machinery] Garbage collector +test/e2e/apimachinery/framework.go:23 + should delete pods created by rc when not orphaning [Conformance] + test/e2e/framework/framework.go:652 +------------------------------ +{"msg":"PASSED [sig-api-machinery] Garbage collector should delete pods created by rc when not orphaning [Conformance]","total":356,"completed":254,"skipped":5057,"failed":0} +S +------------------------------ +[sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] + should have a working scale subresource [Conformance] + test/e2e/framework/framework.go:652 +[BeforeEach] [sig-apps] StatefulSet + test/e2e/framework/framework.go:187 +STEP: Creating a kubernetes client +Sep 7 08:49:29.977: INFO: >>> kubeConfig: /tmp/kubeconfig-1156948534 +STEP: Building a namespace api object, basename statefulset +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[BeforeEach] [sig-apps] StatefulSet + test/e2e/apps/statefulset.go:96 +[BeforeEach] Basic StatefulSet functionality [StatefulSetBasic] + test/e2e/apps/statefulset.go:111 +STEP: Creating service test in namespace statefulset-7930 +[It] should have a working scale subresource [Conformance] + test/e2e/framework/framework.go:652 +STEP: Creating statefulset ss in namespace statefulset-7930 +Sep 7 08:49:30.034: INFO: Found 0 stateful pods, waiting for 1 +Sep 7 08:49:40.048: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true +STEP: getting scale subresource +STEP: updating a scale subresource +STEP: verifying the statefulset Spec.Replicas was modified +STEP: Patch a scale subresource +STEP: verifying the statefulset Spec.Replicas was modified +[AfterEach] Basic StatefulSet functionality [StatefulSetBasic] + test/e2e/apps/statefulset.go:122 +Sep 7 08:49:40.087: INFO: Deleting all statefulset in ns statefulset-7930 +Sep 7 08:49:40.095: INFO: 
Scaling statefulset ss to 0 +Sep 7 08:49:50.134: INFO: Waiting for statefulset status.replicas updated to 0 +Sep 7 08:49:50.137: INFO: Deleting statefulset ss +[AfterEach] [sig-apps] StatefulSet + test/e2e/framework/framework.go:188 +Sep 7 08:49:50.171: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "statefulset-7930" for this suite. + +• [SLOW TEST:20.225 seconds] +[sig-apps] StatefulSet +test/e2e/apps/framework.go:23 + Basic StatefulSet functionality [StatefulSetBasic] + test/e2e/apps/statefulset.go:101 + should have a working scale subresource [Conformance] + test/e2e/framework/framework.go:652 +------------------------------ +{"msg":"PASSED [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should have a working scale subresource [Conformance]","total":356,"completed":255,"skipped":5058,"failed":0} +[sig-node] Container Runtime blackbox test on terminated container + should report termination message as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] + test/e2e/framework/framework.go:652 +[BeforeEach] [sig-node] Container Runtime + test/e2e/framework/framework.go:187 +STEP: Creating a kubernetes client +Sep 7 08:49:50.201: INFO: >>> kubeConfig: /tmp/kubeconfig-1156948534 +STEP: Building a namespace api object, basename container-runtime +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[It] should report termination message as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] + test/e2e/framework/framework.go:652 +STEP: create the container +STEP: wait for the container to reach Succeeded +STEP: get the container status +STEP: the container should be terminated +STEP: the termination message should be set +Sep 7 08:49:54.356: INFO: Expected: &{} to match Container's Termination 
Message: -- +STEP: delete the container +[AfterEach] [sig-node] Container Runtime + test/e2e/framework/framework.go:188 +Sep 7 08:49:54.375: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "container-runtime-4400" for this suite. +•{"msg":"PASSED [sig-node] Container Runtime blackbox test on terminated container should report termination message as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":356,"completed":256,"skipped":5058,"failed":0} +SSSSSSSSS +------------------------------ +[sig-network] Proxy version v1 + should proxy through a service and a pod [Conformance] + test/e2e/framework/framework.go:652 +[BeforeEach] version v1 + test/e2e/framework/framework.go:187 +STEP: Creating a kubernetes client +Sep 7 08:49:54.387: INFO: >>> kubeConfig: /tmp/kubeconfig-1156948534 +STEP: Building a namespace api object, basename proxy +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[It] should proxy through a service and a pod [Conformance] + test/e2e/framework/framework.go:652 +STEP: starting an echo server on multiple ports +STEP: creating replication controller proxy-service-dgds7 in namespace proxy-9005 +I0907 08:49:54.465679 19 runners.go:193] Created replication controller with name: proxy-service-dgds7, namespace: proxy-9005, replica count: 1 +I0907 08:49:55.523674 19 runners.go:193] proxy-service-dgds7 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady +I0907 08:49:56.524580 19 runners.go:193] proxy-service-dgds7 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady +I0907 08:49:57.527601 19 runners.go:193] proxy-service-dgds7 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 
runningButNotReady +I0907 08:49:58.528523 19 runners.go:193] proxy-service-dgds7 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady +I0907 08:49:59.532416 19 runners.go:193] proxy-service-dgds7 Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady +Sep 7 08:49:59.557: INFO: setup took 5.11052557s, starting test cases +STEP: running 16 cases, 20 attempts per case, 320 total attempts +Sep 7 08:49:59.581: INFO: (0) /api/v1/namespaces/proxy-9005/services/proxy-service-dgds7:portname2/proxy/: bar (200; 22.8424ms) +Sep 7 08:49:59.581: INFO: (0) /api/v1/namespaces/proxy-9005/pods/proxy-service-dgds7-v9cql/proxy/: test (200; 23.372326ms) +Sep 7 08:49:59.581: INFO: (0) /api/v1/namespaces/proxy-9005/pods/proxy-service-dgds7-v9cql:1080/proxy/: test<... (200; 23.664523ms) +Sep 7 08:49:59.598: INFO: (0) /api/v1/namespaces/proxy-9005/pods/http:proxy-service-dgds7-v9cql:160/proxy/: foo (200; 40.781233ms) +Sep 7 08:49:59.602: INFO: (0) /api/v1/namespaces/proxy-9005/pods/https:proxy-service-dgds7-v9cql:460/proxy/: tls baz (200; 44.680556ms) +Sep 7 08:49:59.602: INFO: (0) /api/v1/namespaces/proxy-9005/services/https:proxy-service-dgds7:tlsportname1/proxy/: tls baz (200; 44.973641ms) +Sep 7 08:49:59.604: INFO: (0) /api/v1/namespaces/proxy-9005/services/http:proxy-service-dgds7:portname1/proxy/: foo (200; 46.381661ms) +Sep 7 08:49:59.604: INFO: (0) /api/v1/namespaces/proxy-9005/pods/proxy-service-dgds7-v9cql:160/proxy/: foo (200; 46.469942ms) +Sep 7 08:49:59.604: INFO: (0) /api/v1/namespaces/proxy-9005/services/https:proxy-service-dgds7:tlsportname2/proxy/: tls qux (200; 46.321702ms) +Sep 7 08:49:59.604: INFO: (0) /api/v1/namespaces/proxy-9005/pods/proxy-service-dgds7-v9cql:162/proxy/: bar (200; 45.947226ms) +Sep 7 08:49:59.604: INFO: (0) /api/v1/namespaces/proxy-9005/pods/https:proxy-service-dgds7-v9cql:443/proxy/: ... 
(200; 51.19337ms) +Sep 7 08:49:59.623: INFO: (1) /api/v1/namespaces/proxy-9005/pods/http:proxy-service-dgds7-v9cql:162/proxy/: bar (200; 13.651081ms) +Sep 7 08:49:59.628: INFO: (1) /api/v1/namespaces/proxy-9005/pods/https:proxy-service-dgds7-v9cql:460/proxy/: tls baz (200; 18.508788ms) +Sep 7 08:49:59.630: INFO: (1) /api/v1/namespaces/proxy-9005/pods/proxy-service-dgds7-v9cql:162/proxy/: bar (200; 20.027418ms) +Sep 7 08:49:59.633: INFO: (1) /api/v1/namespaces/proxy-9005/pods/http:proxy-service-dgds7-v9cql:160/proxy/: foo (200; 24.097901ms) +Sep 7 08:49:59.633: INFO: (1) /api/v1/namespaces/proxy-9005/pods/proxy-service-dgds7-v9cql:1080/proxy/: test<... (200; 23.932291ms) +Sep 7 08:49:59.633: INFO: (1) /api/v1/namespaces/proxy-9005/pods/proxy-service-dgds7-v9cql:160/proxy/: foo (200; 23.567247ms) +Sep 7 08:49:59.633: INFO: (1) /api/v1/namespaces/proxy-9005/pods/https:proxy-service-dgds7-v9cql:462/proxy/: tls qux (200; 24.042499ms) +Sep 7 08:49:59.633: INFO: (1) /api/v1/namespaces/proxy-9005/pods/https:proxy-service-dgds7-v9cql:443/proxy/: ... 
(200; 25.964851ms) +Sep 7 08:49:59.636: INFO: (1) /api/v1/namespaces/proxy-9005/services/https:proxy-service-dgds7:tlsportname1/proxy/: tls baz (200; 25.760648ms) +Sep 7 08:49:59.638: INFO: (1) /api/v1/namespaces/proxy-9005/services/http:proxy-service-dgds7:portname2/proxy/: bar (200; 27.898311ms) +Sep 7 08:49:59.638: INFO: (1) /api/v1/namespaces/proxy-9005/services/proxy-service-dgds7:portname2/proxy/: bar (200; 27.624885ms) +Sep 7 08:49:59.638: INFO: (1) /api/v1/namespaces/proxy-9005/services/https:proxy-service-dgds7:tlsportname2/proxy/: tls qux (200; 27.495935ms) +Sep 7 08:49:59.638: INFO: (1) /api/v1/namespaces/proxy-9005/services/http:proxy-service-dgds7:portname1/proxy/: foo (200; 27.987287ms) +Sep 7 08:49:59.638: INFO: (1) /api/v1/namespaces/proxy-9005/services/proxy-service-dgds7:portname1/proxy/: foo (200; 27.710015ms) +Sep 7 08:49:59.639: INFO: (1) /api/v1/namespaces/proxy-9005/pods/proxy-service-dgds7-v9cql/proxy/: test (200; 29.431703ms) +Sep 7 08:49:59.653: INFO: (2) /api/v1/namespaces/proxy-9005/pods/http:proxy-service-dgds7-v9cql:160/proxy/: foo (200; 13.523696ms) +Sep 7 08:49:59.653: INFO: (2) /api/v1/namespaces/proxy-9005/pods/proxy-service-dgds7-v9cql:162/proxy/: bar (200; 13.654456ms) +Sep 7 08:49:59.653: INFO: (2) /api/v1/namespaces/proxy-9005/pods/http:proxy-service-dgds7-v9cql:1080/proxy/: ... (200; 13.726555ms) +Sep 7 08:49:59.653: INFO: (2) /api/v1/namespaces/proxy-9005/pods/http:proxy-service-dgds7-v9cql:162/proxy/: bar (200; 13.521978ms) +Sep 7 08:49:59.661: INFO: (2) /api/v1/namespaces/proxy-9005/services/proxy-service-dgds7:portname2/proxy/: bar (200; 21.824056ms) +Sep 7 08:49:59.661: INFO: (2) /api/v1/namespaces/proxy-9005/pods/https:proxy-service-dgds7-v9cql:443/proxy/: test (200; 21.139781ms) +Sep 7 08:49:59.661: INFO: (2) /api/v1/namespaces/proxy-9005/pods/proxy-service-dgds7-v9cql:1080/proxy/: test<... 
(200; 21.269494ms) +Sep 7 08:49:59.661: INFO: (2) /api/v1/namespaces/proxy-9005/services/proxy-service-dgds7:portname1/proxy/: foo (200; 21.60765ms) +Sep 7 08:49:59.661: INFO: (2) /api/v1/namespaces/proxy-9005/pods/proxy-service-dgds7-v9cql:160/proxy/: foo (200; 21.12603ms) +Sep 7 08:49:59.661: INFO: (2) /api/v1/namespaces/proxy-9005/services/https:proxy-service-dgds7:tlsportname2/proxy/: tls qux (200; 21.788385ms) +Sep 7 08:49:59.661: INFO: (2) /api/v1/namespaces/proxy-9005/pods/https:proxy-service-dgds7-v9cql:460/proxy/: tls baz (200; 21.415273ms) +Sep 7 08:49:59.661: INFO: (2) /api/v1/namespaces/proxy-9005/services/https:proxy-service-dgds7:tlsportname1/proxy/: tls baz (200; 21.876142ms) +Sep 7 08:49:59.662: INFO: (2) /api/v1/namespaces/proxy-9005/services/http:proxy-service-dgds7:portname2/proxy/: bar (200; 22.364094ms) +Sep 7 08:49:59.662: INFO: (2) /api/v1/namespaces/proxy-9005/services/http:proxy-service-dgds7:portname1/proxy/: foo (200; 23.202254ms) +Sep 7 08:49:59.662: INFO: (2) /api/v1/namespaces/proxy-9005/pods/https:proxy-service-dgds7-v9cql:462/proxy/: tls qux (200; 22.520314ms) +Sep 7 08:49:59.677: INFO: (3) /api/v1/namespaces/proxy-9005/pods/http:proxy-service-dgds7-v9cql:160/proxy/: foo (200; 14.582359ms) +Sep 7 08:49:59.677: INFO: (3) /api/v1/namespaces/proxy-9005/pods/proxy-service-dgds7-v9cql/proxy/: test (200; 14.182534ms) +Sep 7 08:49:59.677: INFO: (3) /api/v1/namespaces/proxy-9005/pods/proxy-service-dgds7-v9cql:160/proxy/: foo (200; 14.140343ms) +Sep 7 08:49:59.677: INFO: (3) /api/v1/namespaces/proxy-9005/pods/http:proxy-service-dgds7-v9cql:162/proxy/: bar (200; 14.283958ms) +Sep 7 08:49:59.677: INFO: (3) /api/v1/namespaces/proxy-9005/pods/https:proxy-service-dgds7-v9cql:460/proxy/: tls baz (200; 14.467763ms) +Sep 7 08:49:59.677: INFO: (3) /api/v1/namespaces/proxy-9005/pods/https:proxy-service-dgds7-v9cql:443/proxy/: test<... 
(200; 14.373356ms) +Sep 7 08:49:59.677: INFO: (3) /api/v1/namespaces/proxy-9005/pods/https:proxy-service-dgds7-v9cql:462/proxy/: tls qux (200; 14.442266ms) +Sep 7 08:49:59.688: INFO: (3) /api/v1/namespaces/proxy-9005/services/http:proxy-service-dgds7:portname2/proxy/: bar (200; 24.356218ms) +Sep 7 08:49:59.688: INFO: (3) /api/v1/namespaces/proxy-9005/services/https:proxy-service-dgds7:tlsportname2/proxy/: tls qux (200; 24.264403ms) +Sep 7 08:49:59.688: INFO: (3) /api/v1/namespaces/proxy-9005/services/http:proxy-service-dgds7:portname1/proxy/: foo (200; 24.438863ms) +Sep 7 08:49:59.700: INFO: (3) /api/v1/namespaces/proxy-9005/pods/http:proxy-service-dgds7-v9cql:1080/proxy/: ... (200; 36.919054ms) +Sep 7 08:49:59.702: INFO: (3) /api/v1/namespaces/proxy-9005/services/proxy-service-dgds7:portname1/proxy/: foo (200; 38.161573ms) +Sep 7 08:49:59.702: INFO: (3) /api/v1/namespaces/proxy-9005/pods/proxy-service-dgds7-v9cql:162/proxy/: bar (200; 38.233091ms) +Sep 7 08:49:59.705: INFO: (3) /api/v1/namespaces/proxy-9005/services/https:proxy-service-dgds7:tlsportname1/proxy/: tls baz (200; 41.894247ms) +Sep 7 08:49:59.717: INFO: (3) /api/v1/namespaces/proxy-9005/services/proxy-service-dgds7:portname2/proxy/: bar (200; 54.614106ms) +Sep 7 08:49:59.757: INFO: (4) /api/v1/namespaces/proxy-9005/services/proxy-service-dgds7:portname1/proxy/: foo (200; 39.510372ms) +Sep 7 08:49:59.757: INFO: (4) /api/v1/namespaces/proxy-9005/services/http:proxy-service-dgds7:portname1/proxy/: foo (200; 39.839746ms) +Sep 7 08:49:59.757: INFO: (4) /api/v1/namespaces/proxy-9005/services/proxy-service-dgds7:portname2/proxy/: bar (200; 39.514646ms) +Sep 7 08:49:59.757: INFO: (4) /api/v1/namespaces/proxy-9005/pods/https:proxy-service-dgds7-v9cql:443/proxy/: ... 
(200; 41.484303ms) +Sep 7 08:49:59.759: INFO: (4) /api/v1/namespaces/proxy-9005/pods/proxy-service-dgds7-v9cql/proxy/: test (200; 41.854596ms) +Sep 7 08:49:59.759: INFO: (4) /api/v1/namespaces/proxy-9005/pods/proxy-service-dgds7-v9cql:1080/proxy/: test<... (200; 41.125302ms) +Sep 7 08:49:59.759: INFO: (4) /api/v1/namespaces/proxy-9005/pods/proxy-service-dgds7-v9cql:160/proxy/: foo (200; 41.803541ms) +Sep 7 08:49:59.759: INFO: (4) /api/v1/namespaces/proxy-9005/services/http:proxy-service-dgds7:portname2/proxy/: bar (200; 41.70786ms) +Sep 7 08:49:59.759: INFO: (4) /api/v1/namespaces/proxy-9005/pods/http:proxy-service-dgds7-v9cql:160/proxy/: foo (200; 41.366748ms) +Sep 7 08:49:59.759: INFO: (4) /api/v1/namespaces/proxy-9005/pods/proxy-service-dgds7-v9cql:162/proxy/: bar (200; 41.521251ms) +Sep 7 08:49:59.761: INFO: (4) /api/v1/namespaces/proxy-9005/pods/http:proxy-service-dgds7-v9cql:162/proxy/: bar (200; 43.439637ms) +Sep 7 08:49:59.773: INFO: (5) /api/v1/namespaces/proxy-9005/pods/https:proxy-service-dgds7-v9cql:462/proxy/: tls qux (200; 12.476587ms) +Sep 7 08:49:59.782: INFO: (5) /api/v1/namespaces/proxy-9005/services/proxy-service-dgds7:portname1/proxy/: foo (200; 21.142805ms) +Sep 7 08:49:59.782: INFO: (5) /api/v1/namespaces/proxy-9005/services/http:proxy-service-dgds7:portname1/proxy/: foo (200; 21.440379ms) +Sep 7 08:49:59.782: INFO: (5) /api/v1/namespaces/proxy-9005/services/proxy-service-dgds7:portname2/proxy/: bar (200; 21.147204ms) +Sep 7 08:49:59.787: INFO: (5) /api/v1/namespaces/proxy-9005/services/http:proxy-service-dgds7:portname2/proxy/: bar (200; 26.392461ms) +Sep 7 08:49:59.787: INFO: (5) /api/v1/namespaces/proxy-9005/services/https:proxy-service-dgds7:tlsportname1/proxy/: tls baz (200; 26.13054ms) +Sep 7 08:49:59.787: INFO: (5) /api/v1/namespaces/proxy-9005/pods/http:proxy-service-dgds7-v9cql:1080/proxy/: ... 
(200; 26.043376ms) +Sep 7 08:49:59.787: INFO: (5) /api/v1/namespaces/proxy-9005/pods/proxy-service-dgds7-v9cql:160/proxy/: foo (200; 26.317756ms) +Sep 7 08:49:59.787: INFO: (5) /api/v1/namespaces/proxy-9005/services/https:proxy-service-dgds7:tlsportname2/proxy/: tls qux (200; 26.116201ms) +Sep 7 08:49:59.798: INFO: (5) /api/v1/namespaces/proxy-9005/pods/proxy-service-dgds7-v9cql:1080/proxy/: test<... (200; 36.705821ms) +Sep 7 08:49:59.798: INFO: (5) /api/v1/namespaces/proxy-9005/pods/http:proxy-service-dgds7-v9cql:162/proxy/: bar (200; 36.677401ms) +Sep 7 08:49:59.798: INFO: (5) /api/v1/namespaces/proxy-9005/pods/http:proxy-service-dgds7-v9cql:160/proxy/: foo (200; 36.781932ms) +Sep 7 08:49:59.798: INFO: (5) /api/v1/namespaces/proxy-9005/pods/proxy-service-dgds7-v9cql/proxy/: test (200; 37.20812ms) +Sep 7 08:49:59.798: INFO: (5) /api/v1/namespaces/proxy-9005/pods/proxy-service-dgds7-v9cql:162/proxy/: bar (200; 36.850027ms) +Sep 7 08:49:59.798: INFO: (5) /api/v1/namespaces/proxy-9005/pods/https:proxy-service-dgds7-v9cql:443/proxy/: test (200; 15.62599ms) +Sep 7 08:49:59.814: INFO: (6) /api/v1/namespaces/proxy-9005/pods/proxy-service-dgds7-v9cql:162/proxy/: bar (200; 16.203882ms) +Sep 7 08:49:59.814: INFO: (6) /api/v1/namespaces/proxy-9005/pods/proxy-service-dgds7-v9cql:1080/proxy/: test<... (200; 16.007638ms) +Sep 7 08:49:59.814: INFO: (6) /api/v1/namespaces/proxy-9005/pods/http:proxy-service-dgds7-v9cql:160/proxy/: foo (200; 16.104557ms) +Sep 7 08:49:59.814: INFO: (6) /api/v1/namespaces/proxy-9005/pods/https:proxy-service-dgds7-v9cql:462/proxy/: tls qux (200; 15.816529ms) +Sep 7 08:49:59.814: INFO: (6) /api/v1/namespaces/proxy-9005/pods/https:proxy-service-dgds7-v9cql:443/proxy/: ... 
(200; 24.192436ms) +Sep 7 08:49:59.823: INFO: (6) /api/v1/namespaces/proxy-9005/services/proxy-service-dgds7:portname1/proxy/: foo (200; 24.427274ms) +Sep 7 08:49:59.823: INFO: (6) /api/v1/namespaces/proxy-9005/services/proxy-service-dgds7:portname2/proxy/: bar (200; 24.38119ms) +Sep 7 08:49:59.827: INFO: (6) /api/v1/namespaces/proxy-9005/services/https:proxy-service-dgds7:tlsportname1/proxy/: tls baz (200; 28.111681ms) +Sep 7 08:49:59.827: INFO: (6) /api/v1/namespaces/proxy-9005/pods/https:proxy-service-dgds7-v9cql:460/proxy/: tls baz (200; 28.520956ms) +Sep 7 08:49:59.827: INFO: (6) /api/v1/namespaces/proxy-9005/services/https:proxy-service-dgds7:tlsportname2/proxy/: tls qux (200; 28.085956ms) +Sep 7 08:49:59.843: INFO: (7) /api/v1/namespaces/proxy-9005/pods/https:proxy-service-dgds7-v9cql:462/proxy/: tls qux (200; 16.14793ms) +Sep 7 08:49:59.843: INFO: (7) /api/v1/namespaces/proxy-9005/pods/proxy-service-dgds7-v9cql:1080/proxy/: test<... (200; 15.470925ms) +Sep 7 08:49:59.843: INFO: (7) /api/v1/namespaces/proxy-9005/pods/http:proxy-service-dgds7-v9cql:160/proxy/: foo (200; 15.527051ms) +Sep 7 08:49:59.843: INFO: (7) /api/v1/namespaces/proxy-9005/pods/proxy-service-dgds7-v9cql:162/proxy/: bar (200; 15.581937ms) +Sep 7 08:49:59.843: INFO: (7) /api/v1/namespaces/proxy-9005/pods/http:proxy-service-dgds7-v9cql:1080/proxy/: ... (200; 15.638701ms) +Sep 7 08:49:59.843: INFO: (7) /api/v1/namespaces/proxy-9005/pods/proxy-service-dgds7-v9cql/proxy/: test (200; 15.958382ms) +Sep 7 08:49:59.843: INFO: (7) /api/v1/namespaces/proxy-9005/pods/https:proxy-service-dgds7-v9cql:443/proxy/: test<... 
(200; 15.357235ms) +Sep 7 08:49:59.866: INFO: (8) /api/v1/namespaces/proxy-9005/pods/http:proxy-service-dgds7-v9cql:160/proxy/: foo (200; 15.49114ms) +Sep 7 08:49:59.870: INFO: (8) /api/v1/namespaces/proxy-9005/pods/https:proxy-service-dgds7-v9cql:462/proxy/: tls qux (200; 19.10488ms) +Sep 7 08:49:59.876: INFO: (8) /api/v1/namespaces/proxy-9005/pods/proxy-service-dgds7-v9cql/proxy/: test (200; 24.715857ms) +Sep 7 08:49:59.876: INFO: (8) /api/v1/namespaces/proxy-9005/services/proxy-service-dgds7:portname1/proxy/: foo (200; 24.638379ms) +Sep 7 08:49:59.876: INFO: (8) /api/v1/namespaces/proxy-9005/pods/https:proxy-service-dgds7-v9cql:460/proxy/: tls baz (200; 24.994806ms) +Sep 7 08:49:59.876: INFO: (8) /api/v1/namespaces/proxy-9005/pods/proxy-service-dgds7-v9cql:160/proxy/: foo (200; 24.71918ms) +Sep 7 08:49:59.876: INFO: (8) /api/v1/namespaces/proxy-9005/services/http:proxy-service-dgds7:portname2/proxy/: bar (200; 24.843552ms) +Sep 7 08:49:59.876: INFO: (8) /api/v1/namespaces/proxy-9005/pods/https:proxy-service-dgds7-v9cql:443/proxy/: ... (200; 30.424617ms) +Sep 7 08:49:59.882: INFO: (8) /api/v1/namespaces/proxy-9005/pods/http:proxy-service-dgds7-v9cql:162/proxy/: bar (200; 31.099615ms) +Sep 7 08:49:59.909: INFO: (9) /api/v1/namespaces/proxy-9005/pods/proxy-service-dgds7-v9cql:160/proxy/: foo (200; 27.425091ms) +Sep 7 08:49:59.909: INFO: (9) /api/v1/namespaces/proxy-9005/pods/http:proxy-service-dgds7-v9cql:1080/proxy/: ... 
(200; 27.311978ms) +Sep 7 08:49:59.909: INFO: (9) /api/v1/namespaces/proxy-9005/pods/proxy-service-dgds7-v9cql:162/proxy/: bar (200; 27.25266ms) +Sep 7 08:49:59.909: INFO: (9) /api/v1/namespaces/proxy-9005/pods/http:proxy-service-dgds7-v9cql:162/proxy/: bar (200; 26.839271ms) +Sep 7 08:49:59.909: INFO: (9) /api/v1/namespaces/proxy-9005/pods/proxy-service-dgds7-v9cql/proxy/: test (200; 26.591459ms) +Sep 7 08:49:59.909: INFO: (9) /api/v1/namespaces/proxy-9005/pods/http:proxy-service-dgds7-v9cql:160/proxy/: foo (200; 27.001914ms) +Sep 7 08:49:59.909: INFO: (9) /api/v1/namespaces/proxy-9005/pods/proxy-service-dgds7-v9cql:1080/proxy/: test<... (200; 26.914767ms) +Sep 7 08:49:59.909: INFO: (9) /api/v1/namespaces/proxy-9005/pods/https:proxy-service-dgds7-v9cql:460/proxy/: tls baz (200; 26.802941ms) +Sep 7 08:49:59.910: INFO: (9) /api/v1/namespaces/proxy-9005/pods/https:proxy-service-dgds7-v9cql:462/proxy/: tls qux (200; 27.001603ms) +Sep 7 08:49:59.910: INFO: (9) /api/v1/namespaces/proxy-9005/pods/https:proxy-service-dgds7-v9cql:443/proxy/: ... (200; 23.860914ms) +Sep 7 08:49:59.936: INFO: (10) /api/v1/namespaces/proxy-9005/pods/https:proxy-service-dgds7-v9cql:443/proxy/: test<... 
(200; 26.086325ms) +Sep 7 08:49:59.936: INFO: (10) /api/v1/namespaces/proxy-9005/pods/https:proxy-service-dgds7-v9cql:460/proxy/: tls baz (200; 26.209812ms) +Sep 7 08:49:59.938: INFO: (10) /api/v1/namespaces/proxy-9005/pods/http:proxy-service-dgds7-v9cql:162/proxy/: bar (200; 28.442303ms) +Sep 7 08:49:59.938: INFO: (10) /api/v1/namespaces/proxy-9005/pods/proxy-service-dgds7-v9cql/proxy/: test (200; 28.106089ms) +Sep 7 08:49:59.940: INFO: (10) /api/v1/namespaces/proxy-9005/services/http:proxy-service-dgds7:portname1/proxy/: foo (200; 29.410728ms) +Sep 7 08:49:59.956: INFO: (11) /api/v1/namespaces/proxy-9005/pods/https:proxy-service-dgds7-v9cql:462/proxy/: tls qux (200; 15.926508ms) +Sep 7 08:49:59.956: INFO: (11) /api/v1/namespaces/proxy-9005/pods/proxy-service-dgds7-v9cql:160/proxy/: foo (200; 15.72665ms) +Sep 7 08:49:59.956: INFO: (11) /api/v1/namespaces/proxy-9005/pods/https:proxy-service-dgds7-v9cql:443/proxy/: test (200; 15.60687ms) +Sep 7 08:49:59.956: INFO: (11) /api/v1/namespaces/proxy-9005/pods/http:proxy-service-dgds7-v9cql:162/proxy/: bar (200; 15.874245ms) +Sep 7 08:49:59.956: INFO: (11) /api/v1/namespaces/proxy-9005/pods/proxy-service-dgds7-v9cql:162/proxy/: bar (200; 15.476927ms) +Sep 7 08:49:59.956: INFO: (11) /api/v1/namespaces/proxy-9005/services/https:proxy-service-dgds7:tlsportname1/proxy/: tls baz (200; 16.297651ms) +Sep 7 08:49:59.960: INFO: (11) /api/v1/namespaces/proxy-9005/pods/http:proxy-service-dgds7-v9cql:160/proxy/: foo (200; 20.443792ms) +Sep 7 08:49:59.960: INFO: (11) /api/v1/namespaces/proxy-9005/pods/https:proxy-service-dgds7-v9cql:460/proxy/: tls baz (200; 20.390122ms) +Sep 7 08:49:59.960: INFO: (11) /api/v1/namespaces/proxy-9005/pods/proxy-service-dgds7-v9cql:1080/proxy/: test<... 
(200; 20.270517ms) +Sep 7 08:49:59.963: INFO: (11) /api/v1/namespaces/proxy-9005/services/http:proxy-service-dgds7:portname2/proxy/: bar (200; 22.78725ms) +Sep 7 08:49:59.963: INFO: (11) /api/v1/namespaces/proxy-9005/pods/http:proxy-service-dgds7-v9cql:1080/proxy/: ... (200; 22.659414ms) +Sep 7 08:49:59.965: INFO: (11) /api/v1/namespaces/proxy-9005/services/https:proxy-service-dgds7:tlsportname2/proxy/: tls qux (200; 24.53263ms) +Sep 7 08:49:59.965: INFO: (11) /api/v1/namespaces/proxy-9005/services/proxy-service-dgds7:portname1/proxy/: foo (200; 24.421546ms) +Sep 7 08:49:59.965: INFO: (11) /api/v1/namespaces/proxy-9005/services/http:proxy-service-dgds7:portname1/proxy/: foo (200; 24.720313ms) +Sep 7 08:49:59.965: INFO: (11) /api/v1/namespaces/proxy-9005/services/proxy-service-dgds7:portname2/proxy/: bar (200; 24.540445ms) +Sep 7 08:49:59.982: INFO: (12) /api/v1/namespaces/proxy-9005/pods/proxy-service-dgds7-v9cql:162/proxy/: bar (200; 16.140151ms) +Sep 7 08:49:59.982: INFO: (12) /api/v1/namespaces/proxy-9005/pods/http:proxy-service-dgds7-v9cql:160/proxy/: foo (200; 17.018298ms) +Sep 7 08:49:59.982: INFO: (12) /api/v1/namespaces/proxy-9005/pods/https:proxy-service-dgds7-v9cql:462/proxy/: tls qux (200; 16.670612ms) +Sep 7 08:49:59.982: INFO: (12) /api/v1/namespaces/proxy-9005/pods/http:proxy-service-dgds7-v9cql:162/proxy/: bar (200; 16.845793ms) +Sep 7 08:49:59.988: INFO: (12) /api/v1/namespaces/proxy-9005/services/proxy-service-dgds7:portname1/proxy/: foo (200; 22.416074ms) +Sep 7 08:49:59.994: INFO: (12) /api/v1/namespaces/proxy-9005/pods/http:proxy-service-dgds7-v9cql:1080/proxy/: ... (200; 27.896823ms) +Sep 7 08:49:59.994: INFO: (12) /api/v1/namespaces/proxy-9005/pods/proxy-service-dgds7-v9cql:160/proxy/: foo (200; 28.154862ms) +Sep 7 08:49:59.994: INFO: (12) /api/v1/namespaces/proxy-9005/pods/proxy-service-dgds7-v9cql/proxy/: test (200; 28.215522ms) +Sep 7 08:49:59.994: INFO: (12) /api/v1/namespaces/proxy-9005/pods/proxy-service-dgds7-v9cql:1080/proxy/: test<... 
(200; 28.613201ms) +Sep 7 08:49:59.994: INFO: (12) /api/v1/namespaces/proxy-9005/services/https:proxy-service-dgds7:tlsportname2/proxy/: tls qux (200; 28.01064ms) +Sep 7 08:49:59.994: INFO: (12) /api/v1/namespaces/proxy-9005/services/proxy-service-dgds7:portname2/proxy/: bar (200; 28.119474ms) +Sep 7 08:49:59.994: INFO: (12) /api/v1/namespaces/proxy-9005/services/http:proxy-service-dgds7:portname2/proxy/: bar (200; 28.320497ms) +Sep 7 08:49:59.994: INFO: (12) /api/v1/namespaces/proxy-9005/pods/https:proxy-service-dgds7-v9cql:460/proxy/: tls baz (200; 28.479804ms) +Sep 7 08:49:59.994: INFO: (12) /api/v1/namespaces/proxy-9005/services/https:proxy-service-dgds7:tlsportname1/proxy/: tls baz (200; 28.107483ms) +Sep 7 08:49:59.994: INFO: (12) /api/v1/namespaces/proxy-9005/pods/https:proxy-service-dgds7-v9cql:443/proxy/: test<... (200; 15.855186ms) +Sep 7 08:50:00.017: INFO: (13) /api/v1/namespaces/proxy-9005/pods/https:proxy-service-dgds7-v9cql:462/proxy/: tls qux (200; 22.890405ms) +Sep 7 08:50:00.017: INFO: (13) /api/v1/namespaces/proxy-9005/pods/proxy-service-dgds7-v9cql/proxy/: test (200; 22.590705ms) +Sep 7 08:50:00.017: INFO: (13) /api/v1/namespaces/proxy-9005/pods/http:proxy-service-dgds7-v9cql:160/proxy/: foo (200; 23.091622ms) +Sep 7 08:50:00.017: INFO: (13) /api/v1/namespaces/proxy-9005/pods/proxy-service-dgds7-v9cql:160/proxy/: foo (200; 22.762755ms) +Sep 7 08:50:00.017: INFO: (13) /api/v1/namespaces/proxy-9005/pods/http:proxy-service-dgds7-v9cql:162/proxy/: bar (200; 22.872507ms) +Sep 7 08:50:00.017: INFO: (13) /api/v1/namespaces/proxy-9005/pods/https:proxy-service-dgds7-v9cql:443/proxy/: ... 
(200; 22.576596ms) +Sep 7 08:50:00.021: INFO: (13) /api/v1/namespaces/proxy-9005/services/proxy-service-dgds7:portname2/proxy/: bar (200; 26.005431ms) +Sep 7 08:50:00.021: INFO: (13) /api/v1/namespaces/proxy-9005/services/proxy-service-dgds7:portname1/proxy/: foo (200; 26.145516ms) +Sep 7 08:50:00.021: INFO: (13) /api/v1/namespaces/proxy-9005/services/http:proxy-service-dgds7:portname2/proxy/: bar (200; 26.363874ms) +Sep 7 08:50:00.021: INFO: (13) /api/v1/namespaces/proxy-9005/services/https:proxy-service-dgds7:tlsportname2/proxy/: tls qux (200; 26.287807ms) +Sep 7 08:50:00.021: INFO: (13) /api/v1/namespaces/proxy-9005/services/https:proxy-service-dgds7:tlsportname1/proxy/: tls baz (200; 26.953456ms) +Sep 7 08:50:00.021: INFO: (13) /api/v1/namespaces/proxy-9005/services/http:proxy-service-dgds7:portname1/proxy/: foo (200; 26.45319ms) +Sep 7 08:50:00.023: INFO: (13) /api/v1/namespaces/proxy-9005/pods/proxy-service-dgds7-v9cql:162/proxy/: bar (200; 28.199058ms) +Sep 7 08:50:00.039: INFO: (14) /api/v1/namespaces/proxy-9005/pods/http:proxy-service-dgds7-v9cql:1080/proxy/: ... (200; 15.130449ms) +Sep 7 08:50:00.039: INFO: (14) /api/v1/namespaces/proxy-9005/pods/http:proxy-service-dgds7-v9cql:160/proxy/: foo (200; 14.973008ms) +Sep 7 08:50:00.040: INFO: (14) /api/v1/namespaces/proxy-9005/pods/https:proxy-service-dgds7-v9cql:443/proxy/: test (200; 16.749545ms) +Sep 7 08:50:00.040: INFO: (14) /api/v1/namespaces/proxy-9005/pods/https:proxy-service-dgds7-v9cql:462/proxy/: tls qux (200; 16.081557ms) +Sep 7 08:50:00.044: INFO: (14) /api/v1/namespaces/proxy-9005/pods/http:proxy-service-dgds7-v9cql:162/proxy/: bar (200; 21.199626ms) +Sep 7 08:50:00.044: INFO: (14) /api/v1/namespaces/proxy-9005/pods/proxy-service-dgds7-v9cql:1080/proxy/: test<... 
(200; 19.983574ms) +Sep 7 08:50:00.044: INFO: (14) /api/v1/namespaces/proxy-9005/services/http:proxy-service-dgds7:portname1/proxy/: foo (200; 20.98174ms) +Sep 7 08:50:00.044: INFO: (14) /api/v1/namespaces/proxy-9005/services/proxy-service-dgds7:portname2/proxy/: bar (200; 20.642187ms) +Sep 7 08:50:00.044: INFO: (14) /api/v1/namespaces/proxy-9005/services/proxy-service-dgds7:portname1/proxy/: foo (200; 20.69643ms) +Sep 7 08:50:00.044: INFO: (14) /api/v1/namespaces/proxy-9005/services/https:proxy-service-dgds7:tlsportname1/proxy/: tls baz (200; 20.90658ms) +Sep 7 08:50:00.044: INFO: (14) /api/v1/namespaces/proxy-9005/services/https:proxy-service-dgds7:tlsportname2/proxy/: tls qux (200; 20.866261ms) +Sep 7 08:50:00.044: INFO: (14) /api/v1/namespaces/proxy-9005/pods/https:proxy-service-dgds7-v9cql:460/proxy/: tls baz (200; 20.542114ms) +Sep 7 08:50:00.044: INFO: (14) /api/v1/namespaces/proxy-9005/services/http:proxy-service-dgds7:portname2/proxy/: bar (200; 20.987106ms) +Sep 7 08:50:00.067: INFO: (15) /api/v1/namespaces/proxy-9005/pods/proxy-service-dgds7-v9cql:1080/proxy/: test<... (200; 21.01282ms) +Sep 7 08:50:00.067: INFO: (15) /api/v1/namespaces/proxy-9005/pods/proxy-service-dgds7-v9cql/proxy/: test (200; 22.162088ms) +Sep 7 08:50:00.067: INFO: (15) /api/v1/namespaces/proxy-9005/pods/proxy-service-dgds7-v9cql:160/proxy/: foo (200; 22.104732ms) +Sep 7 08:50:00.067: INFO: (15) /api/v1/namespaces/proxy-9005/pods/http:proxy-service-dgds7-v9cql:1080/proxy/: ... (200; 21.853082ms) +Sep 7 08:50:00.067: INFO: (15) /api/v1/namespaces/proxy-9005/pods/https:proxy-service-dgds7-v9cql:460/proxy/: tls baz (200; 21.237411ms) +Sep 7 08:50:00.067: INFO: (15) /api/v1/namespaces/proxy-9005/pods/proxy-service-dgds7-v9cql:162/proxy/: bar (200; 21.518818ms) +Sep 7 08:50:00.067: INFO: (15) /api/v1/namespaces/proxy-9005/pods/https:proxy-service-dgds7-v9cql:443/proxy/: test<... 
(200; 19.342425ms) +Sep 7 08:50:00.099: INFO: (16) /api/v1/namespaces/proxy-9005/pods/http:proxy-service-dgds7-v9cql:162/proxy/: bar (200; 28.603457ms) +Sep 7 08:50:00.099: INFO: (16) /api/v1/namespaces/proxy-9005/pods/proxy-service-dgds7-v9cql/proxy/: test (200; 29.197495ms) +Sep 7 08:50:00.099: INFO: (16) /api/v1/namespaces/proxy-9005/pods/http:proxy-service-dgds7-v9cql:160/proxy/: foo (200; 28.802577ms) +Sep 7 08:50:00.099: INFO: (16) /api/v1/namespaces/proxy-9005/pods/http:proxy-service-dgds7-v9cql:1080/proxy/: ... (200; 29.12472ms) +Sep 7 08:50:00.099: INFO: (16) /api/v1/namespaces/proxy-9005/services/http:proxy-service-dgds7:portname2/proxy/: bar (200; 29.313249ms) +Sep 7 08:50:00.099: INFO: (16) /api/v1/namespaces/proxy-9005/services/proxy-service-dgds7:portname1/proxy/: foo (200; 29.04642ms) +Sep 7 08:50:00.099: INFO: (16) /api/v1/namespaces/proxy-9005/pods/https:proxy-service-dgds7-v9cql:462/proxy/: tls qux (200; 28.814389ms) +Sep 7 08:50:00.099: INFO: (16) /api/v1/namespaces/proxy-9005/services/https:proxy-service-dgds7:tlsportname2/proxy/: tls qux (200; 28.989408ms) +Sep 7 08:50:00.099: INFO: (16) /api/v1/namespaces/proxy-9005/services/proxy-service-dgds7:portname2/proxy/: bar (200; 29.10131ms) +Sep 7 08:50:00.099: INFO: (16) /api/v1/namespaces/proxy-9005/pods/https:proxy-service-dgds7-v9cql:443/proxy/: ... (200; 30.922095ms) +Sep 7 08:50:00.131: INFO: (17) /api/v1/namespaces/proxy-9005/pods/proxy-service-dgds7-v9cql:1080/proxy/: test<... 
(200; 31.361164ms) +Sep 7 08:50:00.131: INFO: (17) /api/v1/namespaces/proxy-9005/pods/proxy-service-dgds7-v9cql/proxy/: test (200; 31.293619ms) +Sep 7 08:50:00.131: INFO: (17) /api/v1/namespaces/proxy-9005/services/http:proxy-service-dgds7:portname2/proxy/: bar (200; 31.142447ms) +Sep 7 08:50:00.131: INFO: (17) /api/v1/namespaces/proxy-9005/services/http:proxy-service-dgds7:portname1/proxy/: foo (200; 31.207112ms) +Sep 7 08:50:00.131: INFO: (17) /api/v1/namespaces/proxy-9005/services/https:proxy-service-dgds7:tlsportname1/proxy/: tls baz (200; 31.137988ms) +Sep 7 08:50:00.131: INFO: (17) /api/v1/namespaces/proxy-9005/pods/http:proxy-service-dgds7-v9cql:160/proxy/: foo (200; 31.792783ms) +Sep 7 08:50:00.131: INFO: (17) /api/v1/namespaces/proxy-9005/pods/proxy-service-dgds7-v9cql:160/proxy/: foo (200; 31.332469ms) +Sep 7 08:50:00.131: INFO: (17) /api/v1/namespaces/proxy-9005/pods/http:proxy-service-dgds7-v9cql:162/proxy/: bar (200; 31.433996ms) +Sep 7 08:50:00.131: INFO: (17) /api/v1/namespaces/proxy-9005/services/proxy-service-dgds7:portname2/proxy/: bar (200; 30.935219ms) +Sep 7 08:50:00.131: INFO: (17) /api/v1/namespaces/proxy-9005/services/proxy-service-dgds7:portname1/proxy/: foo (200; 30.989717ms) +Sep 7 08:50:00.131: INFO: (17) /api/v1/namespaces/proxy-9005/pods/https:proxy-service-dgds7-v9cql:443/proxy/: test<... (200; 22.163674ms) +Sep 7 08:50:00.154: INFO: (18) /api/v1/namespaces/proxy-9005/services/http:proxy-service-dgds7:portname1/proxy/: foo (200; 21.966766ms) +Sep 7 08:50:00.154: INFO: (18) /api/v1/namespaces/proxy-9005/pods/proxy-service-dgds7-v9cql/proxy/: test (200; 21.882334ms) +Sep 7 08:50:00.154: INFO: (18) /api/v1/namespaces/proxy-9005/pods/http:proxy-service-dgds7-v9cql:160/proxy/: foo (200; 22.406721ms) +Sep 7 08:50:00.154: INFO: (18) /api/v1/namespaces/proxy-9005/pods/https:proxy-service-dgds7-v9cql:443/proxy/: ... 
(200; 21.616117ms) +Sep 7 08:50:00.154: INFO: (18) /api/v1/namespaces/proxy-9005/services/proxy-service-dgds7:portname2/proxy/: bar (200; 21.771258ms) +Sep 7 08:50:00.154: INFO: (18) /api/v1/namespaces/proxy-9005/pods/https:proxy-service-dgds7-v9cql:462/proxy/: tls qux (200; 22.128873ms) +Sep 7 08:50:00.154: INFO: (18) /api/v1/namespaces/proxy-9005/services/https:proxy-service-dgds7:tlsportname2/proxy/: tls qux (200; 21.708285ms) +Sep 7 08:50:00.162: INFO: (18) /api/v1/namespaces/proxy-9005/services/https:proxy-service-dgds7:tlsportname1/proxy/: tls baz (200; 29.371527ms) +Sep 7 08:50:00.176: INFO: (19) /api/v1/namespaces/proxy-9005/pods/proxy-service-dgds7-v9cql:160/proxy/: foo (200; 13.950853ms) +Sep 7 08:50:00.184: INFO: (19) /api/v1/namespaces/proxy-9005/services/https:proxy-service-dgds7:tlsportname1/proxy/: tls baz (200; 22.273519ms) +Sep 7 08:50:00.184: INFO: (19) /api/v1/namespaces/proxy-9005/services/proxy-service-dgds7:portname2/proxy/: bar (200; 22.361065ms) +Sep 7 08:50:00.184: INFO: (19) /api/v1/namespaces/proxy-9005/services/https:proxy-service-dgds7:tlsportname2/proxy/: tls qux (200; 22.301896ms) +Sep 7 08:50:00.184: INFO: (19) /api/v1/namespaces/proxy-9005/services/proxy-service-dgds7:portname1/proxy/: foo (200; 22.440348ms) +Sep 7 08:50:00.187: INFO: (19) /api/v1/namespaces/proxy-9005/pods/http:proxy-service-dgds7-v9cql:160/proxy/: foo (200; 19.880897ms) +Sep 7 08:50:00.187: INFO: (19) /api/v1/namespaces/proxy-9005/pods/proxy-service-dgds7-v9cql:1080/proxy/: test<... (200; 19.87375ms) +Sep 7 08:50:00.187: INFO: (19) /api/v1/namespaces/proxy-9005/pods/http:proxy-service-dgds7-v9cql:1080/proxy/: ... 
(200; 25.401541ms) +Sep 7 08:50:00.187: INFO: (19) /api/v1/namespaces/proxy-9005/services/http:proxy-service-dgds7:portname2/proxy/: bar (200; 19.7346ms) +Sep 7 08:50:00.187: INFO: (19) /api/v1/namespaces/proxy-9005/pods/proxy-service-dgds7-v9cql/proxy/: test (200; 19.711635ms) +Sep 7 08:50:00.187: INFO: (19) /api/v1/namespaces/proxy-9005/pods/http:proxy-service-dgds7-v9cql:162/proxy/: bar (200; 19.886143ms) +Sep 7 08:50:00.187: INFO: (19) /api/v1/namespaces/proxy-9005/services/http:proxy-service-dgds7:portname1/proxy/: foo (200; 19.867287ms) +Sep 7 08:50:00.187: INFO: (19) /api/v1/namespaces/proxy-9005/pods/https:proxy-service-dgds7-v9cql:443/proxy/: >> kubeConfig: /tmp/kubeconfig-1156948534 +STEP: Building a namespace api object, basename webhook +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/apimachinery/webhook.go:89 +STEP: Setting up server cert +STEP: Create role binding to let webhook read extension-apiserver-authentication +STEP: Deploying the webhook pod +STEP: Wait for the deployment to be ready +Sep 7 08:50:03.895: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set +Sep 7 08:50:05.907: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2022, time.September, 7, 8, 50, 3, 0, time.Local), LastTransitionTime:time.Date(2022, time.September, 7, 8, 50, 3, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2022, time.September, 7, 8, 50, 3, 0, time.Local), 
LastTransitionTime:time.Date(2022, time.September, 7, 8, 50, 3, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-68c7bd4684\" is progressing."}}, CollisionCount:(*int32)(nil)} +STEP: Deploying the webhook service +STEP: Verifying the service has paired with the endpoint +Sep 7 08:50:08.942: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 +[It] should mutate custom resource [Conformance] + test/e2e/framework/framework.go:652 +Sep 7 08:50:08.953: INFO: >>> kubeConfig: /tmp/kubeconfig-1156948534 +STEP: Registering the mutating webhook for custom resource e2e-test-webhook-3871-crds.webhook.example.com via the AdmissionRegistration API +STEP: Creating a custom resource that should be mutated by the webhook +[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/framework/framework.go:188 +Sep 7 08:50:12.124: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "webhook-1169" for this suite. +STEP: Destroying namespace "webhook-1169-markers" for this suite. 
+[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/apimachinery/webhook.go:104 + +• [SLOW TEST:9.046 seconds] +[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] +test/e2e/apimachinery/framework.go:23 + should mutate custom resource [Conformance] + test/e2e/framework/framework.go:652 +------------------------------ +{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]","total":356,"completed":258,"skipped":5073,"failed":0} +[sig-storage] ConfigMap + should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/framework/framework.go:652 +[BeforeEach] [sig-storage] ConfigMap + test/e2e/framework/framework.go:187 +STEP: Creating a kubernetes client +Sep 7 08:50:12.168: INFO: >>> kubeConfig: /tmp/kubeconfig-1156948534 +STEP: Building a namespace api object, basename configmap +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/framework/framework.go:652 +STEP: Creating configMap with name configmap-test-volume-map-67974924-7291-4378-bbce-86eabb1f8d63 +STEP: Creating a pod to test consume configMaps +Sep 7 08:50:12.280: INFO: Waiting up to 5m0s for pod "pod-configmaps-cd3561d5-4e41-41ad-aa32-1f835cca91d1" in namespace "configmap-5392" to be "Succeeded or Failed" +Sep 7 08:50:12.356: INFO: Pod "pod-configmaps-cd3561d5-4e41-41ad-aa32-1f835cca91d1": Phase="Pending", Reason="", readiness=false. Elapsed: 75.888562ms +Sep 7 08:50:14.363: INFO: Pod "pod-configmaps-cd3561d5-4e41-41ad-aa32-1f835cca91d1": Phase="Running", Reason="", readiness=true. 
Elapsed: 2.082599343s +Sep 7 08:50:16.370: INFO: Pod "pod-configmaps-cd3561d5-4e41-41ad-aa32-1f835cca91d1": Phase="Running", Reason="", readiness=false. Elapsed: 4.090031782s +Sep 7 08:50:18.392: INFO: Pod "pod-configmaps-cd3561d5-4e41-41ad-aa32-1f835cca91d1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.111614171s +STEP: Saw pod success +Sep 7 08:50:18.392: INFO: Pod "pod-configmaps-cd3561d5-4e41-41ad-aa32-1f835cca91d1" satisfied condition "Succeeded or Failed" +Sep 7 08:50:18.396: INFO: Trying to get logs from node 172.31.51.96 pod pod-configmaps-cd3561d5-4e41-41ad-aa32-1f835cca91d1 container agnhost-container: +STEP: delete the pod +Sep 7 08:50:18.454: INFO: Waiting for pod pod-configmaps-cd3561d5-4e41-41ad-aa32-1f835cca91d1 to disappear +Sep 7 08:50:18.456: INFO: Pod pod-configmaps-cd3561d5-4e41-41ad-aa32-1f835cca91d1 no longer exists +[AfterEach] [sig-storage] ConfigMap + test/e2e/framework/framework.go:188 +Sep 7 08:50:18.456: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "configmap-5392" for this suite. 
+ +• [SLOW TEST:6.294 seconds] +[sig-storage] ConfigMap +test/e2e/common/storage/framework.go:23 + should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/framework/framework.go:652 +------------------------------ +{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":356,"completed":259,"skipped":5073,"failed":0} +SSSSSSSSSSSSSS +------------------------------ +[sig-storage] EmptyDir volumes + should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/framework/framework.go:652 +[BeforeEach] [sig-storage] EmptyDir volumes + test/e2e/framework/framework.go:187 +STEP: Creating a kubernetes client +Sep 7 08:50:18.463: INFO: >>> kubeConfig: /tmp/kubeconfig-1156948534 +STEP: Building a namespace api object, basename emptydir +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[It] should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/framework/framework.go:652 +STEP: Creating a pod to test emptydir 0666 on node default medium +Sep 7 08:50:18.536: INFO: Waiting up to 5m0s for pod "pod-508257b0-2d83-4a62-9c2c-bce7eb0c3193" in namespace "emptydir-202" to be "Succeeded or Failed" +Sep 7 08:50:18.555: INFO: Pod "pod-508257b0-2d83-4a62-9c2c-bce7eb0c3193": Phase="Pending", Reason="", readiness=false. Elapsed: 19.237204ms +Sep 7 08:50:20.568: INFO: Pod "pod-508257b0-2d83-4a62-9c2c-bce7eb0c3193": Phase="Pending", Reason="", readiness=false. Elapsed: 2.032268611s +Sep 7 08:50:22.581: INFO: Pod "pod-508257b0-2d83-4a62-9c2c-bce7eb0c3193": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.045074308s +STEP: Saw pod success +Sep 7 08:50:22.581: INFO: Pod "pod-508257b0-2d83-4a62-9c2c-bce7eb0c3193" satisfied condition "Succeeded or Failed" +Sep 7 08:50:22.584: INFO: Trying to get logs from node 172.31.51.96 pod pod-508257b0-2d83-4a62-9c2c-bce7eb0c3193 container test-container: +STEP: delete the pod +Sep 7 08:50:22.599: INFO: Waiting for pod pod-508257b0-2d83-4a62-9c2c-bce7eb0c3193 to disappear +Sep 7 08:50:22.601: INFO: Pod pod-508257b0-2d83-4a62-9c2c-bce7eb0c3193 no longer exists +[AfterEach] [sig-storage] EmptyDir volumes + test/e2e/framework/framework.go:188 +Sep 7 08:50:22.601: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "emptydir-202" for this suite. +•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":356,"completed":260,"skipped":5087,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-node] Sysctls [LinuxOnly] [NodeConformance] + should reject invalid sysctls [MinimumKubeletVersion:1.21] [Conformance] + test/e2e/framework/framework.go:652 +[BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeConformance] + test/e2e/common/node/sysctl.go:37 +[BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeConformance] + test/e2e/framework/framework.go:187 +STEP: Creating a kubernetes client +Sep 7 08:50:22.607: INFO: >>> kubeConfig: /tmp/kubeconfig-1156948534 +STEP: Building a namespace api object, basename sysctl +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeConformance] + test/e2e/common/node/sysctl.go:67 +[It] should reject invalid sysctls [MinimumKubeletVersion:1.21] [Conformance] + test/e2e/framework/framework.go:652 +STEP: Creating a pod with one valid and two invalid sysctls +[AfterEach] [sig-node] Sysctls [LinuxOnly] [NodeConformance] + 
test/e2e/framework/framework.go:188 +Sep 7 08:50:22.664: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "sysctl-9352" for this suite. +•{"msg":"PASSED [sig-node] Sysctls [LinuxOnly] [NodeConformance] should reject invalid sysctls [MinimumKubeletVersion:1.21] [Conformance]","total":356,"completed":261,"skipped":5125,"failed":0} +SSSSSSSSSSSSSSSS +------------------------------ +[sig-storage] Downward API volume + should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/framework/framework.go:652 +[BeforeEach] [sig-storage] Downward API volume + test/e2e/framework/framework.go:187 +STEP: Creating a kubernetes client +Sep 7 08:50:22.672: INFO: >>> kubeConfig: /tmp/kubeconfig-1156948534 +STEP: Building a namespace api object, basename downward-api +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[BeforeEach] [sig-storage] Downward API volume + test/e2e/common/storage/downwardapi_volume.go:43 +[It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/framework/framework.go:652 +STEP: Creating a pod to test downward API volume plugin +Sep 7 08:50:22.705: INFO: Waiting up to 5m0s for pod "downwardapi-volume-488a2caa-6d74-4242-a778-922290ea574f" in namespace "downward-api-5071" to be "Succeeded or Failed" +Sep 7 08:50:22.709: INFO: Pod "downwardapi-volume-488a2caa-6d74-4242-a778-922290ea574f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.400034ms +Sep 7 08:50:24.718: INFO: Pod "downwardapi-volume-488a2caa-6d74-4242-a778-922290ea574f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013307152s +Sep 7 08:50:26.726: INFO: Pod "downwardapi-volume-488a2caa-6d74-4242-a778-922290ea574f": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.02091658s +STEP: Saw pod success +Sep 7 08:50:26.726: INFO: Pod "downwardapi-volume-488a2caa-6d74-4242-a778-922290ea574f" satisfied condition "Succeeded or Failed" +Sep 7 08:50:26.729: INFO: Trying to get logs from node 172.31.51.96 pod downwardapi-volume-488a2caa-6d74-4242-a778-922290ea574f container client-container: +STEP: delete the pod +Sep 7 08:50:26.746: INFO: Waiting for pod downwardapi-volume-488a2caa-6d74-4242-a778-922290ea574f to disappear +Sep 7 08:50:26.752: INFO: Pod downwardapi-volume-488a2caa-6d74-4242-a778-922290ea574f no longer exists +[AfterEach] [sig-storage] Downward API volume + test/e2e/framework/framework.go:188 +Sep 7 08:50:26.752: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "downward-api-5071" for this suite. +•{"msg":"PASSED [sig-storage] Downward API volume should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":356,"completed":262,"skipped":5141,"failed":0} +SS +------------------------------ +[sig-storage] EmptyDir volumes + should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/framework/framework.go:652 +[BeforeEach] [sig-storage] EmptyDir volumes + test/e2e/framework/framework.go:187 +STEP: Creating a kubernetes client +Sep 7 08:50:26.762: INFO: >>> kubeConfig: /tmp/kubeconfig-1156948534 +STEP: Building a namespace api object, basename emptydir +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[It] should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/framework/framework.go:652 +STEP: Creating a pod to test emptydir 0777 on node default medium +Sep 7 08:50:26.810: INFO: Waiting up to 5m0s for pod "pod-71387113-0400-4e20-8bbd-64e55628b35d" in namespace "emptydir-1991" to be "Succeeded or Failed" +Sep 7 08:50:26.825: INFO: Pod "pod-71387113-0400-4e20-8bbd-64e55628b35d": Phase="Pending", 
Reason="", readiness=false. Elapsed: 15.45541ms +Sep 7 08:50:28.839: INFO: Pod "pod-71387113-0400-4e20-8bbd-64e55628b35d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.029382075s +Sep 7 08:50:30.854: INFO: Pod "pod-71387113-0400-4e20-8bbd-64e55628b35d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.04430783s +Sep 7 08:50:32.869: INFO: Pod "pod-71387113-0400-4e20-8bbd-64e55628b35d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.058579044s +STEP: Saw pod success +Sep 7 08:50:32.869: INFO: Pod "pod-71387113-0400-4e20-8bbd-64e55628b35d" satisfied condition "Succeeded or Failed" +Sep 7 08:50:32.871: INFO: Trying to get logs from node 172.31.51.96 pod pod-71387113-0400-4e20-8bbd-64e55628b35d container test-container: +STEP: delete the pod +Sep 7 08:50:32.893: INFO: Waiting for pod pod-71387113-0400-4e20-8bbd-64e55628b35d to disappear +Sep 7 08:50:32.896: INFO: Pod pod-71387113-0400-4e20-8bbd-64e55628b35d no longer exists +[AfterEach] [sig-storage] EmptyDir volumes + test/e2e/framework/framework.go:188 +Sep 7 08:50:32.896: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "emptydir-1991" for this suite. 
+ +• [SLOW TEST:6.150 seconds] +[sig-storage] EmptyDir volumes +test/e2e/common/storage/framework.go:23 + should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/framework/framework.go:652 +------------------------------ +{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":356,"completed":263,"skipped":5143,"failed":0} +[sig-cli] Kubectl client Proxy server + should support --unix-socket=/path [Conformance] + test/e2e/framework/framework.go:652 +[BeforeEach] [sig-cli] Kubectl client + test/e2e/framework/framework.go:187 +STEP: Creating a kubernetes client +Sep 7 08:50:32.913: INFO: >>> kubeConfig: /tmp/kubeconfig-1156948534 +STEP: Building a namespace api object, basename kubectl +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[BeforeEach] [sig-cli] Kubectl client + test/e2e/kubectl/kubectl.go:245 +[It] should support --unix-socket=/path [Conformance] + test/e2e/framework/framework.go:652 +STEP: Starting the proxy +Sep 7 08:50:32.947: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/tmp/kubeconfig-1156948534 --namespace=kubectl-6562 proxy --unix-socket=/tmp/kubectl-proxy-unix2751732393/test' +STEP: retrieving proxy /api/ output +[AfterEach] [sig-cli] Kubectl client + test/e2e/framework/framework.go:188 +Sep 7 08:50:33.034: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "kubectl-6562" for this suite. 
+•{"msg":"PASSED [sig-cli] Kubectl client Proxy server should support --unix-socket=/path [Conformance]","total":356,"completed":264,"skipped":5143,"failed":0} +SSSS +------------------------------ +[sig-network] IngressClass API + should support creating IngressClass API operations [Conformance] + test/e2e/framework/framework.go:652 +[BeforeEach] [sig-network] IngressClass API + test/e2e/framework/framework.go:187 +STEP: Creating a kubernetes client +Sep 7 08:50:33.054: INFO: >>> kubeConfig: /tmp/kubeconfig-1156948534 +STEP: Building a namespace api object, basename ingressclass +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[BeforeEach] [sig-network] IngressClass API + test/e2e/network/ingressclass.go:188 +[It] should support creating IngressClass API operations [Conformance] + test/e2e/framework/framework.go:652 +STEP: getting /apis +STEP: getting /apis/networking.k8s.io +STEP: getting /apis/networking.k8s.iov1 +STEP: creating +STEP: getting +STEP: listing +STEP: watching +Sep 7 08:50:33.111: INFO: starting watch +STEP: patching +STEP: updating +Sep 7 08:50:33.123: INFO: waiting for watch events with expected annotations +Sep 7 08:50:33.123: INFO: saw patched and updated annotations +STEP: deleting +STEP: deleting a collection +[AfterEach] [sig-network] IngressClass API + test/e2e/framework/framework.go:188 +Sep 7 08:50:33.140: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "ingressclass-5872" for this suite. 
+•{"msg":"PASSED [sig-network] IngressClass API should support creating IngressClass API operations [Conformance]","total":356,"completed":265,"skipped":5147,"failed":0} +SSSSSSSSSSS +------------------------------ +[sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] + should list, patch and delete a collection of StatefulSets [Conformance] + test/e2e/framework/framework.go:652 +[BeforeEach] [sig-apps] StatefulSet + test/e2e/framework/framework.go:187 +STEP: Creating a kubernetes client +Sep 7 08:50:33.151: INFO: >>> kubeConfig: /tmp/kubeconfig-1156948534 +STEP: Building a namespace api object, basename statefulset +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[BeforeEach] [sig-apps] StatefulSet + test/e2e/apps/statefulset.go:96 +[BeforeEach] Basic StatefulSet functionality [StatefulSetBasic] + test/e2e/apps/statefulset.go:111 +STEP: Creating service test in namespace statefulset-4477 +[It] should list, patch and delete a collection of StatefulSets [Conformance] + test/e2e/framework/framework.go:652 +Sep 7 08:50:33.209: INFO: Found 0 stateful pods, waiting for 1 +Sep 7 08:50:43.225: INFO: Waiting for pod test-ss-0 to enter Running - Ready=true, currently Running - Ready=true +STEP: patching the StatefulSet +Sep 7 08:50:43.257: INFO: Found 1 stateful pods, waiting for 2 +Sep 7 08:50:53.273: INFO: Waiting for pod test-ss-0 to enter Running - Ready=true, currently Running - Ready=true +Sep 7 08:50:53.273: INFO: Waiting for pod test-ss-1 to enter Running - Ready=true, currently Running - Ready=true +STEP: Listing all StatefulSets +STEP: Delete all of the StatefulSets +STEP: Verify that StatefulSets have been deleted +[AfterEach] Basic StatefulSet functionality [StatefulSetBasic] + test/e2e/apps/statefulset.go:122 +Sep 7 08:50:53.296: INFO: Deleting all statefulset in ns statefulset-4477 +[AfterEach] [sig-apps] StatefulSet + 
test/e2e/framework/framework.go:188 +Sep 7 08:50:53.341: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "statefulset-4477" for this suite. + +• [SLOW TEST:20.219 seconds] +[sig-apps] StatefulSet +test/e2e/apps/framework.go:23 + Basic StatefulSet functionality [StatefulSetBasic] + test/e2e/apps/statefulset.go:101 + should list, patch and delete a collection of StatefulSets [Conformance] + test/e2e/framework/framework.go:652 +------------------------------ +{"msg":"PASSED [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should list, patch and delete a collection of StatefulSets [Conformance]","total":356,"completed":266,"skipped":5158,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-apps] DisruptionController + should update/patch PodDisruptionBudget status [Conformance] + test/e2e/framework/framework.go:652 +[BeforeEach] [sig-apps] DisruptionController + test/e2e/framework/framework.go:187 +STEP: Creating a kubernetes client +Sep 7 08:50:53.370: INFO: >>> kubeConfig: /tmp/kubeconfig-1156948534 +STEP: Building a namespace api object, basename disruption +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[BeforeEach] [sig-apps] DisruptionController + test/e2e/apps/disruption.go:71 +[It] should update/patch PodDisruptionBudget status [Conformance] + test/e2e/framework/framework.go:652 +STEP: Waiting for the pdb to be processed +STEP: Updating PodDisruptionBudget status +STEP: Waiting for all pods to be running +Sep 7 08:50:53.526: INFO: running pods: 0 < 1 +STEP: locating a running pod +STEP: Waiting for the pdb to be processed +STEP: Patching PodDisruptionBudget status +STEP: Waiting for the pdb to be processed +[AfterEach] [sig-apps] DisruptionController + test/e2e/framework/framework.go:188 +Sep 7 08:50:55.609: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: 
Destroying namespace "disruption-8765" for this suite. +•{"msg":"PASSED [sig-apps] DisruptionController should update/patch PodDisruptionBudget status [Conformance]","total":356,"completed":267,"skipped":5182,"failed":0} +SS +------------------------------ +[sig-api-machinery] Namespaces [Serial] + should patch a Namespace [Conformance] + test/e2e/framework/framework.go:652 +[BeforeEach] [sig-api-machinery] Namespaces [Serial] + test/e2e/framework/framework.go:187 +STEP: Creating a kubernetes client +Sep 7 08:50:55.631: INFO: >>> kubeConfig: /tmp/kubeconfig-1156948534 +STEP: Building a namespace api object, basename namespaces +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[It] should patch a Namespace [Conformance] + test/e2e/framework/framework.go:652 +STEP: creating a Namespace +STEP: patching the Namespace +STEP: get the Namespace and ensuring it has the label +[AfterEach] [sig-api-machinery] Namespaces [Serial] + test/e2e/framework/framework.go:188 +Sep 7 08:50:55.754: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "namespaces-5103" for this suite. +STEP: Destroying namespace "nspatchtest-87fd8525-4fe7-45a8-85af-a4a464a26385-4467" for this suite. 
+•{"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should patch a Namespace [Conformance]","total":356,"completed":268,"skipped":5184,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-storage] Secrets + should be consumable from pods in volume with mappings [NodeConformance] [Conformance] + test/e2e/framework/framework.go:652 +[BeforeEach] [sig-storage] Secrets + test/e2e/framework/framework.go:187 +STEP: Creating a kubernetes client +Sep 7 08:50:55.768: INFO: >>> kubeConfig: /tmp/kubeconfig-1156948534 +STEP: Building a namespace api object, basename secrets +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] + test/e2e/framework/framework.go:652 +STEP: Creating secret with name secret-test-map-ad05cf12-7b8a-4989-bcdc-b66bf8c19dc5 +STEP: Creating a pod to test consume secrets +Sep 7 08:50:55.814: INFO: Waiting up to 5m0s for pod "pod-secrets-9d86e002-c1d5-407c-b597-cc1c46866bc7" in namespace "secrets-2942" to be "Succeeded or Failed" +Sep 7 08:50:55.823: INFO: Pod "pod-secrets-9d86e002-c1d5-407c-b597-cc1c46866bc7": Phase="Pending", Reason="", readiness=false. Elapsed: 9.455937ms +Sep 7 08:50:57.834: INFO: Pod "pod-secrets-9d86e002-c1d5-407c-b597-cc1c46866bc7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020001365s +Sep 7 08:50:59.846: INFO: Pod "pod-secrets-9d86e002-c1d5-407c-b597-cc1c46866bc7": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.032121908s +STEP: Saw pod success +Sep 7 08:50:59.846: INFO: Pod "pod-secrets-9d86e002-c1d5-407c-b597-cc1c46866bc7" satisfied condition "Succeeded or Failed" +Sep 7 08:50:59.849: INFO: Trying to get logs from node 172.31.51.96 pod pod-secrets-9d86e002-c1d5-407c-b597-cc1c46866bc7 container secret-volume-test: +STEP: delete the pod +Sep 7 08:50:59.865: INFO: Waiting for pod pod-secrets-9d86e002-c1d5-407c-b597-cc1c46866bc7 to disappear +Sep 7 08:50:59.868: INFO: Pod pod-secrets-9d86e002-c1d5-407c-b597-cc1c46866bc7 no longer exists +[AfterEach] [sig-storage] Secrets + test/e2e/framework/framework.go:188 +Sep 7 08:50:59.868: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "secrets-2942" for this suite. +•{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":356,"completed":269,"skipped":5232,"failed":0} +SSSSSSSSSSSS +------------------------------ +[sig-api-machinery] ResourceQuota + should verify ResourceQuota with best effort scope. [Conformance] + test/e2e/framework/framework.go:652 +[BeforeEach] [sig-api-machinery] ResourceQuota + test/e2e/framework/framework.go:187 +STEP: Creating a kubernetes client +Sep 7 08:50:59.875: INFO: >>> kubeConfig: /tmp/kubeconfig-1156948534 +STEP: Building a namespace api object, basename resourcequota +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[It] should verify ResourceQuota with best effort scope. 
[Conformance] + test/e2e/framework/framework.go:652 +STEP: Creating a ResourceQuota with best effort scope +STEP: Ensuring ResourceQuota status is calculated +STEP: Creating a ResourceQuota with not best effort scope +STEP: Ensuring ResourceQuota status is calculated +STEP: Creating a best-effort pod +STEP: Ensuring resource quota with best effort scope captures the pod usage +STEP: Ensuring resource quota with not best effort ignored the pod usage +STEP: Deleting the pod +STEP: Ensuring resource quota status released the pod usage +STEP: Creating a not best-effort pod +STEP: Ensuring resource quota with not best effort scope captures the pod usage +STEP: Ensuring resource quota with best effort scope ignored the pod usage +STEP: Deleting the pod +STEP: Ensuring resource quota status released the pod usage +[AfterEach] [sig-api-machinery] ResourceQuota + test/e2e/framework/framework.go:188 +Sep 7 08:51:16.031: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "resourcequota-3031" for this suite. + +• [SLOW TEST:16.166 seconds] +[sig-api-machinery] ResourceQuota +test/e2e/apimachinery/framework.go:23 + should verify ResourceQuota with best effort scope. [Conformance] + test/e2e/framework/framework.go:652 +------------------------------ +{"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with best effort scope. 
[Conformance]","total":356,"completed":270,"skipped":5244,"failed":0} +SSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-storage] EmptyDir volumes + pod should support shared volumes between containers [Conformance] + test/e2e/framework/framework.go:652 +[BeforeEach] [sig-storage] EmptyDir volumes + test/e2e/framework/framework.go:187 +STEP: Creating a kubernetes client +Sep 7 08:51:16.041: INFO: >>> kubeConfig: /tmp/kubeconfig-1156948534 +STEP: Building a namespace api object, basename emptydir +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[It] pod should support shared volumes between containers [Conformance] + test/e2e/framework/framework.go:652 +STEP: Creating Pod +STEP: Reading file content from the nginx-container +Sep 7 08:51:20.114: INFO: ExecWithOptions {Command:[/bin/sh -c cat /usr/share/volumeshare/shareddata.txt] Namespace:emptydir-4237 PodName:pod-sharedvolume-a1688c52-b1ec-4013-93d9-5a501f6e9932 ContainerName:busybox-main-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} +Sep 7 08:51:20.114: INFO: >>> kubeConfig: /tmp/kubeconfig-1156948534 +Sep 7 08:51:20.115: INFO: ExecWithOptions: Clientset creation +Sep 7 08:51:20.115: INFO: ExecWithOptions: execute(POST https://10.68.0.1:443/api/v1/namespaces/emptydir-4237/pods/pod-sharedvolume-a1688c52-b1ec-4013-93d9-5a501f6e9932/exec?command=%2Fbin%2Fsh&command=-c&command=cat+%2Fusr%2Fshare%2Fvolumeshare%2Fshareddata.txt&container=busybox-main-container&container=busybox-main-container&stderr=true&stdout=true) +Sep 7 08:51:20.195: INFO: Exec stderr: "" +[AfterEach] [sig-storage] EmptyDir volumes + test/e2e/framework/framework.go:188 +Sep 7 08:51:20.195: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "emptydir-4237" for this suite. 
+•{"msg":"PASSED [sig-storage] EmptyDir volumes pod should support shared volumes between containers [Conformance]","total":356,"completed":271,"skipped":5264,"failed":0} +SSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-node] Container Lifecycle Hook when create a pod with lifecycle hook + should execute prestop exec hook properly [NodeConformance] [Conformance] + test/e2e/framework/framework.go:652 +[BeforeEach] [sig-node] Container Lifecycle Hook + test/e2e/framework/framework.go:187 +STEP: Creating a kubernetes client +Sep 7 08:51:20.207: INFO: >>> kubeConfig: /tmp/kubeconfig-1156948534 +STEP: Building a namespace api object, basename container-lifecycle-hook +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[BeforeEach] when create a pod with lifecycle hook + test/e2e/common/node/lifecycle_hook.go:55 +STEP: create the container to handle the HTTPGet hook request. +Sep 7 08:51:20.259: INFO: The status of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true) +Sep 7 08:51:22.269: INFO: The status of Pod pod-handle-http-request is Running (Ready = true) +[It] should execute prestop exec hook properly [NodeConformance] [Conformance] + test/e2e/framework/framework.go:652 +STEP: create the pod with lifecycle hook +Sep 7 08:51:22.285: INFO: The status of Pod pod-with-prestop-exec-hook is Pending, waiting for it to be Running (with Ready = true) +Sep 7 08:51:24.293: INFO: The status of Pod pod-with-prestop-exec-hook is Running (Ready = true) +STEP: delete the pod with lifecycle hook +Sep 7 08:51:24.304: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear +Sep 7 08:51:24.313: INFO: Pod pod-with-prestop-exec-hook still exists +Sep 7 08:51:26.314: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear +Sep 7 08:51:26.319: INFO: Pod pod-with-prestop-exec-hook still exists +Sep 7 08:51:28.314: INFO: Waiting for pod 
pod-with-prestop-exec-hook to disappear +Sep 7 08:51:28.325: INFO: Pod pod-with-prestop-exec-hook no longer exists +STEP: check prestop hook +[AfterEach] [sig-node] Container Lifecycle Hook + test/e2e/framework/framework.go:188 +Sep 7 08:51:28.346: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "container-lifecycle-hook-5280" for this suite. + +• [SLOW TEST:8.148 seconds] +[sig-node] Container Lifecycle Hook +test/e2e/common/node/framework.go:23 + when create a pod with lifecycle hook + test/e2e/common/node/lifecycle_hook.go:46 + should execute prestop exec hook properly [NodeConformance] [Conformance] + test/e2e/framework/framework.go:652 +------------------------------ +{"msg":"PASSED [sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance]","total":356,"completed":272,"skipped":5285,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition + getting/updating/patching custom resource definition status sub-resource works [Conformance] + test/e2e/framework/framework.go:652 +[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] + test/e2e/framework/framework.go:187 +STEP: Creating a kubernetes client +Sep 7 08:51:28.355: INFO: >>> kubeConfig: /tmp/kubeconfig-1156948534 +STEP: Building a namespace api object, basename custom-resource-definition +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[It] getting/updating/patching custom resource definition status sub-resource works [Conformance] + test/e2e/framework/framework.go:652 +Sep 7 08:51:28.389: INFO: >>> kubeConfig: /tmp/kubeconfig-1156948534 +[AfterEach] [sig-api-machinery] CustomResourceDefinition resources 
[Privileged:ClusterAdmin] + test/e2e/framework/framework.go:188 +Sep 7 08:51:28.943: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "custom-resource-definition-4422" for this suite. +•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition getting/updating/patching custom resource definition status sub-resource works [Conformance]","total":356,"completed":273,"skipped":5315,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-apps] ReplicationController + should serve a basic image on each replica with a public image [Conformance] + test/e2e/framework/framework.go:652 +[BeforeEach] [sig-apps] ReplicationController + test/e2e/framework/framework.go:187 +STEP: Creating a kubernetes client +Sep 7 08:51:28.955: INFO: >>> kubeConfig: /tmp/kubeconfig-1156948534 +STEP: Building a namespace api object, basename replication-controller +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[BeforeEach] [sig-apps] ReplicationController + test/e2e/apps/rc.go:56 +[It] should serve a basic image on each replica with a public image [Conformance] + test/e2e/framework/framework.go:652 +STEP: Creating replication controller my-hostname-basic-240519c3-a5f6-4e65-b4ad-a9f2f38e4fac +Sep 7 08:51:28.995: INFO: Pod name my-hostname-basic-240519c3-a5f6-4e65-b4ad-a9f2f38e4fac: Found 0 pods out of 1 +Sep 7 08:51:34.003: INFO: Pod name my-hostname-basic-240519c3-a5f6-4e65-b4ad-a9f2f38e4fac: Found 1 pods out of 1 +Sep 7 08:51:34.003: INFO: Ensuring all pods for ReplicationController "my-hostname-basic-240519c3-a5f6-4e65-b4ad-a9f2f38e4fac" are running +Sep 7 08:51:34.010: INFO: Pod "my-hostname-basic-240519c3-a5f6-4e65-b4ad-a9f2f38e4fac-c845j" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC 
LastTransitionTime:2022-09-07 08:51:29 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-07 08:51:30 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-07 08:51:30 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-07 08:51:29 +0000 UTC Reason: Message:}]) +Sep 7 08:51:34.010: INFO: Trying to dial the pod +Sep 7 08:51:39.023: INFO: Controller my-hostname-basic-240519c3-a5f6-4e65-b4ad-a9f2f38e4fac: Got expected result from replica 1 [my-hostname-basic-240519c3-a5f6-4e65-b4ad-a9f2f38e4fac-c845j]: "my-hostname-basic-240519c3-a5f6-4e65-b4ad-a9f2f38e4fac-c845j", 1 of 1 required successes so far +[AfterEach] [sig-apps] ReplicationController + test/e2e/framework/framework.go:188 +Sep 7 08:51:39.023: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "replication-controller-9646" for this suite. 
+ +• [SLOW TEST:10.079 seconds] +[sig-apps] ReplicationController +test/e2e/apps/framework.go:23 + should serve a basic image on each replica with a public image [Conformance] + test/e2e/framework/framework.go:652 +------------------------------ +{"msg":"PASSED [sig-apps] ReplicationController should serve a basic image on each replica with a public image [Conformance]","total":356,"completed":274,"skipped":5347,"failed":0} +SSSSSSSSSS +------------------------------ +[sig-node] ConfigMap + should fail to create ConfigMap with empty key [Conformance] + test/e2e/framework/framework.go:652 +[BeforeEach] [sig-node] ConfigMap + test/e2e/framework/framework.go:187 +STEP: Creating a kubernetes client +Sep 7 08:51:39.035: INFO: >>> kubeConfig: /tmp/kubeconfig-1156948534 +STEP: Building a namespace api object, basename configmap +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[It] should fail to create ConfigMap with empty key [Conformance] + test/e2e/framework/framework.go:652 +STEP: Creating configMap that has name configmap-test-emptyKey-0916f21f-9ec8-4a86-a701-d5de66c0dd81 +[AfterEach] [sig-node] ConfigMap + test/e2e/framework/framework.go:188 +Sep 7 08:51:39.106: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "configmap-2726" for this suite. 
+•{"msg":"PASSED [sig-node] ConfigMap should fail to create ConfigMap with empty key [Conformance]","total":356,"completed":275,"skipped":5357,"failed":0} +SSSSSSSSSS +------------------------------ +[sig-storage] Projected downwardAPI + should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/framework/framework.go:652 +[BeforeEach] [sig-storage] Projected downwardAPI + test/e2e/framework/framework.go:187 +STEP: Creating a kubernetes client +Sep 7 08:51:39.141: INFO: >>> kubeConfig: /tmp/kubeconfig-1156948534 +STEP: Building a namespace api object, basename projected +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[BeforeEach] [sig-storage] Projected downwardAPI + test/e2e/common/storage/projected_downwardapi.go:43 +[It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/framework/framework.go:652 +STEP: Creating a pod to test downward API volume plugin +Sep 7 08:51:39.262: INFO: Waiting up to 5m0s for pod "downwardapi-volume-a066c762-b186-4e78-9704-8d9a8d3f8bd3" in namespace "projected-6940" to be "Succeeded or Failed" +Sep 7 08:51:39.278: INFO: Pod "downwardapi-volume-a066c762-b186-4e78-9704-8d9a8d3f8bd3": Phase="Pending", Reason="", readiness=false. Elapsed: 16.073777ms +Sep 7 08:51:41.283: INFO: Pod "downwardapi-volume-a066c762-b186-4e78-9704-8d9a8d3f8bd3": Phase="Running", Reason="", readiness=true. Elapsed: 2.020718622s +Sep 7 08:51:43.317: INFO: Pod "downwardapi-volume-a066c762-b186-4e78-9704-8d9a8d3f8bd3": Phase="Running", Reason="", readiness=false. Elapsed: 4.054601213s +Sep 7 08:51:45.331: INFO: Pod "downwardapi-volume-a066c762-b186-4e78-9704-8d9a8d3f8bd3": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.0690798s +STEP: Saw pod success +Sep 7 08:51:45.331: INFO: Pod "downwardapi-volume-a066c762-b186-4e78-9704-8d9a8d3f8bd3" satisfied condition "Succeeded or Failed" +Sep 7 08:51:45.340: INFO: Trying to get logs from node 172.31.51.96 pod downwardapi-volume-a066c762-b186-4e78-9704-8d9a8d3f8bd3 container client-container: +STEP: delete the pod +Sep 7 08:51:45.374: INFO: Waiting for pod downwardapi-volume-a066c762-b186-4e78-9704-8d9a8d3f8bd3 to disappear +Sep 7 08:51:45.378: INFO: Pod downwardapi-volume-a066c762-b186-4e78-9704-8d9a8d3f8bd3 no longer exists +[AfterEach] [sig-storage] Projected downwardAPI + test/e2e/framework/framework.go:188 +Sep 7 08:51:45.378: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "projected-6940" for this suite. + +• [SLOW TEST:6.247 seconds] +[sig-storage] Projected downwardAPI +test/e2e/common/storage/framework.go:23 + should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/framework/framework.go:652 +------------------------------ +{"msg":"PASSED [sig-storage] Projected downwardAPI should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":356,"completed":276,"skipped":5367,"failed":0} +SSS +------------------------------ +[sig-node] ConfigMap + should run through a ConfigMap lifecycle [Conformance] + test/e2e/framework/framework.go:652 +[BeforeEach] [sig-node] ConfigMap + test/e2e/framework/framework.go:187 +STEP: Creating a kubernetes client +Sep 7 08:51:45.388: INFO: >>> kubeConfig: /tmp/kubeconfig-1156948534 +STEP: Building a namespace api object, basename configmap +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[It] should run through a ConfigMap lifecycle [Conformance] + test/e2e/framework/framework.go:652 +STEP: creating a ConfigMap +STEP: fetching the ConfigMap +STEP: patching the ConfigMap +STEP: listing all ConfigMaps in all 
namespaces with a label selector +STEP: deleting the ConfigMap by collection with a label selector +STEP: listing all ConfigMaps in test namespace +[AfterEach] [sig-node] ConfigMap + test/e2e/framework/framework.go:188 +Sep 7 08:51:45.514: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "configmap-9498" for this suite. +•{"msg":"PASSED [sig-node] ConfigMap should run through a ConfigMap lifecycle [Conformance]","total":356,"completed":277,"skipped":5370,"failed":0} + +------------------------------ +[sig-cli] Kubectl client Kubectl server-side dry-run + should check if kubectl can dry-run update Pods [Conformance] + test/e2e/framework/framework.go:652 +[BeforeEach] [sig-cli] Kubectl client + test/e2e/framework/framework.go:187 +STEP: Creating a kubernetes client +Sep 7 08:51:45.544: INFO: >>> kubeConfig: /tmp/kubeconfig-1156948534 +STEP: Building a namespace api object, basename kubectl +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[BeforeEach] [sig-cli] Kubectl client + test/e2e/kubectl/kubectl.go:245 +[It] should check if kubectl can dry-run update Pods [Conformance] + test/e2e/framework/framework.go:652 +STEP: running the image k8s.gcr.io/e2e-test-images/httpd:2.4.38-2 +Sep 7 08:51:45.703: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1156948534 --namespace=kubectl-4445 run e2e-test-httpd-pod --image=k8s.gcr.io/e2e-test-images/httpd:2.4.38-2 --pod-running-timeout=2m0s --labels=run=e2e-test-httpd-pod' +Sep 7 08:51:45.878: INFO: stderr: "" +Sep 7 08:51:45.878: INFO: stdout: "pod/e2e-test-httpd-pod created\n" +STEP: replace the image in the pod with server-side dry-run +Sep 7 08:51:45.878: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1156948534 --namespace=kubectl-4445 patch pod e2e-test-httpd-pod -p {"spec":{"containers":[{"name": "e2e-test-httpd-pod","image": 
"k8s.gcr.io/e2e-test-images/busybox:1.29-2"}]}} --dry-run=server' +Sep 7 08:51:49.033: INFO: stderr: "" +Sep 7 08:51:49.033: INFO: stdout: "pod/e2e-test-httpd-pod patched\n" +STEP: verifying the pod e2e-test-httpd-pod has the right image k8s.gcr.io/e2e-test-images/httpd:2.4.38-2 +Sep 7 08:51:49.037: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1156948534 --namespace=kubectl-4445 delete pods e2e-test-httpd-pod' +Sep 7 08:51:51.716: INFO: stderr: "" +Sep 7 08:51:51.716: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n" +[AfterEach] [sig-cli] Kubectl client + test/e2e/framework/framework.go:188 +Sep 7 08:51:51.716: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "kubectl-4445" for this suite. + +• [SLOW TEST:6.190 seconds] +[sig-cli] Kubectl client +test/e2e/kubectl/framework.go:23 + Kubectl server-side dry-run + test/e2e/kubectl/kubectl.go:927 + should check if kubectl can dry-run update Pods [Conformance] + test/e2e/framework/framework.go:652 +------------------------------ +{"msg":"PASSED [sig-cli] Kubectl client Kubectl server-side dry-run should check if kubectl can dry-run update Pods [Conformance]","total":356,"completed":278,"skipped":5370,"failed":0} +[sig-storage] EmptyDir volumes + should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/framework/framework.go:652 +[BeforeEach] [sig-storage] EmptyDir volumes + test/e2e/framework/framework.go:187 +STEP: Creating a kubernetes client +Sep 7 08:51:51.734: INFO: >>> kubeConfig: /tmp/kubeconfig-1156948534 +STEP: Building a namespace api object, basename emptydir +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[It] should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/framework/framework.go:652 +STEP: Creating a pod to test emptydir 0666 on tmpfs +Sep 7 08:51:51.778: INFO: Waiting up to 5m0s 
for pod "pod-28a19597-6252-494f-8e39-55b8c072b52f" in namespace "emptydir-9222" to be "Succeeded or Failed" +Sep 7 08:51:51.787: INFO: Pod "pod-28a19597-6252-494f-8e39-55b8c072b52f": Phase="Pending", Reason="", readiness=false. Elapsed: 8.918646ms +Sep 7 08:51:53.792: INFO: Pod "pod-28a19597-6252-494f-8e39-55b8c072b52f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014079745s +Sep 7 08:51:55.799: INFO: Pod "pod-28a19597-6252-494f-8e39-55b8c072b52f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.021101125s +STEP: Saw pod success +Sep 7 08:51:55.799: INFO: Pod "pod-28a19597-6252-494f-8e39-55b8c072b52f" satisfied condition "Succeeded or Failed" +Sep 7 08:51:55.801: INFO: Trying to get logs from node 172.31.51.96 pod pod-28a19597-6252-494f-8e39-55b8c072b52f container test-container: +STEP: delete the pod +Sep 7 08:51:55.816: INFO: Waiting for pod pod-28a19597-6252-494f-8e39-55b8c072b52f to disappear +Sep 7 08:51:55.819: INFO: Pod pod-28a19597-6252-494f-8e39-55b8c072b52f no longer exists +[AfterEach] [sig-storage] EmptyDir volumes + test/e2e/framework/framework.go:188 +Sep 7 08:51:55.819: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "emptydir-9222" for this suite. 
+•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":356,"completed":279,"skipped":5370,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-storage] Downward API volume + should provide container's memory limit [NodeConformance] [Conformance] + test/e2e/framework/framework.go:652 +[BeforeEach] [sig-storage] Downward API volume + test/e2e/framework/framework.go:187 +STEP: Creating a kubernetes client +Sep 7 08:51:55.826: INFO: >>> kubeConfig: /tmp/kubeconfig-1156948534 +STEP: Building a namespace api object, basename downward-api +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[BeforeEach] [sig-storage] Downward API volume + test/e2e/common/storage/downwardapi_volume.go:43 +[It] should provide container's memory limit [NodeConformance] [Conformance] + test/e2e/framework/framework.go:652 +STEP: Creating a pod to test downward API volume plugin +Sep 7 08:51:55.873: INFO: Waiting up to 5m0s for pod "downwardapi-volume-90c7021d-ebb0-4360-89f2-145b7bf2415a" in namespace "downward-api-6966" to be "Succeeded or Failed" +Sep 7 08:51:55.878: INFO: Pod "downwardapi-volume-90c7021d-ebb0-4360-89f2-145b7bf2415a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.981056ms +Sep 7 08:51:57.887: INFO: Pod "downwardapi-volume-90c7021d-ebb0-4360-89f2-145b7bf2415a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013528895s +Sep 7 08:51:59.898: INFO: Pod "downwardapi-volume-90c7021d-ebb0-4360-89f2-145b7bf2415a": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.025086728s +STEP: Saw pod success +Sep 7 08:51:59.898: INFO: Pod "downwardapi-volume-90c7021d-ebb0-4360-89f2-145b7bf2415a" satisfied condition "Succeeded or Failed" +Sep 7 08:51:59.901: INFO: Trying to get logs from node 172.31.51.96 pod downwardapi-volume-90c7021d-ebb0-4360-89f2-145b7bf2415a container client-container: +STEP: delete the pod +Sep 7 08:51:59.920: INFO: Waiting for pod downwardapi-volume-90c7021d-ebb0-4360-89f2-145b7bf2415a to disappear +Sep 7 08:51:59.926: INFO: Pod downwardapi-volume-90c7021d-ebb0-4360-89f2-145b7bf2415a no longer exists +[AfterEach] [sig-storage] Downward API volume + test/e2e/framework/framework.go:188 +Sep 7 08:51:59.926: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "downward-api-6966" for this suite. +•{"msg":"PASSED [sig-storage] Downward API volume should provide container's memory limit [NodeConformance] [Conformance]","total":356,"completed":280,"skipped":5407,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-api-machinery] Watchers + should be able to start watching from a specific resource version [Conformance] + test/e2e/framework/framework.go:652 +[BeforeEach] [sig-api-machinery] Watchers + test/e2e/framework/framework.go:187 +STEP: Creating a kubernetes client +Sep 7 08:51:59.935: INFO: >>> kubeConfig: /tmp/kubeconfig-1156948534 +STEP: Building a namespace api object, basename watch +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[It] should be able to start watching from a specific resource version [Conformance] + test/e2e/framework/framework.go:652 +STEP: creating a new configmap +STEP: modifying the configmap once +STEP: modifying the configmap a second time +STEP: deleting the configmap +STEP: creating a watch on configmaps from the resource version returned by the first update +STEP: Expecting to observe notifications for all changes to 
the configmap after the first update +Sep 7 08:52:00.003: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-resource-version watch-5933 940cbd45-c1c4-4ec5-80da-218121f17dfd 25159 0 2022-09-07 08:51:59 +0000 UTC map[watch-this-configmap:from-resource-version] map[] [] [] [{e2e.test Update v1 2022-09-07 08:51:59 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} +Sep 7 08:52:00.003: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-resource-version watch-5933 940cbd45-c1c4-4ec5-80da-218121f17dfd 25160 0 2022-09-07 08:51:59 +0000 UTC map[watch-this-configmap:from-resource-version] map[] [] [] [{e2e.test Update v1 2022-09-07 08:51:59 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} +[AfterEach] [sig-api-machinery] Watchers + test/e2e/framework/framework.go:188 +Sep 7 08:52:00.003: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "watch-5933" for this suite. 
+•{"msg":"PASSED [sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance]","total":356,"completed":281,"skipped":5436,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-network] Ingress API + should support creating Ingress API operations [Conformance] + test/e2e/framework/framework.go:652 +[BeforeEach] [sig-network] Ingress API + test/e2e/framework/framework.go:187 +STEP: Creating a kubernetes client +Sep 7 08:52:00.012: INFO: >>> kubeConfig: /tmp/kubeconfig-1156948534 +STEP: Building a namespace api object, basename ingress +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[It] should support creating Ingress API operations [Conformance] + test/e2e/framework/framework.go:652 +STEP: getting /apis +STEP: getting /apis/networking.k8s.io +STEP: getting /apis/networking.k8s.iov1 +STEP: creating +STEP: getting +STEP: listing +STEP: watching +Sep 7 08:52:00.115: INFO: starting watch +STEP: cluster-wide listing +STEP: cluster-wide watching +Sep 7 08:52:00.126: INFO: starting watch +STEP: patching +STEP: updating +Sep 7 08:52:00.141: INFO: waiting for watch events with expected annotations +Sep 7 08:52:00.141: INFO: saw patched and updated annotations +STEP: patching /status +STEP: updating /status +STEP: get /status +STEP: deleting +STEP: deleting a collection +[AfterEach] [sig-network] Ingress API + test/e2e/framework/framework.go:188 +Sep 7 08:52:00.176: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "ingress-923" for this suite. 
+•{"msg":"PASSED [sig-network] Ingress API should support creating Ingress API operations [Conformance]","total":356,"completed":282,"skipped":5468,"failed":0} +SSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-cli] Kubectl client Kubectl api-versions + should check if v1 is in available api versions [Conformance] + test/e2e/framework/framework.go:652 +[BeforeEach] [sig-cli] Kubectl client + test/e2e/framework/framework.go:187 +STEP: Creating a kubernetes client +Sep 7 08:52:00.184: INFO: >>> kubeConfig: /tmp/kubeconfig-1156948534 +STEP: Building a namespace api object, basename kubectl +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[BeforeEach] [sig-cli] Kubectl client + test/e2e/kubectl/kubectl.go:245 +[It] should check if v1 is in available api versions [Conformance] + test/e2e/framework/framework.go:652 +STEP: validating api versions +Sep 7 08:52:00.214: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1156948534 --namespace=kubectl-8705 api-versions' +Sep 7 08:52:00.312: INFO: stderr: "" +Sep 7 08:52:00.312: INFO: stdout: "admissionregistration.k8s.io/v1\napiextensions.k8s.io/v1\napiregistration.k8s.io/v1\napps/v1\nauthentication.k8s.io/v1\nauthorization.k8s.io/v1\nautoscaling/v1\nautoscaling/v2\nautoscaling/v2beta1\nautoscaling/v2beta2\nbatch/v1\nbatch/v1beta1\ncertificates.k8s.io/v1\ncoordination.k8s.io/v1\ndiscovery.k8s.io/v1\ndiscovery.k8s.io/v1beta1\nevents.k8s.io/v1\nevents.k8s.io/v1beta1\nflowcontrol.apiserver.k8s.io/v1beta1\nflowcontrol.apiserver.k8s.io/v1beta2\nmetrics.k8s.io/v1beta1\nnetworking.k8s.io/v1\nnode.k8s.io/v1\nnode.k8s.io/v1beta1\npolicy/v1\npolicy/v1beta1\nrbac.authorization.k8s.io/v1\nscheduling.k8s.io/v1\nstorage.k8s.io/v1\nstorage.k8s.io/v1beta1\nv1\n" +[AfterEach] [sig-cli] Kubectl client + test/e2e/framework/framework.go:188 +Sep 7 08:52:00.312: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: 
Destroying namespace "kubectl-8705" for this suite. +•{"msg":"PASSED [sig-cli] Kubectl client Kubectl api-versions should check if v1 is in available api versions [Conformance]","total":356,"completed":283,"skipped":5488,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-api-machinery] server version + should find the server version [Conformance] + test/e2e/framework/framework.go:652 +[BeforeEach] [sig-api-machinery] server version + test/e2e/framework/framework.go:187 +STEP: Creating a kubernetes client +Sep 7 08:52:00.323: INFO: >>> kubeConfig: /tmp/kubeconfig-1156948534 +STEP: Building a namespace api object, basename server-version +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[It] should find the server version [Conformance] + test/e2e/framework/framework.go:652 +STEP: Request ServerVersion +STEP: Confirm major version +Sep 7 08:52:00.356: INFO: Major version: 1 +STEP: Confirm minor version +Sep 7 08:52:00.356: INFO: cleanMinorVersion: 24 +Sep 7 08:52:00.356: INFO: Minor version: 24 +[AfterEach] [sig-api-machinery] server version + test/e2e/framework/framework.go:188 +Sep 7 08:52:00.356: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "server-version-5783" for this suite. 
+•{"msg":"PASSED [sig-api-machinery] server version should find the server version [Conformance]","total":356,"completed":284,"skipped":5521,"failed":0} +SSSSSSSSSSSSSSS +------------------------------ +[sig-apps] ReplicaSet + Replicaset should have a working scale subresource [Conformance] + test/e2e/framework/framework.go:652 +[BeforeEach] [sig-apps] ReplicaSet + test/e2e/framework/framework.go:187 +STEP: Creating a kubernetes client +Sep 7 08:52:00.366: INFO: >>> kubeConfig: /tmp/kubeconfig-1156948534 +STEP: Building a namespace api object, basename replicaset +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[It] Replicaset should have a working scale subresource [Conformance] + test/e2e/framework/framework.go:652 +STEP: Creating replica set "test-rs" that asks for more than the allowed pod quota +Sep 7 08:52:00.432: INFO: Pod name sample-pod: Found 1 pods out of 1 +STEP: ensuring each pod is running +STEP: getting scale subresource +STEP: updating a scale subresource +STEP: verifying the replicaset Spec.Replicas was modified +STEP: Patch a scale subresource +[AfterEach] [sig-apps] ReplicaSet + test/e2e/framework/framework.go:188 +Sep 7 08:52:02.495: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "replicaset-6576" for this suite. 
+•{"msg":"PASSED [sig-apps] ReplicaSet Replicaset should have a working scale subresource [Conformance]","total":356,"completed":285,"skipped":5536,"failed":0} +SSSSSS +------------------------------ +[sig-network] Services + should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance] + test/e2e/framework/framework.go:652 +[BeforeEach] [sig-network] Services + test/e2e/framework/framework.go:187 +STEP: Creating a kubernetes client +Sep 7 08:52:02.535: INFO: >>> kubeConfig: /tmp/kubeconfig-1156948534 +STEP: Building a namespace api object, basename services +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[BeforeEach] [sig-network] Services + test/e2e/network/service.go:758 +[It] should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance] + test/e2e/framework/framework.go:652 +STEP: creating service in namespace services-2132 +Sep 7 08:52:02.659: INFO: The status of Pod kube-proxy-mode-detector is Pending, waiting for it to be Running (with Ready = true) +Sep 7 08:52:04.667: INFO: The status of Pod kube-proxy-mode-detector is Running (Ready = true) +Sep 7 08:52:04.669: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1156948534 --namespace=services-2132 exec kube-proxy-mode-detector -- /bin/sh -x -c curl -q -s --connect-timeout 1 http://localhost:10249/proxyMode' +Sep 7 08:52:04.877: INFO: stderr: "+ curl -q -s --connect-timeout 1 http://localhost:10249/proxyMode\n" +Sep 7 08:52:04.877: INFO: stdout: "ipvs" +Sep 7 08:52:04.877: INFO: proxyMode: ipvs +Sep 7 08:52:04.890: INFO: Waiting for pod kube-proxy-mode-detector to disappear +Sep 7 08:52:04.898: INFO: Pod kube-proxy-mode-detector no longer exists +STEP: creating service affinity-nodeport-timeout in namespace services-2132 +STEP: creating replication controller affinity-nodeport-timeout in namespace services-2132 +I0907 08:52:04.933903 19 
runners.go:193] Created replication controller with name: affinity-nodeport-timeout, namespace: services-2132, replica count: 3 +I0907 08:52:07.991603 19 runners.go:193] affinity-nodeport-timeout Pods: 3 out of 3 created, 2 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady +I0907 08:52:10.992428 19 runners.go:193] affinity-nodeport-timeout Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady +Sep 7 08:52:11.011: INFO: Creating new exec pod +Sep 7 08:52:14.036: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1156948534 --namespace=services-2132 exec execpod-affinityvwj8w -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport-timeout 80' +Sep 7 08:52:14.215: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 affinity-nodeport-timeout 80\nConnection to affinity-nodeport-timeout 80 port [tcp/http] succeeded!\n" +Sep 7 08:52:14.215: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request" +Sep 7 08:52:14.215: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1156948534 --namespace=services-2132 exec execpod-affinityvwj8w -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.68.129.9 80' +Sep 7 08:52:14.417: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 10.68.129.9 80\nConnection to 10.68.129.9 80 port [tcp/http] succeeded!\n" +Sep 7 08:52:14.417: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request" +Sep 7 08:52:14.417: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1156948534 --namespace=services-2132 exec execpod-affinityvwj8w -- /bin/sh -x -c echo hostName | nc -v -t -w 2 172.31.51.96 30012' +Sep 7 08:52:14.600: INFO: stderr: "+ nc -v -t -w 2 172.31.51.96 30012\nConnection to 172.31.51.96 30012 port [tcp/*] succeeded!\n+ echo hostName\n" +Sep 7 08:52:14.600: 
INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request" +Sep 7 08:52:14.600: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1156948534 --namespace=services-2132 exec execpod-affinityvwj8w -- /bin/sh -x -c echo hostName | nc -v -t -w 2 172.31.51.97 30012' +Sep 7 08:52:14.770: INFO: stderr: "+ nc -v -t -w 2 172.31.51.97 30012\n+ echo hostName\nConnection to 172.31.51.97 30012 port [tcp/*] succeeded!\n" +Sep 7 08:52:14.770: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request" +Sep 7 08:52:14.770: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1156948534 --namespace=services-2132 exec execpod-affinityvwj8w -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://172.31.51.96:30012/ ; done' +Sep 7 08:52:15.159: INFO: stderr: "+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.31.51.96:30012/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.31.51.96:30012/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.31.51.96:30012/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.31.51.96:30012/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.31.51.96:30012/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.31.51.96:30012/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.31.51.96:30012/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.31.51.96:30012/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.31.51.96:30012/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.31.51.96:30012/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.31.51.96:30012/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.31.51.96:30012/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.31.51.96:30012/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.31.51.96:30012/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.31.51.96:30012/\n+ 
echo\n+ curl -q -s --connect-timeout 2 http://172.31.51.96:30012/\n" +Sep 7 08:52:15.159: INFO: stdout: "\naffinity-nodeport-timeout-z865x\naffinity-nodeport-timeout-z865x\naffinity-nodeport-timeout-z865x\naffinity-nodeport-timeout-z865x\naffinity-nodeport-timeout-z865x\naffinity-nodeport-timeout-z865x\naffinity-nodeport-timeout-z865x\naffinity-nodeport-timeout-z865x\naffinity-nodeport-timeout-z865x\naffinity-nodeport-timeout-z865x\naffinity-nodeport-timeout-z865x\naffinity-nodeport-timeout-z865x\naffinity-nodeport-timeout-z865x\naffinity-nodeport-timeout-z865x\naffinity-nodeport-timeout-z865x\naffinity-nodeport-timeout-z865x" +Sep 7 08:52:15.159: INFO: Received response from host: affinity-nodeport-timeout-z865x +Sep 7 08:52:15.159: INFO: Received response from host: affinity-nodeport-timeout-z865x +Sep 7 08:52:15.159: INFO: Received response from host: affinity-nodeport-timeout-z865x +Sep 7 08:52:15.159: INFO: Received response from host: affinity-nodeport-timeout-z865x +Sep 7 08:52:15.159: INFO: Received response from host: affinity-nodeport-timeout-z865x +Sep 7 08:52:15.159: INFO: Received response from host: affinity-nodeport-timeout-z865x +Sep 7 08:52:15.159: INFO: Received response from host: affinity-nodeport-timeout-z865x +Sep 7 08:52:15.159: INFO: Received response from host: affinity-nodeport-timeout-z865x +Sep 7 08:52:15.159: INFO: Received response from host: affinity-nodeport-timeout-z865x +Sep 7 08:52:15.159: INFO: Received response from host: affinity-nodeport-timeout-z865x +Sep 7 08:52:15.159: INFO: Received response from host: affinity-nodeport-timeout-z865x +Sep 7 08:52:15.159: INFO: Received response from host: affinity-nodeport-timeout-z865x +Sep 7 08:52:15.159: INFO: Received response from host: affinity-nodeport-timeout-z865x +Sep 7 08:52:15.159: INFO: Received response from host: affinity-nodeport-timeout-z865x +Sep 7 08:52:15.159: INFO: Received response from host: affinity-nodeport-timeout-z865x +Sep 7 08:52:15.159: INFO: Received response 
from host: affinity-nodeport-timeout-z865x +Sep 7 08:52:15.159: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1156948534 --namespace=services-2132 exec execpod-affinityvwj8w -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://172.31.51.96:30012/' +Sep 7 08:52:15.364: INFO: stderr: "+ curl -q -s --connect-timeout 2 http://172.31.51.96:30012/\n" +Sep 7 08:52:15.364: INFO: stdout: "affinity-nodeport-timeout-z865x" +Sep 7 08:54:25.364: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1156948534 --namespace=services-2132 exec execpod-affinityvwj8w -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://172.31.51.96:30012/' +Sep 7 08:54:25.633: INFO: stderr: "+ curl -q -s --connect-timeout 2 http://172.31.51.96:30012/\n" +Sep 7 08:54:25.633: INFO: stdout: "affinity-nodeport-timeout-k69qs" +Sep 7 08:54:25.633: INFO: Cleaning up the exec pod +STEP: deleting ReplicationController affinity-nodeport-timeout in namespace services-2132, will wait for the garbage collector to delete the pods +Sep 7 08:54:25.793: INFO: Deleting ReplicationController affinity-nodeport-timeout took: 12.098644ms +Sep 7 08:54:25.994: INFO: Terminating ReplicationController affinity-nodeport-timeout pods took: 200.945477ms +[AfterEach] [sig-network] Services + test/e2e/framework/framework.go:188 +Sep 7 08:54:29.565: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "services-2132" for this suite. 
+[AfterEach] [sig-network] Services + test/e2e/network/service.go:762 + +• [SLOW TEST:147.045 seconds] +[sig-network] Services +test/e2e/network/common/framework.go:23 + should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance] + test/e2e/framework/framework.go:652 +------------------------------ +{"msg":"PASSED [sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","total":356,"completed":286,"skipped":5542,"failed":0} +SSSSSSSSSSSSSSSS +------------------------------ +[sig-apps] Job + should delete a job [Conformance] + test/e2e/framework/framework.go:652 +[BeforeEach] [sig-apps] Job + test/e2e/framework/framework.go:187 +STEP: Creating a kubernetes client +Sep 7 08:54:29.581: INFO: >>> kubeConfig: /tmp/kubeconfig-1156948534 +STEP: Building a namespace api object, basename job +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[It] should delete a job [Conformance] + test/e2e/framework/framework.go:652 +STEP: Creating a job +STEP: Ensuring active pods == parallelism +STEP: delete a job +STEP: deleting Job.batch foo in namespace job-1490, will wait for the garbage collector to delete the pods +Sep 7 08:54:31.717: INFO: Deleting Job.batch foo took: 6.069891ms +Sep 7 08:54:31.818: INFO: Terminating Job.batch foo pods took: 100.569759ms +STEP: Ensuring job was deleted +[AfterEach] [sig-apps] Job + test/e2e/framework/framework.go:188 +Sep 7 08:55:04.624: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "job-1490" for this suite. 
+ +• [SLOW TEST:35.054 seconds] +[sig-apps] Job +test/e2e/apps/framework.go:23 + should delete a job [Conformance] + test/e2e/framework/framework.go:652 +------------------------------ +{"msg":"PASSED [sig-apps] Job should delete a job [Conformance]","total":356,"completed":287,"skipped":5558,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-api-machinery] ResourceQuota + should create a ResourceQuota and capture the life of a replica set. [Conformance] + test/e2e/framework/framework.go:652 +[BeforeEach] [sig-api-machinery] ResourceQuota + test/e2e/framework/framework.go:187 +STEP: Creating a kubernetes client +Sep 7 08:55:04.635: INFO: >>> kubeConfig: /tmp/kubeconfig-1156948534 +STEP: Building a namespace api object, basename resourcequota +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[It] should create a ResourceQuota and capture the life of a replica set. [Conformance] + test/e2e/framework/framework.go:652 +STEP: Counting existing ResourceQuota +STEP: Creating a ResourceQuota +STEP: Ensuring resource quota status is calculated +STEP: Creating a ReplicaSet +STEP: Ensuring resource quota status captures replicaset creation +STEP: Deleting a ReplicaSet +STEP: Ensuring resource quota status released usage +[AfterEach] [sig-api-machinery] ResourceQuota + test/e2e/framework/framework.go:188 +Sep 7 08:55:15.790: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "resourcequota-249" for this suite. + +• [SLOW TEST:11.164 seconds] +[sig-api-machinery] ResourceQuota +test/e2e/apimachinery/framework.go:23 + should create a ResourceQuota and capture the life of a replica set. [Conformance] + test/e2e/framework/framework.go:652 +------------------------------ +{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replica set. 
[Conformance]","total":356,"completed":288,"skipped":5591,"failed":0} +SS +------------------------------ +[sig-network] Networking Granular Checks: Pods + should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/framework/framework.go:652 +[BeforeEach] [sig-network] Networking + test/e2e/framework/framework.go:187 +STEP: Creating a kubernetes client +Sep 7 08:55:15.799: INFO: >>> kubeConfig: /tmp/kubeconfig-1156948534 +STEP: Building a namespace api object, basename pod-network-test +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[It] should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/framework/framework.go:652 +STEP: Performing setup for networking test in namespace pod-network-test-9811 +STEP: creating a selector +STEP: Creating the service pods in kubernetes +Sep 7 08:55:15.835: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable +Sep 7 08:55:15.866: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) +Sep 7 08:55:17.877: INFO: The status of Pod netserver-0 is Running (Ready = false) +Sep 7 08:55:19.872: INFO: The status of Pod netserver-0 is Running (Ready = false) +Sep 7 08:55:21.873: INFO: The status of Pod netserver-0 is Running (Ready = false) +Sep 7 08:55:23.883: INFO: The status of Pod netserver-0 is Running (Ready = false) +Sep 7 08:55:25.876: INFO: The status of Pod netserver-0 is Running (Ready = false) +Sep 7 08:55:27.874: INFO: The status of Pod netserver-0 is Running (Ready = true) +Sep 7 08:55:27.878: INFO: The status of Pod netserver-1 is Running (Ready = true) +STEP: Creating test pods +Sep 7 08:55:29.936: INFO: Setting MaxTries for pod polling to 34 for networking test based on endpoint count 2 +Sep 7 08:55:29.936: INFO: Going to poll 172.20.75.55 on port 8083 at least 0 times, with a maximum 
of 34 tries before failing +Sep 7 08:55:29.937: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://172.20.75.55:8083/hostName | grep -v '^\s*$'] Namespace:pod-network-test-9811 PodName:host-test-container-pod ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} +Sep 7 08:55:29.937: INFO: >>> kubeConfig: /tmp/kubeconfig-1156948534 +Sep 7 08:55:29.938: INFO: ExecWithOptions: Clientset creation +Sep 7 08:55:29.938: INFO: ExecWithOptions: execute(POST https://10.68.0.1:443/api/v1/namespaces/pod-network-test-9811/pods/host-test-container-pod/exec?command=%2Fbin%2Fsh&command=-c&command=curl+-g+-q+-s+--max-time+15+--connect-timeout+1+http%3A%2F%2F172.20.75.55%3A8083%2FhostName+%7C+grep+-v+%27%5E%5Cs%2A%24%27&container=agnhost-container&container=agnhost-container&stderr=true&stdout=true) +Sep 7 08:55:30.041: INFO: Found all 1 expected endpoints: [netserver-0] +Sep 7 08:55:30.041: INFO: Going to poll 172.20.97.90 on port 8083 at least 0 times, with a maximum of 34 tries before failing +Sep 7 08:55:30.045: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://172.20.97.90:8083/hostName | grep -v '^\s*$'] Namespace:pod-network-test-9811 PodName:host-test-container-pod ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} +Sep 7 08:55:30.045: INFO: >>> kubeConfig: /tmp/kubeconfig-1156948534 +Sep 7 08:55:30.047: INFO: ExecWithOptions: Clientset creation +Sep 7 08:55:30.047: INFO: ExecWithOptions: execute(POST https://10.68.0.1:443/api/v1/namespaces/pod-network-test-9811/pods/host-test-container-pod/exec?command=%2Fbin%2Fsh&command=-c&command=curl+-g+-q+-s+--max-time+15+--connect-timeout+1+http%3A%2F%2F172.20.97.90%3A8083%2FhostName+%7C+grep+-v+%27%5E%5Cs%2A%24%27&container=agnhost-container&container=agnhost-container&stderr=true&stdout=true) +Sep 7 08:55:30.128: 
INFO: Found all 1 expected endpoints: [netserver-1] +[AfterEach] [sig-network] Networking + test/e2e/framework/framework.go:188 +Sep 7 08:55:30.128: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "pod-network-test-9811" for this suite. + +• [SLOW TEST:14.340 seconds] +[sig-network] Networking +test/e2e/common/network/framework.go:23 + Granular Checks: Pods + test/e2e/common/network/networking.go:32 + should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/framework/framework.go:652 +------------------------------ +{"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","total":356,"completed":289,"skipped":5593,"failed":0} +SSSSSSSSSSSSSSSSS +------------------------------ +[sig-node] Pods Extended Pods Set QOS Class + should be set on Pods with matching resource requests and limits for memory and cpu [Conformance] + test/e2e/framework/framework.go:652 +[BeforeEach] [sig-node] Pods Extended + test/e2e/framework/framework.go:187 +STEP: Creating a kubernetes client +Sep 7 08:55:30.140: INFO: >>> kubeConfig: /tmp/kubeconfig-1156948534 +STEP: Building a namespace api object, basename pods +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[BeforeEach] Pods Set QOS Class + test/e2e/node/pods.go:152 +[It] should be set on Pods with matching resource requests and limits for memory and cpu [Conformance] + test/e2e/framework/framework.go:652 +STEP: creating the pod +STEP: submitting the pod to kubernetes +STEP: verifying QOS class is set on the pod +[AfterEach] [sig-node] Pods Extended + test/e2e/framework/framework.go:188 +Sep 7 08:55:30.209: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "pods-1711" for this suite. 
+•{"msg":"PASSED [sig-node] Pods Extended Pods Set QOS Class should be set on Pods with matching resource requests and limits for memory and cpu [Conformance]","total":356,"completed":290,"skipped":5610,"failed":0} +SSS +------------------------------ +[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] + should be able to convert a non homogeneous list of CRs [Conformance] + test/e2e/framework/framework.go:652 +[BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] + test/e2e/framework/framework.go:187 +STEP: Creating a kubernetes client +Sep 7 08:55:30.231: INFO: >>> kubeConfig: /tmp/kubeconfig-1156948534 +STEP: Building a namespace api object, basename crd-webhook +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] + test/e2e/apimachinery/crd_conversion_webhook.go:128 +STEP: Setting up server cert +STEP: Create role binding to let cr conversion webhook read extension-apiserver-authentication +STEP: Deploying the custom resource conversion webhook pod +STEP: Wait for the deployment to be ready +Sep 7 08:55:30.819: INFO: deployment "sample-crd-conversion-webhook-deployment" doesn't have the required revision set +STEP: Deploying the webhook service +STEP: Verifying the service has paired with the endpoint +Sep 7 08:55:33.851: INFO: Waiting for amount of service:e2e-test-crd-conversion-webhook endpoints to be 1 +[It] should be able to convert a non homogeneous list of CRs [Conformance] + test/e2e/framework/framework.go:652 +Sep 7 08:55:33.860: INFO: >>> kubeConfig: /tmp/kubeconfig-1156948534 +STEP: Creating a v1 custom resource +STEP: Create a v2 custom resource +STEP: List CRs in v1 +STEP: List CRs in v2 +[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] + 
test/e2e/framework/framework.go:188 +Sep 7 08:55:37.536: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "crd-webhook-7038" for this suite. +[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] + test/e2e/apimachinery/crd_conversion_webhook.go:139 + +• [SLOW TEST:7.432 seconds] +[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] +test/e2e/apimachinery/framework.go:23 + should be able to convert a non homogeneous list of CRs [Conformance] + test/e2e/framework/framework.go:652 +------------------------------ +{"msg":"PASSED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","total":356,"completed":291,"skipped":5613,"failed":0} +SSS +------------------------------ +[sig-node] PodTemplates + should delete a collection of pod templates [Conformance] + test/e2e/framework/framework.go:652 +[BeforeEach] [sig-node] PodTemplates + test/e2e/framework/framework.go:187 +STEP: Creating a kubernetes client +Sep 7 08:55:37.662: INFO: >>> kubeConfig: /tmp/kubeconfig-1156948534 +STEP: Building a namespace api object, basename podtemplate +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[It] should delete a collection of pod templates [Conformance] + test/e2e/framework/framework.go:652 +STEP: Create set of pod templates +Sep 7 08:55:37.779: INFO: created test-podtemplate-1 +Sep 7 08:55:37.782: INFO: created test-podtemplate-2 +Sep 7 08:55:37.791: INFO: created test-podtemplate-3 +STEP: get a list of pod templates with a label in the current namespace +STEP: delete collection of pod templates +Sep 7 08:55:37.794: INFO: requesting DeleteCollection of pod templates +STEP: check that the list of pod templates matches the requested quantity +Sep 7 08:55:37.810: INFO: requesting list of pod 
templates to confirm quantity +[AfterEach] [sig-node] PodTemplates + test/e2e/framework/framework.go:188 +Sep 7 08:55:37.812: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "podtemplate-1505" for this suite. +•{"msg":"PASSED [sig-node] PodTemplates should delete a collection of pod templates [Conformance]","total":356,"completed":292,"skipped":5616,"failed":0} +SSSSSSSSSS +------------------------------ +[sig-cli] Kubectl client Kubectl describe + should check if kubectl describe prints relevant information for rc and pods [Conformance] + test/e2e/framework/framework.go:652 +[BeforeEach] [sig-cli] Kubectl client + test/e2e/framework/framework.go:187 +STEP: Creating a kubernetes client +Sep 7 08:55:37.819: INFO: >>> kubeConfig: /tmp/kubeconfig-1156948534 +STEP: Building a namespace api object, basename kubectl +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[BeforeEach] [sig-cli] Kubectl client + test/e2e/kubectl/kubectl.go:245 +[It] should check if kubectl describe prints relevant information for rc and pods [Conformance] + test/e2e/framework/framework.go:652 +Sep 7 08:55:37.852: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1156948534 --namespace=kubectl-5137 create -f -' +Sep 7 08:55:40.195: INFO: stderr: "" +Sep 7 08:55:40.195: INFO: stdout: "replicationcontroller/agnhost-primary created\n" +Sep 7 08:55:40.195: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1156948534 --namespace=kubectl-5137 create -f -' +Sep 7 08:55:42.306: INFO: stderr: "" +Sep 7 08:55:42.306: INFO: stdout: "service/agnhost-primary created\n" +STEP: Waiting for Agnhost primary to start. +Sep 7 08:55:43.315: INFO: Selector matched 1 pods for map[app:agnhost] +Sep 7 08:55:43.315: INFO: Found 1 / 1 +Sep 7 08:55:43.315: INFO: WaitFor completed with timeout 5m0s. 
Pods found = 1 out of 1 +Sep 7 08:55:43.318: INFO: Selector matched 1 pods for map[app:agnhost] +Sep 7 08:55:43.318: INFO: ForEach: Found 1 pods from the filter. Now looping through them. +Sep 7 08:55:43.318: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1156948534 --namespace=kubectl-5137 describe pod agnhost-primary-9mjpd' +Sep 7 08:55:43.428: INFO: stderr: "" +Sep 7 08:55:43.428: INFO: stdout: "Name: agnhost-primary-9mjpd\nNamespace: kubectl-5137\nPriority: 0\nNode: 172.31.51.96/172.31.51.96\nStart Time: Wed, 07 Sep 2022 08:55:40 +0000\nLabels: app=agnhost\n role=primary\nAnnotations: \nStatus: Running\nIP: 172.20.75.58\nIPs:\n IP: 172.20.75.58\nControlled By: ReplicationController/agnhost-primary\nContainers:\n agnhost-primary:\n Container ID: containerd://84cfde7d158fadf8514c27d37b87120c071d17b79b2565f2fa7221f57de065c9\n Image: k8s.gcr.io/e2e-test-images/agnhost:2.39\n Image ID: k8s.gcr.io/e2e-test-images/agnhost@sha256:7e8bdd271312fd25fc5ff5a8f04727be84044eb3d7d8d03611972a6752e2e11e\n Port: 6379/TCP\n Host Port: 0/TCP\n State: Running\n Started: Wed, 07 Sep 2022 08:55:42 +0000\n Ready: True\n Restart Count: 0\n Environment: \n Mounts:\n /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-vn9px (ro)\nConditions:\n Type Status\n Initialized True \n Ready True \n ContainersReady True \n PodScheduled True \nVolumes:\n kube-api-access-vn9px:\n Type: Projected (a volume that contains injected data from multiple sources)\n TokenExpirationSeconds: 3607\n ConfigMapName: kube-root-ca.crt\n ConfigMapOptional: \n DownwardAPI: true\nQoS Class: BestEffort\nNode-Selectors: \nTolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s\n node.kubernetes.io/unreachable:NoExecute op=Exists for 300s\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal Scheduled 3s default-scheduler Successfully assigned kubectl-5137/agnhost-primary-9mjpd to 172.31.51.96\n Normal Pulled 1s kubelet Container image 
\"k8s.gcr.io/e2e-test-images/agnhost:2.39\" already present on machine\n Normal Created 1s kubelet Created container agnhost-primary\n Normal Started 1s kubelet Started container agnhost-primary\n" +Sep 7 08:55:43.428: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1156948534 --namespace=kubectl-5137 describe rc agnhost-primary' +Sep 7 08:55:43.563: INFO: stderr: "" +Sep 7 08:55:43.563: INFO: stdout: "Name: agnhost-primary\nNamespace: kubectl-5137\nSelector: app=agnhost,role=primary\nLabels: app=agnhost\n role=primary\nAnnotations: \nReplicas: 1 current / 1 desired\nPods Status: 1 Running / 0 Waiting / 0 Succeeded / 0 Failed\nPod Template:\n Labels: app=agnhost\n role=primary\n Containers:\n agnhost-primary:\n Image: k8s.gcr.io/e2e-test-images/agnhost:2.39\n Port: 6379/TCP\n Host Port: 0/TCP\n Environment: \n Mounts: \n Volumes: \nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal SuccessfulCreate 3s replication-controller Created pod: agnhost-primary-9mjpd\n" +Sep 7 08:55:43.563: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1156948534 --namespace=kubectl-5137 describe service agnhost-primary' +Sep 7 08:55:43.682: INFO: stderr: "" +Sep 7 08:55:43.682: INFO: stdout: "Name: agnhost-primary\nNamespace: kubectl-5137\nLabels: app=agnhost\n role=primary\nAnnotations: \nSelector: app=agnhost,role=primary\nType: ClusterIP\nIP Family Policy: SingleStack\nIP Families: IPv4\nIP: 10.68.255.12\nIPs: 10.68.255.12\nPort: 6379/TCP\nTargetPort: agnhost-server/TCP\nEndpoints: 172.20.75.58:6379\nSession Affinity: None\nEvents: \n" +Sep 7 08:55:43.686: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1156948534 --namespace=kubectl-5137 describe node 172.31.51.96' +Sep 7 08:55:43.881: INFO: stderr: "" +Sep 7 08:55:43.881: INFO: stdout: "Name: 172.31.51.96\nRoles: master\nLabels: beta.kubernetes.io/arch=amd64\n beta.kubernetes.io/os=linux\n kubernetes.io/arch=amd64\n 
kubernetes.io/hostname=172.31.51.96\n kubernetes.io/os=linux\n kubernetes.io/role=master\nAnnotations: node.alpha.kubernetes.io/ttl: 0\n volumes.kubernetes.io/controller-managed-attach-detach: true\nCreationTimestamp: Wed, 07 Sep 2022 07:26:18 +0000\nTaints: \nUnschedulable: false\nLease:\n HolderIdentity: 172.31.51.96\n AcquireTime: \n RenewTime: Wed, 07 Sep 2022 08:55:34 +0000\nConditions:\n Type Status LastHeartbeatTime LastTransitionTime Reason Message\n ---- ------ ----------------- ------------------ ------ -------\n NetworkUnavailable False Wed, 07 Sep 2022 07:27:26 +0000 Wed, 07 Sep 2022 07:27:26 +0000 CalicoIsUp Calico is running on this node\n MemoryPressure False Wed, 07 Sep 2022 08:55:01 +0000 Wed, 07 Sep 2022 07:26:18 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available\n DiskPressure False Wed, 07 Sep 2022 08:55:01 +0000 Wed, 07 Sep 2022 07:26:18 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure\n PIDPressure False Wed, 07 Sep 2022 08:55:01 +0000 Wed, 07 Sep 2022 07:26:18 +0000 KubeletHasSufficientPID kubelet has sufficient PID available\n Ready True Wed, 07 Sep 2022 08:55:01 +0000 Wed, 07 Sep 2022 07:27:29 +0000 KubeletReady kubelet is posting ready status\nAddresses:\n InternalIP: 172.31.51.96\n Hostname: 172.31.51.96\nCapacity:\n cpu: 2\n ephemeral-storage: 41152812Ki\n example.com/fakecpu: 1k\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 8008928Ki\n pods: 110\nAllocatable:\n cpu: 2\n ephemeral-storage: 37926431477\n example.com/fakecpu: 1k\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 7701728Ki\n pods: 110\nSystem Info:\n Machine ID: 20220824141109585711773848340569\n System UUID: 35742018-2CB5-40C1-95BA-91D85F6BEAFE\n Boot ID: 980ed38d-e810-430b-96dc-598f68f0f837\n Kernel Version: 3.10.0-1160.76.1.el7.x86_64\n OS Image: CentOS Linux 7 (Core)\n Operating System: linux\n Architecture: amd64\n Container Runtime Version: containerd://1.6.4\n Kubelet Version: v1.24.2\n Kube-Proxy Version: v1.24.2\nPodCIDR: 
172.20.0.0/24\nPodCIDRs: 172.20.0.0/24\nNon-terminated Pods: (7 in total)\n Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits Age\n --------- ---- ------------ ---------- --------------- ------------- ---\n kube-system calico-node-g8tpr 250m (12%) 0 (0%) 0 (0%) 0 (0%) 88m\n kube-system node-local-dns-8rwpt 25m (1%) 0 (0%) 5Mi (0%) 0 (0%) 88m\n kubectl-5137 agnhost-primary-9mjpd 0 (0%) 0 (0%) 0 (0%) 0 (0%) 3s\n pods-1711 pod-qos-class-0dce1614-f710-401d-8bd7-24c179d190dd 100m (5%) 100m (5%) 100Mi (1%) 100Mi (1%) 13s\n sonobuoy sonobuoy 0 (0%) 0 (0%) 0 (0%) 0 (0%) 76m\n sonobuoy sonobuoy-e2e-job-2f855b96e04a42ee 0 (0%) 0 (0%) 0 (0%) 0 (0%) 76m\n sonobuoy sonobuoy-systemd-logs-daemon-set-1241b5e1ea9447a9-kstch 0 (0%) 0 (0%) 0 (0%) 0 (0%) 76m\nAllocated resources:\n (Total limits may be over 100 percent, i.e., overcommitted.)\n Resource Requests Limits\n -------- -------- ------\n cpu 375m (18%) 100m (5%)\n memory 105Mi (1%) 100Mi (1%)\n ephemeral-storage 0 (0%) 0 (0%)\n hugepages-1Gi 0 (0%) 0 (0%)\n hugepages-2Mi 0 (0%) 0 (0%)\n example.com/fakecpu 0 0\nEvents: \n" +Sep 7 08:55:43.881: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1156948534 --namespace=kubectl-5137 describe namespace kubectl-5137' +Sep 7 08:55:44.000: INFO: stderr: "" +Sep 7 08:55:44.000: INFO: stdout: "Name: kubectl-5137\nLabels: e2e-framework=kubectl\n e2e-run=7c49f96c-a108-4029-8b40-c680af004a9e\n kubernetes.io/metadata.name=kubectl-5137\n pod-security.kubernetes.io/enforce=baseline\nAnnotations: \nStatus: Active\n\nNo resource quota.\n\nNo LimitRange resource.\n" +[AfterEach] [sig-cli] Kubectl client + test/e2e/framework/framework.go:188 +Sep 7 08:55:44.000: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "kubectl-5137" for this suite. 
+ +• [SLOW TEST:6.194 seconds] +[sig-cli] Kubectl client +test/e2e/kubectl/framework.go:23 + Kubectl describe + test/e2e/kubectl/kubectl.go:1110 + should check if kubectl describe prints relevant information for rc and pods [Conformance] + test/e2e/framework/framework.go:652 +------------------------------ +{"msg":"PASSED [sig-cli] Kubectl client Kubectl describe should check if kubectl describe prints relevant information for rc and pods [Conformance]","total":356,"completed":293,"skipped":5626,"failed":0} +SSSSSSSSSSSSSSS +------------------------------ +[sig-node] InitContainer [NodeConformance] + should invoke init containers on a RestartNever pod [Conformance] + test/e2e/framework/framework.go:652 +[BeforeEach] [sig-node] InitContainer [NodeConformance] + test/e2e/framework/framework.go:187 +STEP: Creating a kubernetes client +Sep 7 08:55:44.014: INFO: >>> kubeConfig: /tmp/kubeconfig-1156948534 +STEP: Building a namespace api object, basename init-container +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[BeforeEach] [sig-node] InitContainer [NodeConformance] + test/e2e/common/node/init_container.go:164 +[It] should invoke init containers on a RestartNever pod [Conformance] + test/e2e/framework/framework.go:652 +STEP: creating the pod +Sep 7 08:55:44.052: INFO: PodSpec: initContainers in spec.initContainers +[AfterEach] [sig-node] InitContainer [NodeConformance] + test/e2e/framework/framework.go:188 +Sep 7 08:55:49.779: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "init-container-820" for this suite. 
+ +• [SLOW TEST:5.825 seconds] +[sig-node] InitContainer [NodeConformance] +test/e2e/common/node/framework.go:23 + should invoke init containers on a RestartNever pod [Conformance] + test/e2e/framework/framework.go:652 +------------------------------ +{"msg":"PASSED [sig-node] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance]","total":356,"completed":294,"skipped":5641,"failed":0} +S +------------------------------ +[sig-apps] ReplicaSet + should list and delete a collection of ReplicaSets [Conformance] + test/e2e/framework/framework.go:652 +[BeforeEach] [sig-apps] ReplicaSet + test/e2e/framework/framework.go:187 +STEP: Creating a kubernetes client +Sep 7 08:55:49.838: INFO: >>> kubeConfig: /tmp/kubeconfig-1156948534 +STEP: Building a namespace api object, basename replicaset +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[It] should list and delete a collection of ReplicaSets [Conformance] + test/e2e/framework/framework.go:652 +STEP: Create a ReplicaSet +STEP: Verify that the required pods have come up +Sep 7 08:55:49.934: INFO: Pod name sample-pod: Found 0 pods out of 3 +Sep 7 08:55:54.944: INFO: Pod name sample-pod: Found 3 pods out of 3 +STEP: ensuring each pod is running +Sep 7 08:55:54.953: INFO: Replica Status: {Replicas:3 FullyLabeledReplicas:3 ReadyReplicas:3 AvailableReplicas:3 ObservedGeneration:1 Conditions:[]} +STEP: Listing all ReplicaSets +STEP: DeleteCollection of the ReplicaSets +STEP: After DeleteCollection verify that ReplicaSets have been deleted +[AfterEach] [sig-apps] ReplicaSet + test/e2e/framework/framework.go:188 +Sep 7 08:55:54.976: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "replicaset-3682" for this suite. 
+ +• [SLOW TEST:5.157 seconds] +[sig-apps] ReplicaSet +test/e2e/apps/framework.go:23 + should list and delete a collection of ReplicaSets [Conformance] + test/e2e/framework/framework.go:652 +------------------------------ +{"msg":"PASSED [sig-apps] ReplicaSet should list and delete a collection of ReplicaSets [Conformance]","total":356,"completed":295,"skipped":5642,"failed":0} +SSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-network] DNS + should provide DNS for pods for Hostname [Conformance] + test/e2e/framework/framework.go:652 +[BeforeEach] [sig-network] DNS + test/e2e/framework/framework.go:187 +STEP: Creating a kubernetes client +Sep 7 08:55:54.995: INFO: >>> kubeConfig: /tmp/kubeconfig-1156948534 +STEP: Building a namespace api object, basename dns +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[It] should provide DNS for pods for Hostname [Conformance] + test/e2e/framework/framework.go:652 +STEP: Creating a test headless service +STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-2.dns-test-service-2.dns-3268.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-2.dns-test-service-2.dns-3268.svc.cluster.local;test -n "$$(getent hosts dns-querier-2)" && echo OK > /results/wheezy_hosts@dns-querier-2;sleep 1; done + +STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-2.dns-test-service-2.dns-3268.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-2.dns-test-service-2.dns-3268.svc.cluster.local;test -n "$$(getent hosts dns-querier-2)" && echo OK > /results/jessie_hosts@dns-querier-2;sleep 1; done + +STEP: creating a pod to probe DNS +STEP: submitting the pod to kubernetes +STEP: retrieving the pod +STEP: looking for the results for each expected name from probers +Sep 7 08:55:59.249: INFO: DNS probes using 
dns-3268/dns-test-2de83cc3-55d2-4771-ae56-90c6564e2742 succeeded + +STEP: deleting the pod +STEP: deleting the test headless service +[AfterEach] [sig-network] DNS + test/e2e/framework/framework.go:188 +Sep 7 08:55:59.274: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "dns-3268" for this suite. +•{"msg":"PASSED [sig-network] DNS should provide DNS for pods for Hostname [Conformance]","total":356,"completed":296,"skipped":5663,"failed":0} +SSSSSSSSSS +------------------------------ +[sig-apps] CronJob + should replace jobs when ReplaceConcurrent [Conformance] + test/e2e/framework/framework.go:652 +[BeforeEach] [sig-apps] CronJob + test/e2e/framework/framework.go:187 +STEP: Creating a kubernetes client +Sep 7 08:55:59.284: INFO: >>> kubeConfig: /tmp/kubeconfig-1156948534 +STEP: Building a namespace api object, basename cronjob +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[It] should replace jobs when ReplaceConcurrent [Conformance] + test/e2e/framework/framework.go:652 +STEP: Creating a ReplaceConcurrent cronjob +STEP: Ensuring a job is scheduled +STEP: Ensuring exactly one is scheduled +STEP: Ensuring exactly one running job exists by listing jobs explicitly +STEP: Ensuring the job is replaced with a new one +STEP: Removing cronjob +[AfterEach] [sig-apps] CronJob + test/e2e/framework/framework.go:188 +Sep 7 08:57:01.393: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "cronjob-1727" for this suite. 
+ +• [SLOW TEST:62.135 seconds] +[sig-apps] CronJob +test/e2e/apps/framework.go:23 + should replace jobs when ReplaceConcurrent [Conformance] + test/e2e/framework/framework.go:652 +------------------------------ +{"msg":"PASSED [sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","total":356,"completed":297,"skipped":5673,"failed":0} +SSSSSS +------------------------------ +[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + works for multiple CRDs of different groups [Conformance] + test/e2e/framework/framework.go:652 +[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + test/e2e/framework/framework.go:187 +STEP: Creating a kubernetes client +Sep 7 08:57:01.419: INFO: >>> kubeConfig: /tmp/kubeconfig-1156948534 +STEP: Building a namespace api object, basename crd-publish-openapi +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[It] works for multiple CRDs of different groups [Conformance] + test/e2e/framework/framework.go:652 +STEP: CRs in different groups (two CRDs) show up in OpenAPI documentation +Sep 7 08:57:01.519: INFO: >>> kubeConfig: /tmp/kubeconfig-1156948534 +Sep 7 08:57:05.151: INFO: >>> kubeConfig: /tmp/kubeconfig-1156948534 +[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + test/e2e/framework/framework.go:188 +Sep 7 08:57:19.283: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "crd-publish-openapi-1221" for this suite. 
+ +• [SLOW TEST:17.878 seconds] +[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] +test/e2e/apimachinery/framework.go:23 + works for multiple CRDs of different groups [Conformance] + test/e2e/framework/framework.go:652 +------------------------------ +{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance]","total":356,"completed":298,"skipped":5679,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-storage] Projected configMap + should be consumable from pods in volume [NodeConformance] [Conformance] + test/e2e/framework/framework.go:652 +[BeforeEach] [sig-storage] Projected configMap + test/e2e/framework/framework.go:187 +STEP: Creating a kubernetes client +Sep 7 08:57:19.298: INFO: >>> kubeConfig: /tmp/kubeconfig-1156948534 +STEP: Building a namespace api object, basename projected +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[It] should be consumable from pods in volume [NodeConformance] [Conformance] + test/e2e/framework/framework.go:652 +STEP: Creating configMap with name projected-configmap-test-volume-534d644e-b418-4630-8ed6-82bd53cfe2ac +STEP: Creating a pod to test consume configMaps +Sep 7 08:57:19.341: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-4ceba683-6521-462c-b65a-dbf6bae49005" in namespace "projected-1424" to be "Succeeded or Failed" +Sep 7 08:57:19.345: INFO: Pod "pod-projected-configmaps-4ceba683-6521-462c-b65a-dbf6bae49005": Phase="Pending", Reason="", readiness=false. Elapsed: 3.937011ms +Sep 7 08:57:21.356: INFO: Pod "pod-projected-configmaps-4ceba683-6521-462c-b65a-dbf6bae49005": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.014632538s +Sep 7 08:57:23.366: INFO: Pod "pod-projected-configmaps-4ceba683-6521-462c-b65a-dbf6bae49005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.024789852s +STEP: Saw pod success +Sep 7 08:57:23.366: INFO: Pod "pod-projected-configmaps-4ceba683-6521-462c-b65a-dbf6bae49005" satisfied condition "Succeeded or Failed" +Sep 7 08:57:23.369: INFO: Trying to get logs from node 172.31.51.97 pod pod-projected-configmaps-4ceba683-6521-462c-b65a-dbf6bae49005 container agnhost-container: +STEP: delete the pod +Sep 7 08:57:23.396: INFO: Waiting for pod pod-projected-configmaps-4ceba683-6521-462c-b65a-dbf6bae49005 to disappear +Sep 7 08:57:23.399: INFO: Pod pod-projected-configmaps-4ceba683-6521-462c-b65a-dbf6bae49005 no longer exists +[AfterEach] [sig-storage] Projected configMap + test/e2e/framework/framework.go:188 +Sep 7 08:57:23.399: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "projected-1424" for this suite. +•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":356,"completed":299,"skipped":5715,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-api-machinery] Garbage collector + should orphan pods created by rc if delete options say so [Conformance] + test/e2e/framework/framework.go:652 +[BeforeEach] [sig-api-machinery] Garbage collector + test/e2e/framework/framework.go:187 +STEP: Creating a kubernetes client +Sep 7 08:57:23.408: INFO: >>> kubeConfig: /tmp/kubeconfig-1156948534 +STEP: Building a namespace api object, basename gc +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[It] should orphan pods created by rc if delete options say so [Conformance] + test/e2e/framework/framework.go:652 +STEP: create the rc +STEP: delete the rc +STEP: wait for the rc to be deleted +STEP: wait for 30 
seconds to see if the garbage collector mistakenly deletes the pods +STEP: Gathering metrics +Sep 7 08:58:03.825: INFO: For apiserver_request_total: +For apiserver_request_latency_seconds: +For apiserver_init_events_total: +For garbage_collector_attempt_to_delete_queue_latency: +For garbage_collector_attempt_to_delete_work_duration: +For garbage_collector_attempt_to_orphan_queue_latency: +For garbage_collector_attempt_to_orphan_work_duration: +For garbage_collector_dirty_processing_latency_microseconds: +For garbage_collector_event_processing_latency_microseconds: +For garbage_collector_graph_changes_queue_latency: +For garbage_collector_graph_changes_work_duration: +For garbage_collector_orphan_processing_latency_microseconds: +For namespace_queue_latency: +For namespace_queue_latency_sum: +For namespace_queue_latency_count: +For namespace_retries: +For namespace_work_duration: +For namespace_work_duration_sum: +For namespace_work_duration_count: +For function_duration_seconds: +For errors_total: +For evicted_pods_total: + +Sep 7 08:58:03.825: INFO: Deleting pod "simpletest.rc-2hrqh" in namespace "gc-7401" +W0907 08:58:03.825128 19 metrics_grabber.go:151] Can't find kube-controller-manager pod. Grabbing metrics from kube-controller-manager is disabled. 
+Sep 7 08:58:03.843: INFO: Deleting pod "simpletest.rc-2llh7" in namespace "gc-7401" +Sep 7 08:58:03.887: INFO: Deleting pod "simpletest.rc-2n5ns" in namespace "gc-7401" +Sep 7 08:58:03.904: INFO: Deleting pod "simpletest.rc-2pc6j" in namespace "gc-7401" +Sep 7 08:58:03.928: INFO: Deleting pod "simpletest.rc-2shkm" in namespace "gc-7401" +Sep 7 08:58:03.971: INFO: Deleting pod "simpletest.rc-42mms" in namespace "gc-7401" +Sep 7 08:58:04.015: INFO: Deleting pod "simpletest.rc-4jmbd" in namespace "gc-7401" +Sep 7 08:58:04.068: INFO: Deleting pod "simpletest.rc-4n4dk" in namespace "gc-7401" +Sep 7 08:58:04.118: INFO: Deleting pod "simpletest.rc-4vsrz" in namespace "gc-7401" +Sep 7 08:58:04.151: INFO: Deleting pod "simpletest.rc-5978c" in namespace "gc-7401" +Sep 7 08:58:04.183: INFO: Deleting pod "simpletest.rc-5cxj6" in namespace "gc-7401" +Sep 7 08:58:04.231: INFO: Deleting pod "simpletest.rc-5f4gp" in namespace "gc-7401" +Sep 7 08:58:04.319: INFO: Deleting pod "simpletest.rc-5gkgv" in namespace "gc-7401" +Sep 7 08:58:04.386: INFO: Deleting pod "simpletest.rc-5jrjl" in namespace "gc-7401" +Sep 7 08:58:04.445: INFO: Deleting pod "simpletest.rc-5tmcl" in namespace "gc-7401" +Sep 7 08:58:04.464: INFO: Deleting pod "simpletest.rc-62msd" in namespace "gc-7401" +Sep 7 08:58:04.552: INFO: Deleting pod "simpletest.rc-6798g" in namespace "gc-7401" +Sep 7 08:58:04.584: INFO: Deleting pod "simpletest.rc-6kqvl" in namespace "gc-7401" +Sep 7 08:58:04.632: INFO: Deleting pod "simpletest.rc-6lm7w" in namespace "gc-7401" +Sep 7 08:58:04.808: INFO: Deleting pod "simpletest.rc-77xnb" in namespace "gc-7401" +Sep 7 08:58:04.929: INFO: Deleting pod "simpletest.rc-78lrc" in namespace "gc-7401" +Sep 7 08:58:04.996: INFO: Deleting pod "simpletest.rc-7f7zs" in namespace "gc-7401" +Sep 7 08:58:05.034: INFO: Deleting pod "simpletest.rc-7gts4" in namespace "gc-7401" +Sep 7 08:58:05.069: INFO: Deleting pod "simpletest.rc-7hvpt" in namespace "gc-7401" +Sep 7 08:58:05.097: INFO: Deleting pod 
"simpletest.rc-7t758" in namespace "gc-7401" +Sep 7 08:58:05.123: INFO: Deleting pod "simpletest.rc-7x2pr" in namespace "gc-7401" +Sep 7 08:58:05.176: INFO: Deleting pod "simpletest.rc-844wf" in namespace "gc-7401" +Sep 7 08:58:05.241: INFO: Deleting pod "simpletest.rc-84cdn" in namespace "gc-7401" +Sep 7 08:58:05.339: INFO: Deleting pod "simpletest.rc-8btpk" in namespace "gc-7401" +Sep 7 08:58:05.377: INFO: Deleting pod "simpletest.rc-8k2gp" in namespace "gc-7401" +Sep 7 08:58:05.393: INFO: Deleting pod "simpletest.rc-8l7zw" in namespace "gc-7401" +Sep 7 08:58:05.451: INFO: Deleting pod "simpletest.rc-9bgxb" in namespace "gc-7401" +Sep 7 08:58:05.566: INFO: Deleting pod "simpletest.rc-bh7wq" in namespace "gc-7401" +Sep 7 08:58:05.630: INFO: Deleting pod "simpletest.rc-bhfmj" in namespace "gc-7401" +Sep 7 08:58:05.689: INFO: Deleting pod "simpletest.rc-bl5vm" in namespace "gc-7401" +Sep 7 08:58:05.747: INFO: Deleting pod "simpletest.rc-bvqwg" in namespace "gc-7401" +Sep 7 08:58:05.828: INFO: Deleting pod "simpletest.rc-bzp7h" in namespace "gc-7401" +Sep 7 08:58:05.867: INFO: Deleting pod "simpletest.rc-c4h44" in namespace "gc-7401" +Sep 7 08:58:05.913: INFO: Deleting pod "simpletest.rc-ch8hr" in namespace "gc-7401" +Sep 7 08:58:05.928: INFO: Deleting pod "simpletest.rc-cjpgq" in namespace "gc-7401" +Sep 7 08:58:05.974: INFO: Deleting pod "simpletest.rc-cwskl" in namespace "gc-7401" +Sep 7 08:58:05.993: INFO: Deleting pod "simpletest.rc-dnszc" in namespace "gc-7401" +Sep 7 08:58:06.035: INFO: Deleting pod "simpletest.rc-dz9bz" in namespace "gc-7401" +Sep 7 08:58:06.059: INFO: Deleting pod "simpletest.rc-f5wkm" in namespace "gc-7401" +Sep 7 08:58:06.110: INFO: Deleting pod "simpletest.rc-flj8n" in namespace "gc-7401" +Sep 7 08:58:06.146: INFO: Deleting pod "simpletest.rc-gk98x" in namespace "gc-7401" +Sep 7 08:58:06.258: INFO: Deleting pod "simpletest.rc-h5kn6" in namespace "gc-7401" +Sep 7 08:58:06.309: INFO: Deleting pod "simpletest.rc-hxd2b" in namespace "gc-7401" 
+Sep 7 08:58:06.319: INFO: Deleting pod "simpletest.rc-j4fqh" in namespace "gc-7401" +Sep 7 08:58:06.347: INFO: Deleting pod "simpletest.rc-j55cs" in namespace "gc-7401" +Sep 7 08:58:06.364: INFO: Deleting pod "simpletest.rc-jhxhf" in namespace "gc-7401" +Sep 7 08:58:06.402: INFO: Deleting pod "simpletest.rc-jl8b6" in namespace "gc-7401" +Sep 7 08:58:06.433: INFO: Deleting pod "simpletest.rc-jl8mj" in namespace "gc-7401" +Sep 7 08:58:06.484: INFO: Deleting pod "simpletest.rc-k22ls" in namespace "gc-7401" +Sep 7 08:58:06.686: INFO: Deleting pod "simpletest.rc-k4mkh" in namespace "gc-7401" +Sep 7 08:58:06.879: INFO: Deleting pod "simpletest.rc-lfwgr" in namespace "gc-7401" +Sep 7 08:58:06.955: INFO: Deleting pod "simpletest.rc-lgqk8" in namespace "gc-7401" +Sep 7 08:58:07.166: INFO: Deleting pod "simpletest.rc-lswlb" in namespace "gc-7401" +Sep 7 08:58:07.202: INFO: Deleting pod "simpletest.rc-m279n" in namespace "gc-7401" +Sep 7 08:58:07.399: INFO: Deleting pod "simpletest.rc-msn7v" in namespace "gc-7401" +Sep 7 08:58:07.482: INFO: Deleting pod "simpletest.rc-n8bv8" in namespace "gc-7401" +Sep 7 08:58:07.545: INFO: Deleting pod "simpletest.rc-n8s8j" in namespace "gc-7401" +Sep 7 08:58:07.786: INFO: Deleting pod "simpletest.rc-nbfrd" in namespace "gc-7401" +Sep 7 08:58:07.910: INFO: Deleting pod "simpletest.rc-nhvqr" in namespace "gc-7401" +Sep 7 08:58:07.942: INFO: Deleting pod "simpletest.rc-nk99c" in namespace "gc-7401" +Sep 7 08:58:08.079: INFO: Deleting pod "simpletest.rc-nkldk" in namespace "gc-7401" +Sep 7 08:58:08.116: INFO: Deleting pod "simpletest.rc-pffk7" in namespace "gc-7401" +Sep 7 08:58:08.135: INFO: Deleting pod "simpletest.rc-pgtsf" in namespace "gc-7401" +Sep 7 08:58:08.154: INFO: Deleting pod "simpletest.rc-ppzmn" in namespace "gc-7401" +Sep 7 08:58:08.169: INFO: Deleting pod "simpletest.rc-pvrhf" in namespace "gc-7401" +Sep 7 08:58:08.191: INFO: Deleting pod "simpletest.rc-q2h7h" in namespace "gc-7401" +Sep 7 08:58:08.203: INFO: Deleting pod 
"simpletest.rc-q676n" in namespace "gc-7401" +Sep 7 08:58:08.217: INFO: Deleting pod "simpletest.rc-qh8sq" in namespace "gc-7401" +Sep 7 08:58:08.234: INFO: Deleting pod "simpletest.rc-qjhcr" in namespace "gc-7401" +Sep 7 08:58:08.255: INFO: Deleting pod "simpletest.rc-qjm8z" in namespace "gc-7401" +Sep 7 08:58:08.267: INFO: Deleting pod "simpletest.rc-qwbwg" in namespace "gc-7401" +Sep 7 08:58:08.280: INFO: Deleting pod "simpletest.rc-qwtqd" in namespace "gc-7401" +Sep 7 08:58:08.292: INFO: Deleting pod "simpletest.rc-r7c6c" in namespace "gc-7401" +Sep 7 08:58:08.304: INFO: Deleting pod "simpletest.rc-rflmp" in namespace "gc-7401" +Sep 7 08:58:08.313: INFO: Deleting pod "simpletest.rc-rhdwr" in namespace "gc-7401" +Sep 7 08:58:08.323: INFO: Deleting pod "simpletest.rc-rk2pl" in namespace "gc-7401" +Sep 7 08:58:08.335: INFO: Deleting pod "simpletest.rc-rn4kw" in namespace "gc-7401" +Sep 7 08:58:08.344: INFO: Deleting pod "simpletest.rc-s2gst" in namespace "gc-7401" +Sep 7 08:58:08.353: INFO: Deleting pod "simpletest.rc-s9wvj" in namespace "gc-7401" +Sep 7 08:58:08.368: INFO: Deleting pod "simpletest.rc-swz5d" in namespace "gc-7401" +Sep 7 08:58:08.379: INFO: Deleting pod "simpletest.rc-tchd2" in namespace "gc-7401" +Sep 7 08:58:08.389: INFO: Deleting pod "simpletest.rc-td8kv" in namespace "gc-7401" +Sep 7 08:58:08.396: INFO: Deleting pod "simpletest.rc-tk8cc" in namespace "gc-7401" +Sep 7 08:58:08.416: INFO: Deleting pod "simpletest.rc-tl2lf" in namespace "gc-7401" +Sep 7 08:58:08.424: INFO: Deleting pod "simpletest.rc-tm69j" in namespace "gc-7401" +Sep 7 08:58:08.443: INFO: Deleting pod "simpletest.rc-v92j6" in namespace "gc-7401" +Sep 7 08:58:08.460: INFO: Deleting pod "simpletest.rc-vj9tr" in namespace "gc-7401" +Sep 7 08:58:08.471: INFO: Deleting pod "simpletest.rc-vsvkb" in namespace "gc-7401" +Sep 7 08:58:08.478: INFO: Deleting pod "simpletest.rc-vtj6d" in namespace "gc-7401" +Sep 7 08:58:08.499: INFO: Deleting pod "simpletest.rc-vzbd6" in namespace "gc-7401" 
+Sep 7 08:58:08.513: INFO: Deleting pod "simpletest.rc-wb84n" in namespace "gc-7401" +Sep 7 08:58:08.533: INFO: Deleting pod "simpletest.rc-xdk8m" in namespace "gc-7401" +Sep 7 08:58:08.561: INFO: Deleting pod "simpletest.rc-xdtn4" in namespace "gc-7401" +Sep 7 08:58:08.580: INFO: Deleting pod "simpletest.rc-z7sbm" in namespace "gc-7401" +Sep 7 08:58:08.595: INFO: Deleting pod "simpletest.rc-zq8ts" in namespace "gc-7401" +[AfterEach] [sig-api-machinery] Garbage collector + test/e2e/framework/framework.go:188 +Sep 7 08:58:08.642: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "gc-7401" for this suite. + +• [SLOW TEST:45.248 seconds] +[sig-api-machinery] Garbage collector +test/e2e/apimachinery/framework.go:23 + should orphan pods created by rc if delete options say so [Conformance] + test/e2e/framework/framework.go:652 +------------------------------ +{"msg":"PASSED [sig-api-machinery] Garbage collector should orphan pods created by rc if delete options say so [Conformance]","total":356,"completed":300,"skipped":5754,"failed":0} +SSSSSSSS +------------------------------ +[sig-storage] Projected secret + should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] + test/e2e/framework/framework.go:652 +[BeforeEach] [sig-storage] Projected secret + test/e2e/framework/framework.go:187 +STEP: Creating a kubernetes client +Sep 7 08:58:08.656: INFO: >>> kubeConfig: /tmp/kubeconfig-1156948534 +STEP: Building a namespace api object, basename projected +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] + test/e2e/framework/framework.go:652 +STEP: Creating secret with name projected-secret-test-05c7e12f-5f86-43f2-ae3b-3c2416e223f6 +STEP: Creating a pod to test consume secrets +Sep 7 08:58:08.752: INFO: Waiting up to 5m0s for pod 
"pod-projected-secrets-ba87704d-1e20-4c4c-8df3-872a211d3f6a" in namespace "projected-5722" to be "Succeeded or Failed" +Sep 7 08:58:08.783: INFO: Pod "pod-projected-secrets-ba87704d-1e20-4c4c-8df3-872a211d3f6a": Phase="Pending", Reason="", readiness=false. Elapsed: 30.479073ms +Sep 7 08:58:10.806: INFO: Pod "pod-projected-secrets-ba87704d-1e20-4c4c-8df3-872a211d3f6a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.054147196s +Sep 7 08:58:12.835: INFO: Pod "pod-projected-secrets-ba87704d-1e20-4c4c-8df3-872a211d3f6a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.08226955s +Sep 7 08:58:14.875: INFO: Pod "pod-projected-secrets-ba87704d-1e20-4c4c-8df3-872a211d3f6a": Phase="Pending", Reason="", readiness=false. Elapsed: 6.122509276s +Sep 7 08:58:16.943: INFO: Pod "pod-projected-secrets-ba87704d-1e20-4c4c-8df3-872a211d3f6a": Phase="Pending", Reason="", readiness=false. Elapsed: 8.19112219s +Sep 7 08:58:18.990: INFO: Pod "pod-projected-secrets-ba87704d-1e20-4c4c-8df3-872a211d3f6a": Phase="Pending", Reason="", readiness=false. Elapsed: 10.238069618s +Sep 7 08:58:21.007: INFO: Pod "pod-projected-secrets-ba87704d-1e20-4c4c-8df3-872a211d3f6a": Phase="Pending", Reason="", readiness=false. Elapsed: 12.25486925s +Sep 7 08:58:23.031: INFO: Pod "pod-projected-secrets-ba87704d-1e20-4c4c-8df3-872a211d3f6a": Phase="Pending", Reason="", readiness=false. Elapsed: 14.278589449s +Sep 7 08:58:25.039: INFO: Pod "pod-projected-secrets-ba87704d-1e20-4c4c-8df3-872a211d3f6a": Phase="Pending", Reason="", readiness=false. Elapsed: 16.287016238s +Sep 7 08:58:27.105: INFO: Pod "pod-projected-secrets-ba87704d-1e20-4c4c-8df3-872a211d3f6a": Phase="Pending", Reason="", readiness=false. Elapsed: 18.352655873s +Sep 7 08:58:29.121: INFO: Pod "pod-projected-secrets-ba87704d-1e20-4c4c-8df3-872a211d3f6a": Phase="Pending", Reason="", readiness=false. 
Elapsed: 20.368953261s +Sep 7 08:58:31.126: INFO: Pod "pod-projected-secrets-ba87704d-1e20-4c4c-8df3-872a211d3f6a": Phase="Running", Reason="", readiness=true. Elapsed: 22.373659957s +Sep 7 08:58:33.167: INFO: Pod "pod-projected-secrets-ba87704d-1e20-4c4c-8df3-872a211d3f6a": Phase="Running", Reason="", readiness=false. Elapsed: 24.415024942s +Sep 7 08:58:35.176: INFO: Pod "pod-projected-secrets-ba87704d-1e20-4c4c-8df3-872a211d3f6a": Phase="Running", Reason="", readiness=false. Elapsed: 26.424017667s +Sep 7 08:58:37.188: INFO: Pod "pod-projected-secrets-ba87704d-1e20-4c4c-8df3-872a211d3f6a": Phase="Running", Reason="", readiness=false. Elapsed: 28.435812353s +Sep 7 08:58:39.208: INFO: Pod "pod-projected-secrets-ba87704d-1e20-4c4c-8df3-872a211d3f6a": Phase="Running", Reason="", readiness=false. Elapsed: 30.455870228s +Sep 7 08:58:41.216: INFO: Pod "pod-projected-secrets-ba87704d-1e20-4c4c-8df3-872a211d3f6a": Phase="Running", Reason="", readiness=false. Elapsed: 32.464042675s +Sep 7 08:58:43.255: INFO: Pod "pod-projected-secrets-ba87704d-1e20-4c4c-8df3-872a211d3f6a": Phase="Running", Reason="", readiness=false. Elapsed: 34.503000735s +Sep 7 08:58:45.267: INFO: Pod "pod-projected-secrets-ba87704d-1e20-4c4c-8df3-872a211d3f6a": Phase="Running", Reason="", readiness=false. Elapsed: 36.515045316s +Sep 7 08:58:47.281: INFO: Pod "pod-projected-secrets-ba87704d-1e20-4c4c-8df3-872a211d3f6a": Phase="Running", Reason="", readiness=false. Elapsed: 38.529094112s +Sep 7 08:58:49.297: INFO: Pod "pod-projected-secrets-ba87704d-1e20-4c4c-8df3-872a211d3f6a": Phase="Running", Reason="", readiness=false. Elapsed: 40.54505015s +Sep 7 08:58:51.360: INFO: Pod "pod-projected-secrets-ba87704d-1e20-4c4c-8df3-872a211d3f6a": Phase="Running", Reason="", readiness=false. Elapsed: 42.607991996s +Sep 7 08:58:53.373: INFO: Pod "pod-projected-secrets-ba87704d-1e20-4c4c-8df3-872a211d3f6a": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 44.620936571s +STEP: Saw pod success +Sep 7 08:58:53.373: INFO: Pod "pod-projected-secrets-ba87704d-1e20-4c4c-8df3-872a211d3f6a" satisfied condition "Succeeded or Failed" +Sep 7 08:58:53.377: INFO: Trying to get logs from node 172.31.51.96 pod pod-projected-secrets-ba87704d-1e20-4c4c-8df3-872a211d3f6a container secret-volume-test: +STEP: delete the pod +Sep 7 08:58:53.418: INFO: Waiting for pod pod-projected-secrets-ba87704d-1e20-4c4c-8df3-872a211d3f6a to disappear +Sep 7 08:58:53.421: INFO: Pod pod-projected-secrets-ba87704d-1e20-4c4c-8df3-872a211d3f6a no longer exists +[AfterEach] [sig-storage] Projected secret + test/e2e/framework/framework.go:188 +Sep 7 08:58:53.421: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "projected-5722" for this suite. + +• [SLOW TEST:44.785 seconds] +[sig-storage] Projected secret +test/e2e/common/storage/framework.go:23 + should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] + test/e2e/framework/framework.go:652 +------------------------------ +{"msg":"PASSED [sig-storage] Projected secret should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":356,"completed":301,"skipped":5762,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-storage] Projected secret + should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/framework/framework.go:652 +[BeforeEach] [sig-storage] Projected secret + test/e2e/framework/framework.go:187 +STEP: Creating a kubernetes client +Sep 7 08:58:53.441: INFO: >>> kubeConfig: /tmp/kubeconfig-1156948534 +STEP: Building a namespace api object, basename projected +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] 
[Conformance] + test/e2e/framework/framework.go:652 +STEP: Creating projection with secret that has name projected-secret-test-682290a7-799c-4d42-ba42-034cb48367bd +STEP: Creating a pod to test consume secrets +Sep 7 08:58:53.521: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-c2f16884-e5e0-41ae-80b3-7099228c122a" in namespace "projected-9328" to be "Succeeded or Failed" +Sep 7 08:58:53.544: INFO: Pod "pod-projected-secrets-c2f16884-e5e0-41ae-80b3-7099228c122a": Phase="Pending", Reason="", readiness=false. Elapsed: 23.152584ms +Sep 7 08:58:55.554: INFO: Pod "pod-projected-secrets-c2f16884-e5e0-41ae-80b3-7099228c122a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.033119947s +Sep 7 08:58:57.565: INFO: Pod "pod-projected-secrets-c2f16884-e5e0-41ae-80b3-7099228c122a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.044145568s +STEP: Saw pod success +Sep 7 08:58:57.565: INFO: Pod "pod-projected-secrets-c2f16884-e5e0-41ae-80b3-7099228c122a" satisfied condition "Succeeded or Failed" +Sep 7 08:58:57.568: INFO: Trying to get logs from node 172.31.51.96 pod pod-projected-secrets-c2f16884-e5e0-41ae-80b3-7099228c122a container projected-secret-volume-test: +STEP: delete the pod +Sep 7 08:58:57.618: INFO: Waiting for pod pod-projected-secrets-c2f16884-e5e0-41ae-80b3-7099228c122a to disappear +Sep 7 08:58:57.622: INFO: Pod pod-projected-secrets-c2f16884-e5e0-41ae-80b3-7099228c122a no longer exists +[AfterEach] [sig-storage] Projected secret + test/e2e/framework/framework.go:188 +Sep 7 08:58:57.622: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "projected-9328" for this suite. 
+•{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":356,"completed":302,"skipped":5795,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-network] Services + should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance] + test/e2e/framework/framework.go:652 +[BeforeEach] [sig-network] Services + test/e2e/framework/framework.go:187 +STEP: Creating a kubernetes client +Sep 7 08:58:57.629: INFO: >>> kubeConfig: /tmp/kubeconfig-1156948534 +STEP: Building a namespace api object, basename services +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[BeforeEach] [sig-network] Services + test/e2e/network/service.go:758 +[It] should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance] + test/e2e/framework/framework.go:652 +STEP: creating service in namespace services-9859 +STEP: creating service affinity-nodeport-transition in namespace services-9859 +STEP: creating replication controller affinity-nodeport-transition in namespace services-9859 +I0907 08:58:57.681756 19 runners.go:193] Created replication controller with name: affinity-nodeport-transition, namespace: services-9859, replica count: 3 +I0907 08:59:00.734571 19 runners.go:193] affinity-nodeport-transition Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady +Sep 7 08:59:00.761: INFO: Creating new exec pod +Sep 7 08:59:03.832: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1156948534 --namespace=services-9859 exec execpod-affinityzk2rw -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport-transition 80' +Sep 7 08:59:04.041: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 affinity-nodeport-transition 80\nConnection to 
affinity-nodeport-transition 80 port [tcp/http] succeeded!\n" +Sep 7 08:59:04.041: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request" +Sep 7 08:59:04.041: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1156948534 --namespace=services-9859 exec execpod-affinityzk2rw -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.68.215.30 80' +Sep 7 08:59:04.242: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 10.68.215.30 80\nConnection to 10.68.215.30 80 port [tcp/http] succeeded!\n" +Sep 7 08:59:04.242: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request" +Sep 7 08:59:04.242: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1156948534 --namespace=services-9859 exec execpod-affinityzk2rw -- /bin/sh -x -c echo hostName | nc -v -t -w 2 172.31.51.96 31495' +Sep 7 08:59:04.424: INFO: stderr: "+ nc -v -t -w 2 172.31.51.96 31495\n+ echo hostName\nConnection to 172.31.51.96 31495 port [tcp/*] succeeded!\n" +Sep 7 08:59:04.424: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request" +Sep 7 08:59:04.424: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1156948534 --namespace=services-9859 exec execpod-affinityzk2rw -- /bin/sh -x -c echo hostName | nc -v -t -w 2 172.31.51.97 31495' +Sep 7 08:59:04.790: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 172.31.51.97 31495\nConnection to 172.31.51.97 31495 port [tcp/*] succeeded!\n" +Sep 7 08:59:04.790: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request" +Sep 7 08:59:04.804: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1156948534 --namespace=services-9859 exec execpod-affinityzk2rw -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 
http://172.31.51.96:31495/ ; done' +Sep 7 08:59:05.323: INFO: stderr: "+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.31.51.96:31495/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.31.51.96:31495/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.31.51.96:31495/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.31.51.96:31495/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.31.51.96:31495/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.31.51.96:31495/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.31.51.96:31495/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.31.51.96:31495/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.31.51.96:31495/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.31.51.96:31495/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.31.51.96:31495/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.31.51.96:31495/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.31.51.96:31495/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.31.51.96:31495/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.31.51.96:31495/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.31.51.96:31495/\n" +Sep 7 08:59:05.323: INFO: stdout: "\naffinity-nodeport-transition-cgc2b\naffinity-nodeport-transition-vrb2x\naffinity-nodeport-transition-jwjnb\naffinity-nodeport-transition-cgc2b\naffinity-nodeport-transition-vrb2x\naffinity-nodeport-transition-jwjnb\naffinity-nodeport-transition-cgc2b\naffinity-nodeport-transition-vrb2x\naffinity-nodeport-transition-jwjnb\naffinity-nodeport-transition-cgc2b\naffinity-nodeport-transition-vrb2x\naffinity-nodeport-transition-jwjnb\naffinity-nodeport-transition-cgc2b\naffinity-nodeport-transition-vrb2x\naffinity-nodeport-transition-jwjnb\naffinity-nodeport-transition-cgc2b" +Sep 7 08:59:05.323: INFO: Received response from host: affinity-nodeport-transition-cgc2b +Sep 7 08:59:05.323: INFO: Received response from host: affinity-nodeport-transition-vrb2x +Sep 7 
08:59:05.323: INFO: Received response from host: affinity-nodeport-transition-jwjnb +Sep 7 08:59:05.323: INFO: Received response from host: affinity-nodeport-transition-cgc2b +Sep 7 08:59:05.323: INFO: Received response from host: affinity-nodeport-transition-vrb2x +Sep 7 08:59:05.323: INFO: Received response from host: affinity-nodeport-transition-jwjnb +Sep 7 08:59:05.323: INFO: Received response from host: affinity-nodeport-transition-cgc2b +Sep 7 08:59:05.323: INFO: Received response from host: affinity-nodeport-transition-vrb2x +Sep 7 08:59:05.323: INFO: Received response from host: affinity-nodeport-transition-jwjnb +Sep 7 08:59:05.323: INFO: Received response from host: affinity-nodeport-transition-cgc2b +Sep 7 08:59:05.323: INFO: Received response from host: affinity-nodeport-transition-vrb2x +Sep 7 08:59:05.323: INFO: Received response from host: affinity-nodeport-transition-jwjnb +Sep 7 08:59:05.323: INFO: Received response from host: affinity-nodeport-transition-cgc2b +Sep 7 08:59:05.323: INFO: Received response from host: affinity-nodeport-transition-vrb2x +Sep 7 08:59:05.323: INFO: Received response from host: affinity-nodeport-transition-jwjnb +Sep 7 08:59:05.323: INFO: Received response from host: affinity-nodeport-transition-cgc2b +Sep 7 08:59:05.336: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1156948534 --namespace=services-9859 exec execpod-affinityzk2rw -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://172.31.51.96:31495/ ; done' +Sep 7 08:59:05.851: INFO: stderr: "+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.31.51.96:31495/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.31.51.96:31495/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.31.51.96:31495/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.31.51.96:31495/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.31.51.96:31495/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.31.51.96:31495/\n+ 
echo\n+ curl -q -s --connect-timeout 2 http://172.31.51.96:31495/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.31.51.96:31495/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.31.51.96:31495/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.31.51.96:31495/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.31.51.96:31495/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.31.51.96:31495/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.31.51.96:31495/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.31.51.96:31495/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.31.51.96:31495/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.31.51.96:31495/\n" +Sep 7 08:59:05.851: INFO: stdout: "\naffinity-nodeport-transition-jwjnb\naffinity-nodeport-transition-jwjnb\naffinity-nodeport-transition-jwjnb\naffinity-nodeport-transition-jwjnb\naffinity-nodeport-transition-jwjnb\naffinity-nodeport-transition-jwjnb\naffinity-nodeport-transition-jwjnb\naffinity-nodeport-transition-jwjnb\naffinity-nodeport-transition-jwjnb\naffinity-nodeport-transition-jwjnb\naffinity-nodeport-transition-jwjnb\naffinity-nodeport-transition-jwjnb\naffinity-nodeport-transition-jwjnb\naffinity-nodeport-transition-jwjnb\naffinity-nodeport-transition-jwjnb\naffinity-nodeport-transition-jwjnb" +Sep 7 08:59:05.851: INFO: Received response from host: affinity-nodeport-transition-jwjnb +Sep 7 08:59:05.851: INFO: Received response from host: affinity-nodeport-transition-jwjnb +Sep 7 08:59:05.851: INFO: Received response from host: affinity-nodeport-transition-jwjnb +Sep 7 08:59:05.851: INFO: Received response from host: affinity-nodeport-transition-jwjnb +Sep 7 08:59:05.851: INFO: Received response from host: affinity-nodeport-transition-jwjnb +Sep 7 08:59:05.851: INFO: Received response from host: affinity-nodeport-transition-jwjnb +Sep 7 08:59:05.851: INFO: Received response from host: affinity-nodeport-transition-jwjnb +Sep 7 08:59:05.851: INFO: Received response from host: 
affinity-nodeport-transition-jwjnb +Sep 7 08:59:05.851: INFO: Received response from host: affinity-nodeport-transition-jwjnb +Sep 7 08:59:05.851: INFO: Received response from host: affinity-nodeport-transition-jwjnb +Sep 7 08:59:05.851: INFO: Received response from host: affinity-nodeport-transition-jwjnb +Sep 7 08:59:05.851: INFO: Received response from host: affinity-nodeport-transition-jwjnb +Sep 7 08:59:05.851: INFO: Received response from host: affinity-nodeport-transition-jwjnb +Sep 7 08:59:05.851: INFO: Received response from host: affinity-nodeport-transition-jwjnb +Sep 7 08:59:05.851: INFO: Received response from host: affinity-nodeport-transition-jwjnb +Sep 7 08:59:05.851: INFO: Received response from host: affinity-nodeport-transition-jwjnb +Sep 7 08:59:05.851: INFO: Cleaning up the exec pod +STEP: deleting ReplicationController affinity-nodeport-transition in namespace services-9859, will wait for the garbage collector to delete the pods +Sep 7 08:59:05.924: INFO: Deleting ReplicationController affinity-nodeport-transition took: 4.652526ms +Sep 7 08:59:06.026: INFO: Terminating ReplicationController affinity-nodeport-transition pods took: 101.95375ms +[AfterEach] [sig-network] Services + test/e2e/framework/framework.go:188 +Sep 7 08:59:08.622: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "services-9859" for this suite. 
+[AfterEach] [sig-network] Services + test/e2e/network/service.go:762 + +• [SLOW TEST:11.034 seconds] +[sig-network] Services +test/e2e/network/common/framework.go:23 + should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance] + test/e2e/framework/framework.go:652 +------------------------------ +{"msg":"PASSED [sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","total":356,"completed":303,"skipped":5823,"failed":0} +S +------------------------------ +[sig-storage] Downward API volume + should update annotations on modification [NodeConformance] [Conformance] + test/e2e/framework/framework.go:652 +[BeforeEach] [sig-storage] Downward API volume + test/e2e/framework/framework.go:187 +STEP: Creating a kubernetes client +Sep 7 08:59:08.662: INFO: >>> kubeConfig: /tmp/kubeconfig-1156948534 +STEP: Building a namespace api object, basename downward-api +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[BeforeEach] [sig-storage] Downward API volume + test/e2e/common/storage/downwardapi_volume.go:43 +[It] should update annotations on modification [NodeConformance] [Conformance] + test/e2e/framework/framework.go:652 +STEP: Creating the pod +Sep 7 08:59:08.884: INFO: The status of Pod annotationupdate024a804d-b51c-4911-b351-2be7d7e10783 is Pending, waiting for it to be Running (with Ready = true) +Sep 7 08:59:10.903: INFO: The status of Pod annotationupdate024a804d-b51c-4911-b351-2be7d7e10783 is Running (Ready = true) +Sep 7 08:59:11.426: INFO: Successfully updated pod "annotationupdate024a804d-b51c-4911-b351-2be7d7e10783" +[AfterEach] [sig-storage] Downward API volume + test/e2e/framework/framework.go:188 +Sep 7 08:59:13.440: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "downward-api-8120" for this suite. 
+•{"msg":"PASSED [sig-storage] Downward API volume should update annotations on modification [NodeConformance] [Conformance]","total":356,"completed":304,"skipped":5824,"failed":0} +SSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition + creating/deleting custom resource definition objects works [Conformance] + test/e2e/framework/framework.go:652 +[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] + test/e2e/framework/framework.go:187 +STEP: Creating a kubernetes client +Sep 7 08:59:13.458: INFO: >>> kubeConfig: /tmp/kubeconfig-1156948534 +STEP: Building a namespace api object, basename custom-resource-definition +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[It] creating/deleting custom resource definition objects works [Conformance] + test/e2e/framework/framework.go:652 +Sep 7 08:59:13.499: INFO: >>> kubeConfig: /tmp/kubeconfig-1156948534 +[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] + test/e2e/framework/framework.go:188 +Sep 7 08:59:14.552: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "custom-resource-definition-3522" for this suite. 
+•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition creating/deleting custom resource definition objects works [Conformance]","total":356,"completed":305,"skipped":5845,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-node] Downward API + should provide pod UID as env vars [NodeConformance] [Conformance] + test/e2e/framework/framework.go:652 +[BeforeEach] [sig-node] Downward API + test/e2e/framework/framework.go:187 +STEP: Creating a kubernetes client +Sep 7 08:59:14.574: INFO: >>> kubeConfig: /tmp/kubeconfig-1156948534 +STEP: Building a namespace api object, basename downward-api +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[It] should provide pod UID as env vars [NodeConformance] [Conformance] + test/e2e/framework/framework.go:652 +STEP: Creating a pod to test downward api env vars +Sep 7 08:59:14.694: INFO: Waiting up to 5m0s for pod "downward-api-ca665290-0071-46cf-be2c-36f7d204bef4" in namespace "downward-api-8922" to be "Succeeded or Failed" +Sep 7 08:59:14.719: INFO: Pod "downward-api-ca665290-0071-46cf-be2c-36f7d204bef4": Phase="Pending", Reason="", readiness=false. Elapsed: 24.458186ms +Sep 7 08:59:16.725: INFO: Pod "downward-api-ca665290-0071-46cf-be2c-36f7d204bef4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.030702921s +Sep 7 08:59:18.762: INFO: Pod "downward-api-ca665290-0071-46cf-be2c-36f7d204bef4": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.068200968s +STEP: Saw pod success +Sep 7 08:59:18.762: INFO: Pod "downward-api-ca665290-0071-46cf-be2c-36f7d204bef4" satisfied condition "Succeeded or Failed" +Sep 7 08:59:18.774: INFO: Trying to get logs from node 172.31.51.96 pod downward-api-ca665290-0071-46cf-be2c-36f7d204bef4 container dapi-container: +STEP: delete the pod +Sep 7 08:59:18.857: INFO: Waiting for pod downward-api-ca665290-0071-46cf-be2c-36f7d204bef4 to disappear +Sep 7 08:59:18.882: INFO: Pod downward-api-ca665290-0071-46cf-be2c-36f7d204bef4 no longer exists +[AfterEach] [sig-node] Downward API + test/e2e/framework/framework.go:188 +Sep 7 08:59:18.882: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "downward-api-8922" for this suite. +•{"msg":"PASSED [sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance]","total":356,"completed":306,"skipped":5900,"failed":0} +SSSSSSS +------------------------------ +[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] + custom resource defaulting for requests and from storage works [Conformance] + test/e2e/framework/framework.go:652 +[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] + test/e2e/framework/framework.go:187 +STEP: Creating a kubernetes client +Sep 7 08:59:18.947: INFO: >>> kubeConfig: /tmp/kubeconfig-1156948534 +STEP: Building a namespace api object, basename custom-resource-definition +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[It] custom resource defaulting for requests and from storage works [Conformance] + test/e2e/framework/framework.go:652 +Sep 7 08:59:19.184: INFO: >>> kubeConfig: /tmp/kubeconfig-1156948534 +[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] + test/e2e/framework/framework.go:188 +Sep 7 08:59:22.531: INFO: Waiting up to 3m0s for 
all (but 0) nodes to be ready +STEP: Destroying namespace "custom-resource-definition-7525" for this suite. +•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] custom resource defaulting for requests and from storage works [Conformance]","total":356,"completed":307,"skipped":5907,"failed":0} +SSS +------------------------------ +[sig-storage] Downward API volume + should provide container's cpu limit [NodeConformance] [Conformance] + test/e2e/framework/framework.go:652 +[BeforeEach] [sig-storage] Downward API volume + test/e2e/framework/framework.go:187 +STEP: Creating a kubernetes client +Sep 7 08:59:22.543: INFO: >>> kubeConfig: /tmp/kubeconfig-1156948534 +STEP: Building a namespace api object, basename downward-api +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[BeforeEach] [sig-storage] Downward API volume + test/e2e/common/storage/downwardapi_volume.go:43 +[It] should provide container's cpu limit [NodeConformance] [Conformance] + test/e2e/framework/framework.go:652 +STEP: Creating a pod to test downward API volume plugin +Sep 7 08:59:22.643: INFO: Waiting up to 5m0s for pod "downwardapi-volume-07e4857c-ed4f-406f-96f5-c251013e84bf" in namespace "downward-api-1837" to be "Succeeded or Failed" +Sep 7 08:59:22.651: INFO: Pod "downwardapi-volume-07e4857c-ed4f-406f-96f5-c251013e84bf": Phase="Pending", Reason="", readiness=false. Elapsed: 8.624052ms +Sep 7 08:59:24.665: INFO: Pod "downwardapi-volume-07e4857c-ed4f-406f-96f5-c251013e84bf": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022715667s +Sep 7 08:59:26.695: INFO: Pod "downwardapi-volume-07e4857c-ed4f-406f-96f5-c251013e84bf": Phase="Pending", Reason="", readiness=false. Elapsed: 4.052777146s +Sep 7 08:59:28.734: INFO: Pod "downwardapi-volume-07e4857c-ed4f-406f-96f5-c251013e84bf": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.09100573s +STEP: Saw pod success +Sep 7 08:59:28.734: INFO: Pod "downwardapi-volume-07e4857c-ed4f-406f-96f5-c251013e84bf" satisfied condition "Succeeded or Failed" +Sep 7 08:59:28.764: INFO: Trying to get logs from node 172.31.51.96 pod downwardapi-volume-07e4857c-ed4f-406f-96f5-c251013e84bf container client-container: +STEP: delete the pod +Sep 7 08:59:28.839: INFO: Waiting for pod downwardapi-volume-07e4857c-ed4f-406f-96f5-c251013e84bf to disappear +Sep 7 08:59:28.860: INFO: Pod downwardapi-volume-07e4857c-ed4f-406f-96f5-c251013e84bf no longer exists +[AfterEach] [sig-storage] Downward API volume + test/e2e/framework/framework.go:188 +Sep 7 08:59:28.860: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "downward-api-1837" for this suite. + +• [SLOW TEST:6.325 seconds] +[sig-storage] Downward API volume +test/e2e/common/storage/framework.go:23 + should provide container's cpu limit [NodeConformance] [Conformance] + test/e2e/framework/framework.go:652 +------------------------------ +{"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu limit [NodeConformance] [Conformance]","total":356,"completed":308,"skipped":5910,"failed":0} +SS +------------------------------ +[sig-node] Variable Expansion + should verify that a failing subpath expansion can be modified during the lifecycle of a container [Slow] [Conformance] + test/e2e/framework/framework.go:652 +[BeforeEach] [sig-node] Variable Expansion + test/e2e/framework/framework.go:187 +STEP: Creating a kubernetes client +Sep 7 08:59:28.867: INFO: >>> kubeConfig: /tmp/kubeconfig-1156948534 +STEP: Building a namespace api object, basename var-expansion +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[It] should verify that a failing subpath expansion can be modified during the lifecycle of a container [Slow] [Conformance] + 
test/e2e/framework/framework.go:652 +STEP: creating the pod with failed condition +STEP: updating the pod +Sep 7 09:01:29.468: INFO: Successfully updated pod "var-expansion-107010e8-970e-442d-8bec-aa003425dc5d" +STEP: waiting for pod running +STEP: deleting the pod gracefully +Sep 7 09:01:31.477: INFO: Deleting pod "var-expansion-107010e8-970e-442d-8bec-aa003425dc5d" in namespace "var-expansion-4068" +Sep 7 09:01:31.490: INFO: Wait up to 5m0s for pod "var-expansion-107010e8-970e-442d-8bec-aa003425dc5d" to be fully deleted +[AfterEach] [sig-node] Variable Expansion + test/e2e/framework/framework.go:188 +Sep 7 09:02:03.508: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "var-expansion-4068" for this suite. + +• [SLOW TEST:154.650 seconds] +[sig-node] Variable Expansion +test/e2e/common/node/framework.go:23 + should verify that a failing subpath expansion can be modified during the lifecycle of a container [Slow] [Conformance] + test/e2e/framework/framework.go:652 +------------------------------ +{"msg":"PASSED [sig-node] Variable Expansion should verify that a failing subpath expansion can be modified during the lifecycle of a container [Slow] [Conformance]","total":356,"completed":309,"skipped":5912,"failed":0} +SSSSSS +------------------------------ +[sig-apps] CronJob + should support CronJob API operations [Conformance] + test/e2e/framework/framework.go:652 +[BeforeEach] [sig-apps] CronJob + test/e2e/framework/framework.go:187 +STEP: Creating a kubernetes client +Sep 7 09:02:03.518: INFO: >>> kubeConfig: /tmp/kubeconfig-1156948534 +STEP: Building a namespace api object, basename cronjob +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[It] should support CronJob API operations [Conformance] + test/e2e/framework/framework.go:652 +STEP: Creating a cronjob +STEP: creating +STEP: getting +STEP: listing +STEP: watching +Sep 7 
09:02:03.568: INFO: starting watch +STEP: cluster-wide listing +STEP: cluster-wide watching +Sep 7 09:02:03.572: INFO: starting watch +STEP: patching +STEP: updating +Sep 7 09:02:03.588: INFO: waiting for watch events with expected annotations +Sep 7 09:02:03.588: INFO: saw patched and updated annotations +STEP: patching /status +STEP: updating /status +STEP: get /status +STEP: deleting +STEP: deleting a collection +[AfterEach] [sig-apps] CronJob + test/e2e/framework/framework.go:188 +Sep 7 09:02:03.620: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "cronjob-5686" for this suite. +•{"msg":"PASSED [sig-apps] CronJob should support CronJob API operations [Conformance]","total":356,"completed":310,"skipped":5918,"failed":0} +SSSSSSS +------------------------------ +[sig-api-machinery] ResourceQuota + should create a ResourceQuota and capture the life of a pod. [Conformance] + test/e2e/framework/framework.go:652 +[BeforeEach] [sig-api-machinery] ResourceQuota + test/e2e/framework/framework.go:187 +STEP: Creating a kubernetes client +Sep 7 09:02:03.627: INFO: >>> kubeConfig: /tmp/kubeconfig-1156948534 +STEP: Building a namespace api object, basename resourcequota +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[It] should create a ResourceQuota and capture the life of a pod. 
[Conformance] + test/e2e/framework/framework.go:652 +STEP: Counting existing ResourceQuota +STEP: Creating a ResourceQuota +STEP: Ensuring resource quota status is calculated +STEP: Creating a Pod that fits quota +STEP: Ensuring ResourceQuota status captures the pod usage +STEP: Not allowing a pod to be created that exceeds remaining quota +STEP: Not allowing a pod to be created that exceeds remaining quota(validation on extended resources) +STEP: Ensuring a pod cannot update its resource requirements +STEP: Ensuring attempts to update pod resource requirements did not change quota usage +STEP: Deleting the pod +STEP: Ensuring resource quota status released the pod usage +[AfterEach] [sig-api-machinery] ResourceQuota + test/e2e/framework/framework.go:188 +Sep 7 09:02:16.772: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "resourcequota-6549" for this suite. + +• [SLOW TEST:13.156 seconds] +[sig-api-machinery] ResourceQuota +test/e2e/apimachinery/framework.go:23 + should create a ResourceQuota and capture the life of a pod. [Conformance] + test/e2e/framework/framework.go:652 +------------------------------ +{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a pod. 
[Conformance]","total":356,"completed":311,"skipped":5925,"failed":0} +[sig-storage] Projected configMap + should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/framework/framework.go:652 +[BeforeEach] [sig-storage] Projected configMap + test/e2e/framework/framework.go:187 +STEP: Creating a kubernetes client +Sep 7 09:02:16.783: INFO: >>> kubeConfig: /tmp/kubeconfig-1156948534 +STEP: Building a namespace api object, basename projected +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/framework/framework.go:652 +STEP: Creating configMap with name projected-configmap-test-volume-map-8241d0e8-42bb-45a5-8f19-37f6298487fb +STEP: Creating a pod to test consume configMaps +Sep 7 09:02:16.846: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-93aeb425-a5fa-4606-b2ae-fb28380c4d3c" in namespace "projected-7177" to be "Succeeded or Failed" +Sep 7 09:02:16.856: INFO: Pod "pod-projected-configmaps-93aeb425-a5fa-4606-b2ae-fb28380c4d3c": Phase="Pending", Reason="", readiness=false. Elapsed: 10.514659ms +Sep 7 09:02:18.864: INFO: Pod "pod-projected-configmaps-93aeb425-a5fa-4606-b2ae-fb28380c4d3c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.018121575s +Sep 7 09:02:20.877: INFO: Pod "pod-projected-configmaps-93aeb425-a5fa-4606-b2ae-fb28380c4d3c": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.031173464s +STEP: Saw pod success +Sep 7 09:02:20.877: INFO: Pod "pod-projected-configmaps-93aeb425-a5fa-4606-b2ae-fb28380c4d3c" satisfied condition "Succeeded or Failed" +Sep 7 09:02:20.881: INFO: Trying to get logs from node 172.31.51.96 pod pod-projected-configmaps-93aeb425-a5fa-4606-b2ae-fb28380c4d3c container agnhost-container: +STEP: delete the pod +Sep 7 09:02:20.908: INFO: Waiting for pod pod-projected-configmaps-93aeb425-a5fa-4606-b2ae-fb28380c4d3c to disappear +Sep 7 09:02:20.912: INFO: Pod pod-projected-configmaps-93aeb425-a5fa-4606-b2ae-fb28380c4d3c no longer exists +[AfterEach] [sig-storage] Projected configMap + test/e2e/framework/framework.go:188 +Sep 7 09:02:20.912: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "projected-7177" for this suite. +•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":356,"completed":312,"skipped":5925,"failed":0} +S +------------------------------ +[sig-apps] CronJob + should not schedule jobs when suspended [Slow] [Conformance] + test/e2e/framework/framework.go:652 +[BeforeEach] [sig-apps] CronJob + test/e2e/framework/framework.go:187 +STEP: Creating a kubernetes client +Sep 7 09:02:20.921: INFO: >>> kubeConfig: /tmp/kubeconfig-1156948534 +STEP: Building a namespace api object, basename cronjob +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[It] should not schedule jobs when suspended [Slow] [Conformance] + test/e2e/framework/framework.go:652 +STEP: Creating a suspended cronjob +STEP: Ensuring no jobs are scheduled +STEP: Ensuring no job exists by listing jobs explicitly +STEP: Removing cronjob +[AfterEach] [sig-apps] CronJob + test/e2e/framework/framework.go:188 +Sep 7 09:07:21.038: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: 
Destroying namespace "cronjob-6909" for this suite. + +• [SLOW TEST:300.164 seconds] +[sig-apps] CronJob +test/e2e/apps/framework.go:23 + should not schedule jobs when suspended [Slow] [Conformance] + test/e2e/framework/framework.go:652 +------------------------------ +{"msg":"PASSED [sig-apps] CronJob should not schedule jobs when suspended [Slow] [Conformance]","total":356,"completed":313,"skipped":5926,"failed":0} +S +------------------------------ +[sig-network] DNS + should provide DNS for the cluster [Conformance] + test/e2e/framework/framework.go:652 +[BeforeEach] [sig-network] DNS + test/e2e/framework/framework.go:187 +STEP: Creating a kubernetes client +Sep 7 09:07:21.085: INFO: >>> kubeConfig: /tmp/kubeconfig-1156948534 +STEP: Building a namespace api object, basename dns +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[It] should provide DNS for the cluster [Conformance] + test/e2e/framework/framework.go:652 +STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc.cluster.local;sleep 1; done + +STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc.cluster.local;sleep 1; done + +STEP: creating a pod to probe DNS +STEP: submitting the pod to kubernetes +STEP: retrieving the pod 
+STEP: looking for the results for each expected name from probers +Sep 7 09:07:23.243: INFO: DNS probes using dns-9439/dns-test-0b3cdede-8752-4a49-8331-5f1e3a8d5a6f succeeded + +STEP: deleting the pod +[AfterEach] [sig-network] DNS + test/e2e/framework/framework.go:188 +Sep 7 09:07:23.257: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "dns-9439" for this suite. +•{"msg":"PASSED [sig-network] DNS should provide DNS for the cluster [Conformance]","total":356,"completed":314,"skipped":5927,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-node] Pods + should support retrieving logs from the container over websockets [NodeConformance] [Conformance] + test/e2e/framework/framework.go:652 +[BeforeEach] [sig-node] Pods + test/e2e/framework/framework.go:187 +STEP: Creating a kubernetes client +Sep 7 09:07:23.269: INFO: >>> kubeConfig: /tmp/kubeconfig-1156948534 +STEP: Building a namespace api object, basename pods +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[BeforeEach] [sig-node] Pods + test/e2e/common/node/pods.go:191 +[It] should support retrieving logs from the container over websockets [NodeConformance] [Conformance] + test/e2e/framework/framework.go:652 +Sep 7 09:07:23.308: INFO: >>> kubeConfig: /tmp/kubeconfig-1156948534 +STEP: creating the pod +STEP: submitting the pod to kubernetes +Sep 7 09:07:23.320: INFO: The status of Pod pod-logs-websocket-803f3f0d-6b5c-4eb4-aca7-4d3c410c4a91 is Pending, waiting for it to be Running (with Ready = true) +Sep 7 09:07:25.323: INFO: The status of Pod pod-logs-websocket-803f3f0d-6b5c-4eb4-aca7-4d3c410c4a91 is Running (Ready = true) +[AfterEach] [sig-node] Pods + test/e2e/framework/framework.go:188 +Sep 7 09:07:25.351: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "pods-1039" for this suite. 
+•{"msg":"PASSED [sig-node] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance]","total":356,"completed":315,"skipped":5956,"failed":0} +SSSS +------------------------------ +[sig-cli] Kubectl client Kubectl patch + should add annotations for pods in rc [Conformance] + test/e2e/framework/framework.go:652 +[BeforeEach] [sig-cli] Kubectl client + test/e2e/framework/framework.go:187 +STEP: Creating a kubernetes client +Sep 7 09:07:25.360: INFO: >>> kubeConfig: /tmp/kubeconfig-1156948534 +STEP: Building a namespace api object, basename kubectl +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[BeforeEach] [sig-cli] Kubectl client + test/e2e/kubectl/kubectl.go:245 +[It] should add annotations for pods in rc [Conformance] + test/e2e/framework/framework.go:652 +STEP: creating Agnhost RC +Sep 7 09:07:25.446: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1156948534 --namespace=kubectl-4508 create -f -' +Sep 7 09:07:27.941: INFO: stderr: "" +Sep 7 09:07:27.941: INFO: stdout: "replicationcontroller/agnhost-primary created\n" +STEP: Waiting for Agnhost primary to start. +Sep 7 09:07:28.956: INFO: Selector matched 1 pods for map[app:agnhost] +Sep 7 09:07:28.956: INFO: Found 0 / 1 +Sep 7 09:07:29.950: INFO: Selector matched 1 pods for map[app:agnhost] +Sep 7 09:07:29.950: INFO: Found 0 / 1 +Sep 7 09:07:30.955: INFO: Selector matched 1 pods for map[app:agnhost] +Sep 7 09:07:30.955: INFO: Found 1 / 1 +Sep 7 09:07:30.955: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 +STEP: patching all pods +Sep 7 09:07:30.958: INFO: Selector matched 1 pods for map[app:agnhost] +Sep 7 09:07:30.958: INFO: ForEach: Found 1 pods from the filter. Now looping through them. 
+Sep 7 09:07:30.958: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1156948534 --namespace=kubectl-4508 patch pod agnhost-primary-7ttsp -p {"metadata":{"annotations":{"x":"y"}}}' +Sep 7 09:07:31.074: INFO: stderr: "" +Sep 7 09:07:31.074: INFO: stdout: "pod/agnhost-primary-7ttsp patched\n" +STEP: checking annotations +Sep 7 09:07:31.077: INFO: Selector matched 1 pods for map[app:agnhost] +Sep 7 09:07:31.077: INFO: ForEach: Found 1 pods from the filter. Now looping through them. +[AfterEach] [sig-cli] Kubectl client + test/e2e/framework/framework.go:188 +Sep 7 09:07:31.077: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "kubectl-4508" for this suite. + +• [SLOW TEST:5.724 seconds] +[sig-cli] Kubectl client +test/e2e/kubectl/framework.go:23 + Kubectl patch + test/e2e/kubectl/kubectl.go:1486 + should add annotations for pods in rc [Conformance] + test/e2e/framework/framework.go:652 +------------------------------ +{"msg":"PASSED [sig-cli] Kubectl client Kubectl patch should add annotations for pods in rc [Conformance]","total":356,"completed":316,"skipped":5960,"failed":0} +SSSSSSSSSSSSSS +------------------------------ +[sig-node] PodTemplates + should run the lifecycle of PodTemplates [Conformance] + test/e2e/framework/framework.go:652 +[BeforeEach] [sig-node] PodTemplates + test/e2e/framework/framework.go:187 +STEP: Creating a kubernetes client +Sep 7 09:07:31.084: INFO: >>> kubeConfig: /tmp/kubeconfig-1156948534 +STEP: Building a namespace api object, basename podtemplate +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[It] should run the lifecycle of PodTemplates [Conformance] + test/e2e/framework/framework.go:652 +[AfterEach] [sig-node] PodTemplates + test/e2e/framework/framework.go:188 +Sep 7 09:07:31.203: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace 
"podtemplate-8593" for this suite. +•{"msg":"PASSED [sig-node] PodTemplates should run the lifecycle of PodTemplates [Conformance]","total":356,"completed":317,"skipped":5974,"failed":0} +SSSSSSSSS +------------------------------ +[sig-network] Proxy version v1 + A set of valid responses are returned for both pod and service Proxy [Conformance] + test/e2e/framework/framework.go:652 +[BeforeEach] version v1 + test/e2e/framework/framework.go:187 +STEP: Creating a kubernetes client +Sep 7 09:07:31.225: INFO: >>> kubeConfig: /tmp/kubeconfig-1156948534 +STEP: Building a namespace api object, basename proxy +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[It] A set of valid responses are returned for both pod and service Proxy [Conformance] + test/e2e/framework/framework.go:652 +Sep 7 09:07:31.263: INFO: Creating pod... +Sep 7 09:07:33.282: INFO: Creating service... +Sep 7 09:07:33.294: INFO: Starting http.Client for https://10.68.0.1:443/api/v1/namespaces/proxy-7077/pods/agnhost/proxy?method=DELETE +Sep 7 09:07:33.310: INFO: http.Client request:DELETE | StatusCode:200 | Response:foo | Method:DELETE +Sep 7 09:07:33.310: INFO: Starting http.Client for https://10.68.0.1:443/api/v1/namespaces/proxy-7077/pods/agnhost/proxy?method=OPTIONS +Sep 7 09:07:33.321: INFO: http.Client request:OPTIONS | StatusCode:200 | Response:foo | Method:OPTIONS +Sep 7 09:07:33.321: INFO: Starting http.Client for https://10.68.0.1:443/api/v1/namespaces/proxy-7077/pods/agnhost/proxy?method=PATCH +Sep 7 09:07:33.325: INFO: http.Client request:PATCH | StatusCode:200 | Response:foo | Method:PATCH +Sep 7 09:07:33.325: INFO: Starting http.Client for https://10.68.0.1:443/api/v1/namespaces/proxy-7077/pods/agnhost/proxy?method=POST +Sep 7 09:07:33.328: INFO: http.Client request:POST | StatusCode:200 | Response:foo | Method:POST +Sep 7 09:07:33.328: INFO: Starting http.Client for 
https://10.68.0.1:443/api/v1/namespaces/proxy-7077/pods/agnhost/proxy?method=PUT +Sep 7 09:07:33.331: INFO: http.Client request:PUT | StatusCode:200 | Response:foo | Method:PUT +Sep 7 09:07:33.331: INFO: Starting http.Client for https://10.68.0.1:443/api/v1/namespaces/proxy-7077/services/e2e-proxy-test-service/proxy?method=DELETE +Sep 7 09:07:33.336: INFO: http.Client request:DELETE | StatusCode:200 | Response:foo | Method:DELETE +Sep 7 09:07:33.336: INFO: Starting http.Client for https://10.68.0.1:443/api/v1/namespaces/proxy-7077/services/e2e-proxy-test-service/proxy?method=OPTIONS +Sep 7 09:07:33.350: INFO: http.Client request:OPTIONS | StatusCode:200 | Response:foo | Method:OPTIONS +Sep 7 09:07:33.350: INFO: Starting http.Client for https://10.68.0.1:443/api/v1/namespaces/proxy-7077/services/e2e-proxy-test-service/proxy?method=PATCH +Sep 7 09:07:33.355: INFO: http.Client request:PATCH | StatusCode:200 | Response:foo | Method:PATCH +Sep 7 09:07:33.355: INFO: Starting http.Client for https://10.68.0.1:443/api/v1/namespaces/proxy-7077/services/e2e-proxy-test-service/proxy?method=POST +Sep 7 09:07:33.360: INFO: http.Client request:POST | StatusCode:200 | Response:foo | Method:POST +Sep 7 09:07:33.360: INFO: Starting http.Client for https://10.68.0.1:443/api/v1/namespaces/proxy-7077/services/e2e-proxy-test-service/proxy?method=PUT +Sep 7 09:07:33.363: INFO: http.Client request:PUT | StatusCode:200 | Response:foo | Method:PUT +Sep 7 09:07:33.363: INFO: Starting http.Client for https://10.68.0.1:443/api/v1/namespaces/proxy-7077/pods/agnhost/proxy?method=GET +Sep 7 09:07:33.366: INFO: http.Client request:GET StatusCode:301 +Sep 7 09:07:33.366: INFO: Starting http.Client for https://10.68.0.1:443/api/v1/namespaces/proxy-7077/services/e2e-proxy-test-service/proxy?method=GET +Sep 7 09:07:33.370: INFO: http.Client request:GET StatusCode:301 +Sep 7 09:07:33.370: INFO: Starting http.Client for https://10.68.0.1:443/api/v1/namespaces/proxy-7077/pods/agnhost/proxy?method=HEAD 
+Sep 7 09:07:33.372: INFO: http.Client request:HEAD StatusCode:301 +Sep 7 09:07:33.372: INFO: Starting http.Client for https://10.68.0.1:443/api/v1/namespaces/proxy-7077/services/e2e-proxy-test-service/proxy?method=HEAD +Sep 7 09:07:33.376: INFO: http.Client request:HEAD StatusCode:301 +[AfterEach] version v1 + test/e2e/framework/framework.go:188 +Sep 7 09:07:33.376: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "proxy-7077" for this suite. +•{"msg":"PASSED [sig-network] Proxy version v1 A set of valid responses are returned for both pod and service Proxy [Conformance]","total":356,"completed":318,"skipped":5983,"failed":0} +SSSSSSSSSS +------------------------------ +[sig-apps] Job + should run a job to completion when tasks sometimes fail and are locally restarted [Conformance] + test/e2e/framework/framework.go:652 +[BeforeEach] [sig-apps] Job + test/e2e/framework/framework.go:187 +STEP: Creating a kubernetes client +Sep 7 09:07:33.386: INFO: >>> kubeConfig: /tmp/kubeconfig-1156948534 +STEP: Building a namespace api object, basename job +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[It] should run a job to completion when tasks sometimes fail and are locally restarted [Conformance] + test/e2e/framework/framework.go:652 +STEP: Creating a job +STEP: Ensuring job reaches completions +[AfterEach] [sig-apps] Job + test/e2e/framework/framework.go:188 +Sep 7 09:07:45.427: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "job-1040" for this suite. 
+ +• [SLOW TEST:12.055 seconds] +[sig-apps] Job +test/e2e/apps/framework.go:23 + should run a job to completion when tasks sometimes fail and are locally restarted [Conformance] + test/e2e/framework/framework.go:652 +------------------------------ +{"msg":"PASSED [sig-apps] Job should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]","total":356,"completed":319,"skipped":5993,"failed":0} +SSSSS +------------------------------ +[sig-cli] Kubectl client Guestbook application + should create and stop a working application [Conformance] + test/e2e/framework/framework.go:652 +[BeforeEach] [sig-cli] Kubectl client + test/e2e/framework/framework.go:187 +STEP: Creating a kubernetes client +Sep 7 09:07:45.441: INFO: >>> kubeConfig: /tmp/kubeconfig-1156948534 +STEP: Building a namespace api object, basename kubectl +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[BeforeEach] [sig-cli] Kubectl client + test/e2e/kubectl/kubectl.go:245 +[It] should create and stop a working application [Conformance] + test/e2e/framework/framework.go:652 +STEP: creating all guestbook components +Sep 7 09:07:45.495: INFO: apiVersion: v1 +kind: Service +metadata: + name: agnhost-replica + labels: + app: agnhost + role: replica + tier: backend +spec: + ports: + - port: 6379 + selector: + app: agnhost + role: replica + tier: backend + +Sep 7 09:07:45.495: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1156948534 --namespace=kubectl-3363 create -f -' +Sep 7 09:07:45.822: INFO: stderr: "" +Sep 7 09:07:45.822: INFO: stdout: "service/agnhost-replica created\n" +Sep 7 09:07:45.822: INFO: apiVersion: v1 +kind: Service +metadata: + name: agnhost-primary + labels: + app: agnhost + role: primary + tier: backend +spec: + ports: + - port: 6379 + targetPort: 6379 + selector: + app: agnhost + role: primary + tier: backend + +Sep 7 09:07:45.822: INFO: Running 
'/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1156948534 --namespace=kubectl-3363 create -f -' +Sep 7 09:07:46.212: INFO: stderr: "" +Sep 7 09:07:46.212: INFO: stdout: "service/agnhost-primary created\n" +Sep 7 09:07:46.212: INFO: apiVersion: v1 +kind: Service +metadata: + name: frontend + labels: + app: guestbook + tier: frontend +spec: + # if your cluster supports it, uncomment the following to automatically create + # an external load-balanced IP for the frontend service. + # type: LoadBalancer + ports: + - port: 80 + selector: + app: guestbook + tier: frontend + +Sep 7 09:07:46.212: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1156948534 --namespace=kubectl-3363 create -f -' +Sep 7 09:07:46.552: INFO: stderr: "" +Sep 7 09:07:46.552: INFO: stdout: "service/frontend created\n" +Sep 7 09:07:46.552: INFO: apiVersion: apps/v1 +kind: Deployment +metadata: + name: frontend +spec: + replicas: 3 + selector: + matchLabels: + app: guestbook + tier: frontend + template: + metadata: + labels: + app: guestbook + tier: frontend + spec: + containers: + - name: guestbook-frontend + image: k8s.gcr.io/e2e-test-images/agnhost:2.39 + args: [ "guestbook", "--backend-port", "6379" ] + resources: + requests: + cpu: 100m + memory: 100Mi + ports: + - containerPort: 80 + +Sep 7 09:07:46.552: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1156948534 --namespace=kubectl-3363 create -f -' +Sep 7 09:07:46.929: INFO: stderr: "" +Sep 7 09:07:46.929: INFO: stdout: "deployment.apps/frontend created\n" +Sep 7 09:07:46.929: INFO: apiVersion: apps/v1 +kind: Deployment +metadata: + name: agnhost-primary +spec: + replicas: 1 + selector: + matchLabels: + app: agnhost + role: primary + tier: backend + template: + metadata: + labels: + app: agnhost + role: primary + tier: backend + spec: + containers: + - name: primary + image: k8s.gcr.io/e2e-test-images/agnhost:2.39 + args: [ "guestbook", "--http-port", "6379" ] + resources: + requests: + cpu: 100m + 
memory: 100Mi + ports: + - containerPort: 6379 + +Sep 7 09:07:46.929: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1156948534 --namespace=kubectl-3363 create -f -' +Sep 7 09:07:48.758: INFO: stderr: "" +Sep 7 09:07:48.758: INFO: stdout: "deployment.apps/agnhost-primary created\n" +Sep 7 09:07:48.758: INFO: apiVersion: apps/v1 +kind: Deployment +metadata: + name: agnhost-replica +spec: + replicas: 2 + selector: + matchLabels: + app: agnhost + role: replica + tier: backend + template: + metadata: + labels: + app: agnhost + role: replica + tier: backend + spec: + containers: + - name: replica + image: k8s.gcr.io/e2e-test-images/agnhost:2.39 + args: [ "guestbook", "--replicaof", "agnhost-primary", "--http-port", "6379" ] + resources: + requests: + cpu: 100m + memory: 100Mi + ports: + - containerPort: 6379 + +Sep 7 09:07:48.758: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1156948534 --namespace=kubectl-3363 create -f -' +Sep 7 09:07:49.872: INFO: stderr: "" +Sep 7 09:07:49.872: INFO: stdout: "deployment.apps/agnhost-replica created\n" +STEP: validating guestbook app +Sep 7 09:07:49.872: INFO: Waiting for all frontend pods to be Running. +Sep 7 09:07:54.929: INFO: Waiting for frontend to serve content. +Sep 7 09:07:54.938: INFO: Trying to add a new entry to the guestbook. +Sep 7 09:07:54.947: INFO: Verifying that added entry can be retrieved. +STEP: using delete to clean up resources +Sep 7 09:07:54.951: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1156948534 --namespace=kubectl-3363 delete --grace-period=0 --force -f -' +Sep 7 09:07:55.157: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" +Sep 7 09:07:55.157: INFO: stdout: "service \"agnhost-replica\" force deleted\n" +STEP: using delete to clean up resources +Sep 7 09:07:55.158: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1156948534 --namespace=kubectl-3363 delete --grace-period=0 --force -f -' +Sep 7 09:07:55.538: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" +Sep 7 09:07:55.538: INFO: stdout: "service \"agnhost-primary\" force deleted\n" +STEP: using delete to clean up resources +Sep 7 09:07:55.538: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1156948534 --namespace=kubectl-3363 delete --grace-period=0 --force -f -' +Sep 7 09:07:55.721: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" +Sep 7 09:07:55.721: INFO: stdout: "service \"frontend\" force deleted\n" +STEP: using delete to clean up resources +Sep 7 09:07:55.721: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1156948534 --namespace=kubectl-3363 delete --grace-period=0 --force -f -' +Sep 7 09:07:55.965: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" +Sep 7 09:07:55.965: INFO: stdout: "deployment.apps \"frontend\" force deleted\n" +STEP: using delete to clean up resources +Sep 7 09:07:55.965: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1156948534 --namespace=kubectl-3363 delete --grace-period=0 --force -f -' +Sep 7 09:07:56.220: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" +Sep 7 09:07:56.220: INFO: stdout: "deployment.apps \"agnhost-primary\" force deleted\n" +STEP: using delete to clean up resources +Sep 7 09:07:56.220: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1156948534 --namespace=kubectl-3363 delete --grace-period=0 --force -f -' +Sep 7 09:07:56.471: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" +Sep 7 09:07:56.471: INFO: stdout: "deployment.apps \"agnhost-replica\" force deleted\n" +[AfterEach] [sig-cli] Kubectl client + test/e2e/framework/framework.go:188 +Sep 7 09:07:56.472: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "kubectl-3363" for this suite. + +• [SLOW TEST:11.040 seconds] +[sig-cli] Kubectl client +test/e2e/kubectl/framework.go:23 + Guestbook application + test/e2e/kubectl/kubectl.go:340 + should create and stop a working application [Conformance] + test/e2e/framework/framework.go:652 +------------------------------ +{"msg":"PASSED [sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]","total":356,"completed":320,"skipped":5998,"failed":0} +SSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-network] Services + should serve a basic endpoint from pods [Conformance] + test/e2e/framework/framework.go:652 +[BeforeEach] [sig-network] Services + test/e2e/framework/framework.go:187 +STEP: Creating a kubernetes client +Sep 7 09:07:56.481: INFO: >>> kubeConfig: /tmp/kubeconfig-1156948534 +STEP: Building a namespace api object, basename services +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[BeforeEach] [sig-network] Services + test/e2e/network/service.go:758 +[It] should serve a basic endpoint from pods 
[Conformance] + test/e2e/framework/framework.go:652 +STEP: creating service endpoint-test2 in namespace services-2011 +STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-2011 to expose endpoints map[] +Sep 7 09:07:56.607: INFO: Failed go get Endpoints object: endpoints "endpoint-test2" not found +Sep 7 09:07:57.718: INFO: successfully validated that service endpoint-test2 in namespace services-2011 exposes endpoints map[] +STEP: Creating pod pod1 in namespace services-2011 +Sep 7 09:07:57.777: INFO: The status of Pod pod1 is Pending, waiting for it to be Running (with Ready = true) +Sep 7 09:07:59.811: INFO: The status of Pod pod1 is Running (Ready = true) +STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-2011 to expose endpoints map[pod1:[80]] +Sep 7 09:07:59.830: INFO: successfully validated that service endpoint-test2 in namespace services-2011 exposes endpoints map[pod1:[80]] +STEP: Checking if the Service forwards traffic to pod1 +Sep 7 09:07:59.830: INFO: Creating new exec pod +Sep 7 09:08:04.861: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1156948534 --namespace=services-2011 exec execpodnbwdf -- /bin/sh -x -c echo hostName | nc -v -t -w 2 endpoint-test2 80' +Sep 7 09:08:05.196: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 endpoint-test2 80\nConnection to endpoint-test2 80 port [tcp/http] succeeded!\n" +Sep 7 09:08:05.197: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request" +Sep 7 09:08:05.197: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1156948534 --namespace=services-2011 exec execpodnbwdf -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.68.142.11 80' +Sep 7 09:08:05.668: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 10.68.142.11 80\nConnection to 10.68.142.11 80 port [tcp/http] succeeded!\n" +Sep 7 09:08:05.668: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; 
charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request" +STEP: Creating pod pod2 in namespace services-2011 +Sep 7 09:08:05.705: INFO: The status of Pod pod2 is Pending, waiting for it to be Running (with Ready = true) +Sep 7 09:08:07.738: INFO: The status of Pod pod2 is Running (Ready = true) +STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-2011 to expose endpoints map[pod1:[80] pod2:[80]] +Sep 7 09:08:07.781: INFO: successfully validated that service endpoint-test2 in namespace services-2011 exposes endpoints map[pod1:[80] pod2:[80]] +STEP: Checking if the Service forwards traffic to pod1 and pod2 +Sep 7 09:08:08.782: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1156948534 --namespace=services-2011 exec execpodnbwdf -- /bin/sh -x -c echo hostName | nc -v -t -w 2 endpoint-test2 80' +Sep 7 09:08:08.977: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 endpoint-test2 80\nConnection to endpoint-test2 80 port [tcp/http] succeeded!\n" +Sep 7 09:08:08.977: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request" +Sep 7 09:08:08.977: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1156948534 --namespace=services-2011 exec execpodnbwdf -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.68.142.11 80' +Sep 7 09:08:09.172: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 10.68.142.11 80\nConnection to 10.68.142.11 80 port [tcp/http] succeeded!\n" +Sep 7 09:08:09.172: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request" +STEP: Deleting pod pod1 in namespace services-2011 +STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-2011 to expose endpoints map[pod2:[80]] +Sep 7 09:08:10.235: INFO: successfully validated that service endpoint-test2 in namespace services-2011 exposes endpoints map[pod2:[80]] +STEP: Checking if the Service forwards traffic 
to pod2 +Sep 7 09:08:11.235: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1156948534 --namespace=services-2011 exec execpodnbwdf -- /bin/sh -x -c echo hostName | nc -v -t -w 2 endpoint-test2 80' +Sep 7 09:08:11.649: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 endpoint-test2 80\nConnection to endpoint-test2 80 port [tcp/http] succeeded!\n" +Sep 7 09:08:11.649: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request" +Sep 7 09:08:11.649: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1156948534 --namespace=services-2011 exec execpodnbwdf -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.68.142.11 80' +Sep 7 09:08:11.846: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 10.68.142.11 80\nConnection to 10.68.142.11 80 port [tcp/http] succeeded!\n" +Sep 7 09:08:11.846: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request" +STEP: Deleting pod pod2 in namespace services-2011 +STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-2011 to expose endpoints map[] +Sep 7 09:08:12.915: INFO: successfully validated that service endpoint-test2 in namespace services-2011 exposes endpoints map[] +[AfterEach] [sig-network] Services + test/e2e/framework/framework.go:188 +Sep 7 09:08:12.946: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "services-2011" for this suite. 
+[AfterEach] [sig-network] Services + test/e2e/network/service.go:762 + +• [SLOW TEST:16.539 seconds] +[sig-network] Services +test/e2e/network/common/framework.go:23 + should serve a basic endpoint from pods [Conformance] + test/e2e/framework/framework.go:652 +------------------------------ +{"msg":"PASSED [sig-network] Services should serve a basic endpoint from pods [Conformance]","total":356,"completed":321,"skipped":6019,"failed":0} +SSSSSSSSSSSSSS +------------------------------ +[sig-node] PreStop + should call prestop when killing a pod [Conformance] + test/e2e/framework/framework.go:652 +[BeforeEach] [sig-node] PreStop + test/e2e/framework/framework.go:187 +STEP: Creating a kubernetes client +Sep 7 09:08:13.020: INFO: >>> kubeConfig: /tmp/kubeconfig-1156948534 +STEP: Building a namespace api object, basename prestop +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[BeforeEach] [sig-node] PreStop + test/e2e/node/pre_stop.go:159 +[It] should call prestop when killing a pod [Conformance] + test/e2e/framework/framework.go:652 +STEP: Creating server pod server in namespace prestop-7458 +STEP: Waiting for pods to come up. +STEP: Creating tester pod tester in namespace prestop-7458 +STEP: Deleting pre-stop pod +Sep 7 09:08:22.193: INFO: Saw: { + "Hostname": "server", + "Sent": null, + "Received": { + "prestop": 1 + }, + "Errors": null, + "Log": [ + "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", + "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up." + ], + "StillContactingPeers": true +} +STEP: Deleting the server pod +[AfterEach] [sig-node] PreStop + test/e2e/framework/framework.go:188 +Sep 7 09:08:22.223: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "prestop-7458" for this suite. 
+ +• [SLOW TEST:9.225 seconds] +[sig-node] PreStop +test/e2e/node/framework.go:23 + should call prestop when killing a pod [Conformance] + test/e2e/framework/framework.go:652 +------------------------------ +{"msg":"PASSED [sig-node] PreStop should call prestop when killing a pod [Conformance]","total":356,"completed":322,"skipped":6033,"failed":0} +SSSSSSSSSSS +------------------------------ +[sig-node] Probing container + with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] + test/e2e/framework/framework.go:652 +[BeforeEach] [sig-node] Probing container + test/e2e/framework/framework.go:187 +STEP: Creating a kubernetes client +Sep 7 09:08:22.246: INFO: >>> kubeConfig: /tmp/kubeconfig-1156948534 +STEP: Building a namespace api object, basename container-probe +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[BeforeEach] [sig-node] Probing container + test/e2e/common/node/container_probe.go:61 +[It] with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] + test/e2e/framework/framework.go:652 +Sep 7 09:08:22.343: INFO: The status of Pod test-webserver-c32708fc-1329-4110-848e-f6e7d171dbd9 is Pending, waiting for it to be Running (with Ready = true) +Sep 7 09:08:24.348: INFO: The status of Pod test-webserver-c32708fc-1329-4110-848e-f6e7d171dbd9 is Pending, waiting for it to be Running (with Ready = true) +Sep 7 09:08:26.348: INFO: The status of Pod test-webserver-c32708fc-1329-4110-848e-f6e7d171dbd9 is Running (Ready = false) +Sep 7 09:08:28.356: INFO: The status of Pod test-webserver-c32708fc-1329-4110-848e-f6e7d171dbd9 is Running (Ready = false) +Sep 7 09:08:30.355: INFO: The status of Pod test-webserver-c32708fc-1329-4110-848e-f6e7d171dbd9 is Running (Ready = false) +Sep 7 09:08:32.354: INFO: The status of Pod 
test-webserver-c32708fc-1329-4110-848e-f6e7d171dbd9 is Running (Ready = false) +Sep 7 09:08:34.354: INFO: The status of Pod test-webserver-c32708fc-1329-4110-848e-f6e7d171dbd9 is Running (Ready = false) +Sep 7 09:08:36.349: INFO: The status of Pod test-webserver-c32708fc-1329-4110-848e-f6e7d171dbd9 is Running (Ready = false) +Sep 7 09:08:38.373: INFO: The status of Pod test-webserver-c32708fc-1329-4110-848e-f6e7d171dbd9 is Running (Ready = false) +Sep 7 09:08:40.356: INFO: The status of Pod test-webserver-c32708fc-1329-4110-848e-f6e7d171dbd9 is Running (Ready = false) +Sep 7 09:08:42.354: INFO: The status of Pod test-webserver-c32708fc-1329-4110-848e-f6e7d171dbd9 is Running (Ready = false) +Sep 7 09:08:44.364: INFO: The status of Pod test-webserver-c32708fc-1329-4110-848e-f6e7d171dbd9 is Running (Ready = true) +Sep 7 09:08:44.367: INFO: Container started at 2022-09-07 09:08:23 +0000 UTC, pod became ready at 2022-09-07 09:08:42 +0000 UTC +[AfterEach] [sig-node] Probing container + test/e2e/framework/framework.go:188 +Sep 7 09:08:44.367: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "container-probe-3213" for this suite. 
+ +• [SLOW TEST:22.134 seconds] +[sig-node] Probing container +test/e2e/common/node/framework.go:23 + with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] + test/e2e/framework/framework.go:652 +------------------------------ +{"msg":"PASSED [sig-node] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]","total":356,"completed":323,"skipped":6044,"failed":0} +SS +------------------------------ +[sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] + should perform canary updates and phased rolling updates of template modifications [Conformance] + test/e2e/framework/framework.go:652 +[BeforeEach] [sig-apps] StatefulSet + test/e2e/framework/framework.go:187 +STEP: Creating a kubernetes client +Sep 7 09:08:44.380: INFO: >>> kubeConfig: /tmp/kubeconfig-1156948534 +STEP: Building a namespace api object, basename statefulset +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[BeforeEach] [sig-apps] StatefulSet + test/e2e/apps/statefulset.go:96 +[BeforeEach] Basic StatefulSet functionality [StatefulSetBasic] + test/e2e/apps/statefulset.go:111 +STEP: Creating service test in namespace statefulset-4175 +[It] should perform canary updates and phased rolling updates of template modifications [Conformance] + test/e2e/framework/framework.go:652 +STEP: Creating a new StatefulSet +Sep 7 09:08:44.465: INFO: Found 0 stateful pods, waiting for 3 +Sep 7 09:08:54.470: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true +Sep 7 09:08:54.470: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true +Sep 7 09:08:54.470: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true +STEP: Updating stateful set template: update image from 
k8s.gcr.io/e2e-test-images/httpd:2.4.38-2 to k8s.gcr.io/e2e-test-images/httpd:2.4.39-2 +Sep 7 09:08:54.500: INFO: Updating stateful set ss2 +STEP: Creating a new revision +STEP: Not applying an update when the partition is greater than the number of replicas +STEP: Performing a canary update +Sep 7 09:09:04.536: INFO: Updating stateful set ss2 +Sep 7 09:09:04.550: INFO: Waiting for Pod statefulset-4175/ss2-2 to have revision ss2-5f8764d585 update revision ss2-57bbdd95cb +STEP: Restoring Pods to the correct revision when they are deleted +Sep 7 09:09:14.670: INFO: Found 1 stateful pods, waiting for 3 +Sep 7 09:09:24.719: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true +Sep 7 09:09:24.719: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true +Sep 7 09:09:24.719: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true +STEP: Performing a phased rolling update +Sep 7 09:09:24.749: INFO: Updating stateful set ss2 +Sep 7 09:09:24.791: INFO: Waiting for Pod statefulset-4175/ss2-1 to have revision ss2-5f8764d585 update revision ss2-57bbdd95cb +Sep 7 09:09:34.823: INFO: Updating stateful set ss2 +Sep 7 09:09:34.836: INFO: Waiting for StatefulSet statefulset-4175/ss2 to complete update +Sep 7 09:09:34.836: INFO: Waiting for Pod statefulset-4175/ss2-0 to have revision ss2-5f8764d585 update revision ss2-57bbdd95cb +[AfterEach] Basic StatefulSet functionality [StatefulSetBasic] + test/e2e/apps/statefulset.go:122 +Sep 7 09:09:44.853: INFO: Deleting all statefulset in ns statefulset-4175 +Sep 7 09:09:44.856: INFO: Scaling statefulset ss2 to 0 +Sep 7 09:09:54.887: INFO: Waiting for statefulset status.replicas updated to 0 +Sep 7 09:09:54.891: INFO: Deleting statefulset ss2 +[AfterEach] [sig-apps] StatefulSet + test/e2e/framework/framework.go:188 +Sep 7 09:09:54.948: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "statefulset-4175" 
for this suite. + +• [SLOW TEST:70.585 seconds] +[sig-apps] StatefulSet +test/e2e/apps/framework.go:23 + Basic StatefulSet functionality [StatefulSetBasic] + test/e2e/apps/statefulset.go:101 + should perform canary updates and phased rolling updates of template modifications [Conformance] + test/e2e/framework/framework.go:652 +------------------------------ +{"msg":"PASSED [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance]","total":356,"completed":324,"skipped":6046,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-auth] ServiceAccounts + should run through the lifecycle of a ServiceAccount [Conformance] + test/e2e/framework/framework.go:652 +[BeforeEach] [sig-auth] ServiceAccounts + test/e2e/framework/framework.go:187 +STEP: Creating a kubernetes client +Sep 7 09:09:54.966: INFO: >>> kubeConfig: /tmp/kubeconfig-1156948534 +STEP: Building a namespace api object, basename svcaccounts +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[It] should run through the lifecycle of a ServiceAccount [Conformance] + test/e2e/framework/framework.go:652 +STEP: creating a ServiceAccount +STEP: watching for the ServiceAccount to be added +STEP: patching the ServiceAccount +STEP: finding ServiceAccount in list of all ServiceAccounts (by LabelSelector) +STEP: deleting the ServiceAccount +[AfterEach] [sig-auth] ServiceAccounts + test/e2e/framework/framework.go:188 +Sep 7 09:09:55.062: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "svcaccounts-8669" for this suite. 
+•{"msg":"PASSED [sig-auth] ServiceAccounts should run through the lifecycle of a ServiceAccount [Conformance]","total":356,"completed":325,"skipped":6094,"failed":0} +SSSSSSSSSSSSS +------------------------------ +[sig-apps] DisruptionController + should create a PodDisruptionBudget [Conformance] + test/e2e/framework/framework.go:652 +[BeforeEach] [sig-apps] DisruptionController + test/e2e/framework/framework.go:187 +STEP: Creating a kubernetes client +Sep 7 09:09:55.075: INFO: >>> kubeConfig: /tmp/kubeconfig-1156948534 +STEP: Building a namespace api object, basename disruption +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[BeforeEach] [sig-apps] DisruptionController + test/e2e/apps/disruption.go:71 +[It] should create a PodDisruptionBudget [Conformance] + test/e2e/framework/framework.go:652 +STEP: creating the pdb +STEP: Waiting for the pdb to be processed +STEP: updating the pdb +STEP: Waiting for the pdb to be processed +STEP: patching the pdb +STEP: Waiting for the pdb to be processed +STEP: Waiting for the pdb to be deleted +[AfterEach] [sig-apps] DisruptionController + test/e2e/framework/framework.go:188 +Sep 7 09:09:55.185: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "disruption-4701" for this suite. 
+•{"msg":"PASSED [sig-apps] DisruptionController should create a PodDisruptionBudget [Conformance]","total":356,"completed":326,"skipped":6107,"failed":0} +SSSSSSSSS +------------------------------ +[sig-storage] Downward API volume + should provide podname only [NodeConformance] [Conformance] + test/e2e/framework/framework.go:652 +[BeforeEach] [sig-storage] Downward API volume + test/e2e/framework/framework.go:187 +STEP: Creating a kubernetes client +Sep 7 09:09:55.194: INFO: >>> kubeConfig: /tmp/kubeconfig-1156948534 +STEP: Building a namespace api object, basename downward-api +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[BeforeEach] [sig-storage] Downward API volume + test/e2e/common/storage/downwardapi_volume.go:43 +[It] should provide podname only [NodeConformance] [Conformance] + test/e2e/framework/framework.go:652 +STEP: Creating a pod to test downward API volume plugin +Sep 7 09:09:55.248: INFO: Waiting up to 5m0s for pod "downwardapi-volume-3aa8b90c-6f45-479a-b3de-32cbe34ebad1" in namespace "downward-api-7901" to be "Succeeded or Failed" +Sep 7 09:09:55.278: INFO: Pod "downwardapi-volume-3aa8b90c-6f45-479a-b3de-32cbe34ebad1": Phase="Pending", Reason="", readiness=false. Elapsed: 29.318953ms +Sep 7 09:09:57.287: INFO: Pod "downwardapi-volume-3aa8b90c-6f45-479a-b3de-32cbe34ebad1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.038757534s +Sep 7 09:09:59.300: INFO: Pod "downwardapi-volume-3aa8b90c-6f45-479a-b3de-32cbe34ebad1": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.051670459s +STEP: Saw pod success +Sep 7 09:09:59.300: INFO: Pod "downwardapi-volume-3aa8b90c-6f45-479a-b3de-32cbe34ebad1" satisfied condition "Succeeded or Failed" +Sep 7 09:09:59.303: INFO: Trying to get logs from node 172.31.51.96 pod downwardapi-volume-3aa8b90c-6f45-479a-b3de-32cbe34ebad1 container client-container: +STEP: delete the pod +Sep 7 09:09:59.329: INFO: Waiting for pod downwardapi-volume-3aa8b90c-6f45-479a-b3de-32cbe34ebad1 to disappear +Sep 7 09:09:59.333: INFO: Pod downwardapi-volume-3aa8b90c-6f45-479a-b3de-32cbe34ebad1 no longer exists +[AfterEach] [sig-storage] Downward API volume + test/e2e/framework/framework.go:188 +Sep 7 09:09:59.333: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "downward-api-7901" for this suite. +•{"msg":"PASSED [sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance]","total":356,"completed":327,"skipped":6116,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-node] KubeletManagedEtcHosts + should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/framework/framework.go:652 +[BeforeEach] [sig-node] KubeletManagedEtcHosts + test/e2e/framework/framework.go:187 +STEP: Creating a kubernetes client +Sep 7 09:09:59.339: INFO: >>> kubeConfig: /tmp/kubeconfig-1156948534 +STEP: Building a namespace api object, basename e2e-kubelet-etc-hosts +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[It] should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/framework/framework.go:652 +STEP: Setting up the test +STEP: Creating hostNetwork=false pod +Sep 7 09:09:59.440: INFO: The status of Pod test-pod is Pending, waiting for it to be Running (with Ready = true) +Sep 7 09:10:01.475: INFO: The status of Pod test-pod is Pending, waiting for it to be 
Running (with Ready = true) +Sep 7 09:10:03.454: INFO: The status of Pod test-pod is Running (Ready = true) +STEP: Creating hostNetwork=true pod +Sep 7 09:10:03.491: INFO: The status of Pod test-host-network-pod is Pending, waiting for it to be Running (with Ready = true) +Sep 7 09:10:05.505: INFO: The status of Pod test-host-network-pod is Running (Ready = true) +STEP: Running the test +STEP: Verifying /etc/hosts of container is kubelet-managed for pod with hostNetwork=false +Sep 7 09:10:05.509: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-7810 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} +Sep 7 09:10:05.509: INFO: >>> kubeConfig: /tmp/kubeconfig-1156948534 +Sep 7 09:10:05.510: INFO: ExecWithOptions: Clientset creation +Sep 7 09:10:05.510: INFO: ExecWithOptions: execute(POST https://10.68.0.1:443/api/v1/namespaces/e2e-kubelet-etc-hosts-7810/pods/test-pod/exec?command=cat&command=%2Fetc%2Fhosts&container=busybox-1&container=busybox-1&stderr=true&stdout=true) +Sep 7 09:10:05.608: INFO: Exec stderr: "" +Sep 7 09:10:05.608: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-7810 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} +Sep 7 09:10:05.608: INFO: >>> kubeConfig: /tmp/kubeconfig-1156948534 +Sep 7 09:10:05.609: INFO: ExecWithOptions: Clientset creation +Sep 7 09:10:05.609: INFO: ExecWithOptions: execute(POST https://10.68.0.1:443/api/v1/namespaces/e2e-kubelet-etc-hosts-7810/pods/test-pod/exec?command=cat&command=%2Fetc%2Fhosts-original&container=busybox-1&container=busybox-1&stderr=true&stdout=true) +Sep 7 09:10:05.705: INFO: Exec stderr: "" +Sep 7 09:10:05.705: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-7810 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true 
PreserveWhitespace:false Quiet:false} +Sep 7 09:10:05.705: INFO: >>> kubeConfig: /tmp/kubeconfig-1156948534 +Sep 7 09:10:05.706: INFO: ExecWithOptions: Clientset creation +Sep 7 09:10:05.706: INFO: ExecWithOptions: execute(POST https://10.68.0.1:443/api/v1/namespaces/e2e-kubelet-etc-hosts-7810/pods/test-pod/exec?command=cat&command=%2Fetc%2Fhosts&container=busybox-2&container=busybox-2&stderr=true&stdout=true) +Sep 7 09:10:05.782: INFO: Exec stderr: "" +Sep 7 09:10:05.782: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-7810 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} +Sep 7 09:10:05.782: INFO: >>> kubeConfig: /tmp/kubeconfig-1156948534 +Sep 7 09:10:05.782: INFO: ExecWithOptions: Clientset creation +Sep 7 09:10:05.782: INFO: ExecWithOptions: execute(POST https://10.68.0.1:443/api/v1/namespaces/e2e-kubelet-etc-hosts-7810/pods/test-pod/exec?command=cat&command=%2Fetc%2Fhosts-original&container=busybox-2&container=busybox-2&stderr=true&stdout=true) +Sep 7 09:10:05.853: INFO: Exec stderr: "" +STEP: Verifying /etc/hosts of container is not kubelet-managed since container specifies /etc/hosts mount +Sep 7 09:10:05.853: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-7810 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} +Sep 7 09:10:05.853: INFO: >>> kubeConfig: /tmp/kubeconfig-1156948534 +Sep 7 09:10:05.854: INFO: ExecWithOptions: Clientset creation +Sep 7 09:10:05.854: INFO: ExecWithOptions: execute(POST https://10.68.0.1:443/api/v1/namespaces/e2e-kubelet-etc-hosts-7810/pods/test-pod/exec?command=cat&command=%2Fetc%2Fhosts&container=busybox-3&container=busybox-3&stderr=true&stdout=true) +Sep 7 09:10:05.935: INFO: Exec stderr: "" +Sep 7 09:10:05.935: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-7810 
PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} +Sep 7 09:10:05.935: INFO: >>> kubeConfig: /tmp/kubeconfig-1156948534 +Sep 7 09:10:05.936: INFO: ExecWithOptions: Clientset creation +Sep 7 09:10:05.936: INFO: ExecWithOptions: execute(POST https://10.68.0.1:443/api/v1/namespaces/e2e-kubelet-etc-hosts-7810/pods/test-pod/exec?command=cat&command=%2Fetc%2Fhosts-original&container=busybox-3&container=busybox-3&stderr=true&stdout=true) +Sep 7 09:10:06.016: INFO: Exec stderr: "" +STEP: Verifying /etc/hosts content of container is not kubelet-managed for pod with hostNetwork=true +Sep 7 09:10:06.016: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-7810 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} +Sep 7 09:10:06.016: INFO: >>> kubeConfig: /tmp/kubeconfig-1156948534 +Sep 7 09:10:06.017: INFO: ExecWithOptions: Clientset creation +Sep 7 09:10:06.017: INFO: ExecWithOptions: execute(POST https://10.68.0.1:443/api/v1/namespaces/e2e-kubelet-etc-hosts-7810/pods/test-host-network-pod/exec?command=cat&command=%2Fetc%2Fhosts&container=busybox-1&container=busybox-1&stderr=true&stdout=true) +Sep 7 09:10:06.092: INFO: Exec stderr: "" +Sep 7 09:10:06.092: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-7810 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} +Sep 7 09:10:06.092: INFO: >>> kubeConfig: /tmp/kubeconfig-1156948534 +Sep 7 09:10:06.092: INFO: ExecWithOptions: Clientset creation +Sep 7 09:10:06.092: INFO: ExecWithOptions: execute(POST https://10.68.0.1:443/api/v1/namespaces/e2e-kubelet-etc-hosts-7810/pods/test-host-network-pod/exec?command=cat&command=%2Fetc%2Fhosts-original&container=busybox-1&container=busybox-1&stderr=true&stdout=true) +Sep 7 09:10:06.147: 
INFO: Exec stderr: "" +Sep 7 09:10:06.147: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-7810 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} +Sep 7 09:10:06.147: INFO: >>> kubeConfig: /tmp/kubeconfig-1156948534 +Sep 7 09:10:06.148: INFO: ExecWithOptions: Clientset creation +Sep 7 09:10:06.148: INFO: ExecWithOptions: execute(POST https://10.68.0.1:443/api/v1/namespaces/e2e-kubelet-etc-hosts-7810/pods/test-host-network-pod/exec?command=cat&command=%2Fetc%2Fhosts&container=busybox-2&container=busybox-2&stderr=true&stdout=true) +Sep 7 09:10:06.208: INFO: Exec stderr: "" +Sep 7 09:10:06.208: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-7810 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} +Sep 7 09:10:06.208: INFO: >>> kubeConfig: /tmp/kubeconfig-1156948534 +Sep 7 09:10:06.208: INFO: ExecWithOptions: Clientset creation +Sep 7 09:10:06.208: INFO: ExecWithOptions: execute(POST https://10.68.0.1:443/api/v1/namespaces/e2e-kubelet-etc-hosts-7810/pods/test-host-network-pod/exec?command=cat&command=%2Fetc%2Fhosts-original&container=busybox-2&container=busybox-2&stderr=true&stdout=true) +Sep 7 09:10:06.261: INFO: Exec stderr: "" +[AfterEach] [sig-node] KubeletManagedEtcHosts + test/e2e/framework/framework.go:188 +Sep 7 09:10:06.261: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "e2e-kubelet-etc-hosts-7810" for this suite. 
+ +• [SLOW TEST:6.944 seconds] +[sig-node] KubeletManagedEtcHosts +test/e2e/common/node/framework.go:23 + should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/framework/framework.go:652 +------------------------------ +{"msg":"PASSED [sig-node] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]","total":356,"completed":328,"skipped":6141,"failed":0} +SSSSSSSSSSSSS +------------------------------ +[sig-node] PodTemplates + should replace a pod template [Conformance] + test/e2e/framework/framework.go:652 +[BeforeEach] [sig-node] PodTemplates + test/e2e/framework/framework.go:187 +STEP: Creating a kubernetes client +Sep 7 09:10:06.284: INFO: >>> kubeConfig: /tmp/kubeconfig-1156948534 +STEP: Building a namespace api object, basename podtemplate +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[It] should replace a pod template [Conformance] + test/e2e/framework/framework.go:652 +STEP: Create a pod template +STEP: Replace a pod template +Sep 7 09:10:06.329: INFO: Found updated podtemplate annotation: "true" + +[AfterEach] [sig-node] PodTemplates + test/e2e/framework/framework.go:188 +Sep 7 09:10:06.329: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "podtemplate-3960" for this suite. 
+•{"msg":"PASSED [sig-node] PodTemplates should replace a pod template [Conformance]","total":356,"completed":329,"skipped":6154,"failed":0} +SSSSSS +------------------------------ +[sig-scheduling] SchedulerPreemption [Serial] + validates basic preemption works [Conformance] + test/e2e/framework/framework.go:652 +[BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] + test/e2e/framework/framework.go:187 +STEP: Creating a kubernetes client +Sep 7 09:10:06.336: INFO: >>> kubeConfig: /tmp/kubeconfig-1156948534 +STEP: Building a namespace api object, basename sched-preemption +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] + test/e2e/scheduling/preemption.go:92 +Sep 7 09:10:06.367: INFO: Waiting up to 1m0s for all nodes to be ready +Sep 7 09:11:06.388: INFO: Waiting for terminating namespaces to be deleted... +[It] validates basic preemption works [Conformance] + test/e2e/framework/framework.go:652 +STEP: Create pods that use 4/5 of node resources. +Sep 7 09:11:06.426: INFO: Created pod: pod0-0-sched-preemption-low-priority +Sep 7 09:11:06.444: INFO: Created pod: pod0-1-sched-preemption-medium-priority +Sep 7 09:11:06.479: INFO: Created pod: pod1-0-sched-preemption-medium-priority +Sep 7 09:11:06.502: INFO: Created pod: pod1-1-sched-preemption-medium-priority +STEP: Wait for pods to be scheduled. +STEP: Run a high priority pod that has same requirements as that of lower priority pod +[AfterEach] [sig-scheduling] SchedulerPreemption [Serial] + test/e2e/framework/framework.go:188 +Sep 7 09:11:20.608: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "sched-preemption-380" for this suite. 
+[AfterEach] [sig-scheduling] SchedulerPreemption [Serial] + test/e2e/scheduling/preemption.go:80 + +• [SLOW TEST:74.316 seconds] +[sig-scheduling] SchedulerPreemption [Serial] +test/e2e/scheduling/framework.go:40 + validates basic preemption works [Conformance] + test/e2e/framework/framework.go:652 +------------------------------ +{"msg":"PASSED [sig-scheduling] SchedulerPreemption [Serial] validates basic preemption works [Conformance]","total":356,"completed":330,"skipped":6160,"failed":0} +SSSSSSSS +------------------------------ +[sig-node] Probing container + should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] + test/e2e/framework/framework.go:652 +[BeforeEach] [sig-node] Probing container + test/e2e/framework/framework.go:187 +STEP: Creating a kubernetes client +Sep 7 09:11:20.652: INFO: >>> kubeConfig: /tmp/kubeconfig-1156948534 +STEP: Building a namespace api object, basename container-probe +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[BeforeEach] [sig-node] Probing container + test/e2e/common/node/container_probe.go:61 +[It] should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] + test/e2e/framework/framework.go:652 +STEP: Creating pod test-webserver-ffc912ca-2701-4efd-b864-013866919771 in namespace container-probe-8723 +Sep 7 09:11:22.729: INFO: Started pod test-webserver-ffc912ca-2701-4efd-b864-013866919771 in namespace container-probe-8723 +STEP: checking the pod's current state and verifying that restartCount is present +Sep 7 09:11:22.734: INFO: Initial restart count of pod test-webserver-ffc912ca-2701-4efd-b864-013866919771 is 0 +STEP: deleting the pod +[AfterEach] [sig-node] Probing container + test/e2e/framework/framework.go:188 +Sep 7 09:15:24.215: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "container-probe-8723" for this 
suite. + +• [SLOW TEST:243.582 seconds] +[sig-node] Probing container +test/e2e/common/node/framework.go:23 + should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] + test/e2e/framework/framework.go:652 +------------------------------ +{"msg":"PASSED [sig-node] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":356,"completed":331,"skipped":6168,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-network] Services + should be able to change the type from NodePort to ExternalName [Conformance] + test/e2e/framework/framework.go:652 +[BeforeEach] [sig-network] Services + test/e2e/framework/framework.go:187 +STEP: Creating a kubernetes client +Sep 7 09:15:24.235: INFO: >>> kubeConfig: /tmp/kubeconfig-1156948534 +STEP: Building a namespace api object, basename services +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[BeforeEach] [sig-network] Services + test/e2e/network/service.go:758 +[It] should be able to change the type from NodePort to ExternalName [Conformance] + test/e2e/framework/framework.go:652 +STEP: creating a service nodeport-service with the type=NodePort in namespace services-6933 +STEP: Creating active service to test reachability when its FQDN is referred as externalName for another service +STEP: creating service externalsvc in namespace services-6933 +STEP: creating replication controller externalsvc in namespace services-6933 +I0907 09:15:24.349647 19 runners.go:193] Created replication controller with name: externalsvc, namespace: services-6933, replica count: 2 +I0907 09:15:27.401565 19 runners.go:193] externalsvc Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady +STEP: changing the NodePort service to type=ExternalName 
+Sep 7 09:15:27.426: INFO: Creating new exec pod +Sep 7 09:15:29.447: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1156948534 --namespace=services-6933 exec execpodd4k8s -- /bin/sh -x -c nslookup nodeport-service.services-6933.svc.cluster.local' +Sep 7 09:15:29.661: INFO: stderr: "+ nslookup nodeport-service.services-6933.svc.cluster.local\n" +Sep 7 09:15:29.661: INFO: stdout: "Server:\t\t169.254.20.10\nAddress:\t169.254.20.10#53\n\nnodeport-service.services-6933.svc.cluster.local\tcanonical name = externalsvc.services-6933.svc.cluster.local.\nName:\texternalsvc.services-6933.svc.cluster.local\nAddress: 10.68.234.28\n\n" +STEP: deleting ReplicationController externalsvc in namespace services-6933, will wait for the garbage collector to delete the pods +Sep 7 09:15:29.724: INFO: Deleting ReplicationController externalsvc took: 5.597439ms +Sep 7 09:15:29.824: INFO: Terminating ReplicationController externalsvc pods took: 100.798227ms +Sep 7 09:15:32.251: INFO: Cleaning up the NodePort to ExternalName test service +[AfterEach] [sig-network] Services + test/e2e/framework/framework.go:188 +Sep 7 09:15:32.261: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "services-6933" for this suite. 
+[AfterEach] [sig-network] Services + test/e2e/network/service.go:762 + +• [SLOW TEST:8.048 seconds] +[sig-network] Services +test/e2e/network/common/framework.go:23 + should be able to change the type from NodePort to ExternalName [Conformance] + test/e2e/framework/framework.go:652 +------------------------------ +{"msg":"PASSED [sig-network] Services should be able to change the type from NodePort to ExternalName [Conformance]","total":356,"completed":332,"skipped":6227,"failed":0} +SSSSSSSS +------------------------------ +[sig-cli] Kubectl client Update Demo + should scale a replication controller [Conformance] + test/e2e/framework/framework.go:652 +[BeforeEach] [sig-cli] Kubectl client + test/e2e/framework/framework.go:187 +STEP: Creating a kubernetes client +Sep 7 09:15:32.283: INFO: >>> kubeConfig: /tmp/kubeconfig-1156948534 +STEP: Building a namespace api object, basename kubectl +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[BeforeEach] [sig-cli] Kubectl client + test/e2e/kubectl/kubectl.go:245 +[BeforeEach] Update Demo + test/e2e/kubectl/kubectl.go:297 +[It] should scale a replication controller [Conformance] + test/e2e/framework/framework.go:652 +STEP: creating a replication controller +Sep 7 09:15:32.328: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1156948534 --namespace=kubectl-6114 create -f -' +Sep 7 09:15:32.697: INFO: stderr: "" +Sep 7 09:15:32.697: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" +STEP: waiting for all containers in name=update-demo pods to come up. 
+Sep 7 09:15:32.697: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1156948534 --namespace=kubectl-6114 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' +Sep 7 09:15:32.920: INFO: stderr: "" +Sep 7 09:15:32.920: INFO: stdout: "update-demo-nautilus-5cct2 update-demo-nautilus-gttfj " +Sep 7 09:15:32.920: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1156948534 --namespace=kubectl-6114 get pods update-demo-nautilus-5cct2 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' +Sep 7 09:15:33.072: INFO: stderr: "" +Sep 7 09:15:33.072: INFO: stdout: "" +Sep 7 09:15:33.072: INFO: update-demo-nautilus-5cct2 is created but not running +Sep 7 09:15:38.082: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1156948534 --namespace=kubectl-6114 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' +Sep 7 09:15:38.375: INFO: stderr: "" +Sep 7 09:15:38.375: INFO: stdout: "update-demo-nautilus-5cct2 update-demo-nautilus-gttfj " +Sep 7 09:15:38.375: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1156948534 --namespace=kubectl-6114 get pods update-demo-nautilus-5cct2 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' +Sep 7 09:15:38.531: INFO: stderr: "" +Sep 7 09:15:38.531: INFO: stdout: "true" +Sep 7 09:15:38.531: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1156948534 --namespace=kubectl-6114 get pods update-demo-nautilus-5cct2 -o template --template={{if (exists . 
"spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}}' +Sep 7 09:15:38.638: INFO: stderr: "" +Sep 7 09:15:38.639: INFO: stdout: "k8s.gcr.io/e2e-test-images/nautilus:1.5" +Sep 7 09:15:38.639: INFO: validating pod update-demo-nautilus-5cct2 +Sep 7 09:15:38.643: INFO: got data: { + "image": "nautilus.jpg" +} + +Sep 7 09:15:38.643: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . +Sep 7 09:15:38.643: INFO: update-demo-nautilus-5cct2 is verified up and running +Sep 7 09:15:38.643: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1156948534 --namespace=kubectl-6114 get pods update-demo-nautilus-gttfj -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' +Sep 7 09:15:38.751: INFO: stderr: "" +Sep 7 09:15:38.751: INFO: stdout: "true" +Sep 7 09:15:38.751: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1156948534 --namespace=kubectl-6114 get pods update-demo-nautilus-gttfj -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}}' +Sep 7 09:15:38.861: INFO: stderr: "" +Sep 7 09:15:38.861: INFO: stdout: "k8s.gcr.io/e2e-test-images/nautilus:1.5" +Sep 7 09:15:38.861: INFO: validating pod update-demo-nautilus-gttfj +Sep 7 09:15:38.866: INFO: got data: { + "image": "nautilus.jpg" +} + +Sep 7 09:15:38.866: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . 
+Sep 7 09:15:38.866: INFO: update-demo-nautilus-gttfj is verified up and running +STEP: scaling down the replication controller +Sep 7 09:15:38.868: INFO: scanned /root for discovery docs: +Sep 7 09:15:38.868: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1156948534 --namespace=kubectl-6114 scale rc update-demo-nautilus --replicas=1 --timeout=5m' +Sep 7 09:15:40.030: INFO: stderr: "" +Sep 7 09:15:40.030: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n" +STEP: waiting for all containers in name=update-demo pods to come up. +Sep 7 09:15:40.030: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1156948534 --namespace=kubectl-6114 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' +Sep 7 09:15:40.150: INFO: stderr: "" +Sep 7 09:15:40.150: INFO: stdout: "update-demo-nautilus-5cct2 update-demo-nautilus-gttfj " +STEP: Replicas for name=update-demo: expected=1 actual=2 +Sep 7 09:15:45.151: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1156948534 --namespace=kubectl-6114 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' +Sep 7 09:15:45.276: INFO: stderr: "" +Sep 7 09:15:45.276: INFO: stdout: "update-demo-nautilus-gttfj " +Sep 7 09:15:45.276: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1156948534 --namespace=kubectl-6114 get pods update-demo-nautilus-gttfj -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' +Sep 7 09:15:45.391: INFO: stderr: "" +Sep 7 09:15:45.391: INFO: stdout: "true" +Sep 7 09:15:45.391: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1156948534 --namespace=kubectl-6114 get pods update-demo-nautilus-gttfj -o template --template={{if (exists . 
"spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}}' +Sep 7 09:15:45.503: INFO: stderr: "" +Sep 7 09:15:45.503: INFO: stdout: "k8s.gcr.io/e2e-test-images/nautilus:1.5" +Sep 7 09:15:45.503: INFO: validating pod update-demo-nautilus-gttfj +Sep 7 09:15:45.508: INFO: got data: { + "image": "nautilus.jpg" +} + +Sep 7 09:15:45.508: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . +Sep 7 09:15:45.508: INFO: update-demo-nautilus-gttfj is verified up and running +STEP: scaling up the replication controller +Sep 7 09:15:45.510: INFO: scanned /root for discovery docs: +Sep 7 09:15:45.510: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1156948534 --namespace=kubectl-6114 scale rc update-demo-nautilus --replicas=2 --timeout=5m' +Sep 7 09:15:45.708: INFO: stderr: "" +Sep 7 09:15:45.708: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n" +STEP: waiting for all containers in name=update-demo pods to come up. +Sep 7 09:15:45.708: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1156948534 --namespace=kubectl-6114 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' +Sep 7 09:15:45.823: INFO: stderr: "" +Sep 7 09:15:45.823: INFO: stdout: "update-demo-nautilus-gttfj update-demo-nautilus-nptfl " +Sep 7 09:15:45.823: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1156948534 --namespace=kubectl-6114 get pods update-demo-nautilus-gttfj -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}}' +Sep 7 09:15:45.951: INFO: stderr: "" +Sep 7 09:15:45.951: INFO: stdout: "true" +Sep 7 09:15:45.951: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1156948534 --namespace=kubectl-6114 get pods update-demo-nautilus-gttfj -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}}' +Sep 7 09:15:46.102: INFO: stderr: "" +Sep 7 09:15:46.102: INFO: stdout: "k8s.gcr.io/e2e-test-images/nautilus:1.5" +Sep 7 09:15:46.102: INFO: validating pod update-demo-nautilus-gttfj +Sep 7 09:15:46.116: INFO: got data: { + "image": "nautilus.jpg" +} + +Sep 7 09:15:46.116: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . +Sep 7 09:15:46.116: INFO: update-demo-nautilus-gttfj is verified up and running +Sep 7 09:15:46.116: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1156948534 --namespace=kubectl-6114 get pods update-demo-nautilus-nptfl -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' +Sep 7 09:15:46.265: INFO: stderr: "" +Sep 7 09:15:46.265: INFO: stdout: "" +Sep 7 09:15:46.265: INFO: update-demo-nautilus-nptfl is created but not running +Sep 7 09:15:51.265: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1156948534 --namespace=kubectl-6114 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' +Sep 7 09:15:51.375: INFO: stderr: "" +Sep 7 09:15:51.375: INFO: stdout: "update-demo-nautilus-gttfj update-demo-nautilus-nptfl " +Sep 7 09:15:51.375: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1156948534 --namespace=kubectl-6114 get pods update-demo-nautilus-gttfj -o template --template={{if (exists . 
"status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' +Sep 7 09:15:51.472: INFO: stderr: "" +Sep 7 09:15:51.472: INFO: stdout: "true" +Sep 7 09:15:51.472: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1156948534 --namespace=kubectl-6114 get pods update-demo-nautilus-gttfj -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}}' +Sep 7 09:15:51.572: INFO: stderr: "" +Sep 7 09:15:51.572: INFO: stdout: "k8s.gcr.io/e2e-test-images/nautilus:1.5" +Sep 7 09:15:51.572: INFO: validating pod update-demo-nautilus-gttfj +Sep 7 09:15:51.576: INFO: got data: { + "image": "nautilus.jpg" +} + +Sep 7 09:15:51.576: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . +Sep 7 09:15:51.576: INFO: update-demo-nautilus-gttfj is verified up and running +Sep 7 09:15:51.576: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1156948534 --namespace=kubectl-6114 get pods update-demo-nautilus-nptfl -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' +Sep 7 09:15:51.675: INFO: stderr: "" +Sep 7 09:15:51.675: INFO: stdout: "true" +Sep 7 09:15:51.675: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1156948534 --namespace=kubectl-6114 get pods update-demo-nautilus-nptfl -o template --template={{if (exists . 
"spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}}' +Sep 7 09:15:51.768: INFO: stderr: "" +Sep 7 09:15:51.768: INFO: stdout: "k8s.gcr.io/e2e-test-images/nautilus:1.5" +Sep 7 09:15:51.768: INFO: validating pod update-demo-nautilus-nptfl +Sep 7 09:15:51.779: INFO: got data: { + "image": "nautilus.jpg" +} + +Sep 7 09:15:51.779: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . +Sep 7 09:15:51.779: INFO: update-demo-nautilus-nptfl is verified up and running +STEP: using delete to clean up resources +Sep 7 09:15:51.779: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1156948534 --namespace=kubectl-6114 delete --grace-period=0 --force -f -' +Sep 7 09:15:51.891: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" +Sep 7 09:15:51.891: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n" +Sep 7 09:15:51.891: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1156948534 --namespace=kubectl-6114 get rc,svc -l name=update-demo --no-headers' +Sep 7 09:15:52.056: INFO: stderr: "No resources found in kubectl-6114 namespace.\n" +Sep 7 09:15:52.057: INFO: stdout: "" +Sep 7 09:15:52.057: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1156948534 --namespace=kubectl-6114 get pods -l name=update-demo -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' +Sep 7 09:15:52.171: INFO: stderr: "" +Sep 7 09:15:52.171: INFO: stdout: "" +[AfterEach] [sig-cli] Kubectl client + test/e2e/framework/framework.go:188 +Sep 7 09:15:52.171: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "kubectl-6114" for this suite. 
+ +• [SLOW TEST:19.905 seconds] +[sig-cli] Kubectl client +test/e2e/kubectl/framework.go:23 + Update Demo + test/e2e/kubectl/kubectl.go:295 + should scale a replication controller [Conformance] + test/e2e/framework/framework.go:652 +------------------------------ +{"msg":"PASSED [sig-cli] Kubectl client Update Demo should scale a replication controller [Conformance]","total":356,"completed":333,"skipped":6235,"failed":0} +SSSSSSSSSSSSS +------------------------------ +[sig-storage] ConfigMap + should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] + test/e2e/framework/framework.go:652 +[BeforeEach] [sig-storage] ConfigMap + test/e2e/framework/framework.go:187 +STEP: Creating a kubernetes client +Sep 7 09:15:52.188: INFO: >>> kubeConfig: /tmp/kubeconfig-1156948534 +STEP: Building a namespace api object, basename configmap +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] + test/e2e/framework/framework.go:652 +STEP: Creating configMap with name configmap-test-volume-9d3022f3-b401-4360-b2aa-a0acf78df443 +STEP: Creating a pod to test consume configMaps +Sep 7 09:15:52.253: INFO: Waiting up to 5m0s for pod "pod-configmaps-3b002473-9816-4022-b49b-f154e585ad96" in namespace "configmap-4337" to be "Succeeded or Failed" +Sep 7 09:15:52.293: INFO: Pod "pod-configmaps-3b002473-9816-4022-b49b-f154e585ad96": Phase="Pending", Reason="", readiness=false. Elapsed: 40.51907ms +Sep 7 09:15:54.309: INFO: Pod "pod-configmaps-3b002473-9816-4022-b49b-f154e585ad96": Phase="Pending", Reason="", readiness=false. Elapsed: 2.056606637s +Sep 7 09:15:56.313: INFO: Pod "pod-configmaps-3b002473-9816-4022-b49b-f154e585ad96": Phase="Pending", Reason="", readiness=false. 
Elapsed: 4.060378768s +Sep 7 09:15:58.321: INFO: Pod "pod-configmaps-3b002473-9816-4022-b49b-f154e585ad96": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.068578706s +STEP: Saw pod success +Sep 7 09:15:58.321: INFO: Pod "pod-configmaps-3b002473-9816-4022-b49b-f154e585ad96" satisfied condition "Succeeded or Failed" +Sep 7 09:15:58.328: INFO: Trying to get logs from node 172.31.51.96 pod pod-configmaps-3b002473-9816-4022-b49b-f154e585ad96 container configmap-volume-test: +STEP: delete the pod +Sep 7 09:15:58.369: INFO: Waiting for pod pod-configmaps-3b002473-9816-4022-b49b-f154e585ad96 to disappear +Sep 7 09:15:58.372: INFO: Pod pod-configmaps-3b002473-9816-4022-b49b-f154e585ad96 no longer exists +[AfterEach] [sig-storage] ConfigMap + test/e2e/framework/framework.go:188 +Sep 7 09:15:58.372: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "configmap-4337" for this suite. + +• [SLOW TEST:6.198 seconds] +[sig-storage] ConfigMap +test/e2e/common/storage/framework.go:23 + should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] + test/e2e/framework/framework.go:652 +------------------------------ +{"msg":"PASSED [sig-storage] ConfigMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","total":356,"completed":334,"skipped":6248,"failed":0} +[sig-apps] ReplicaSet + should validate Replicaset Status endpoints [Conformance] + test/e2e/framework/framework.go:652 +[BeforeEach] [sig-apps] ReplicaSet + test/e2e/framework/framework.go:187 +STEP: Creating a kubernetes client +Sep 7 09:15:58.386: INFO: >>> kubeConfig: /tmp/kubeconfig-1156948534 +STEP: Building a namespace api object, basename replicaset +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[It] should validate Replicaset Status endpoints [Conformance] + test/e2e/framework/framework.go:652 +STEP: Create a 
Replicaset +STEP: Verify that the required pods have come up. +Sep 7 09:15:58.451: INFO: Pod name sample-pod: Found 0 pods out of 1 +Sep 7 09:16:03.472: INFO: Pod name sample-pod: Found 1 pods out of 1 +STEP: ensuring each pod is running +STEP: Getting /status +Sep 7 09:16:03.489: INFO: Replicaset test-rs has Conditions: [] +STEP: updating the Replicaset Status +Sep 7 09:16:03.504: INFO: updatedStatus.Conditions: []v1.ReplicaSetCondition{v1.ReplicaSetCondition{Type:"StatusUpdate", Status:"True", LastTransitionTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Reason:"E2E", Message:"Set from e2e test"}} +STEP: watching for the ReplicaSet status to be updated +Sep 7 09:16:03.505: INFO: Observed &ReplicaSet event: ADDED +Sep 7 09:16:03.506: INFO: Observed &ReplicaSet event: MODIFIED +Sep 7 09:16:03.506: INFO: Observed &ReplicaSet event: MODIFIED +Sep 7 09:16:03.511: INFO: Observed &ReplicaSet event: MODIFIED +Sep 7 09:16:03.511: INFO: Found replicaset test-rs in namespace replicaset-2304 with labels: map[name:sample-pod pod:httpd] annotations: map[] & Conditions: [{StatusUpdate True 0001-01-01 00:00:00 +0000 UTC E2E Set from e2e test}] +Sep 7 09:16:03.511: INFO: Replicaset test-rs has an updated status +STEP: patching the Replicaset Status +Sep 7 09:16:03.511: INFO: Patch payload: {"status":{"conditions":[{"type":"StatusPatched","status":"True"}]}} +Sep 7 09:16:03.519: INFO: Patched status conditions: []v1.ReplicaSetCondition{v1.ReplicaSetCondition{Type:"StatusPatched", Status:"True", LastTransitionTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Reason:"", Message:""}} +STEP: watching for the Replicaset status to be patched +Sep 7 09:16:03.524: INFO: Observed &ReplicaSet event: ADDED +Sep 7 09:16:03.524: INFO: Observed &ReplicaSet event: MODIFIED +Sep 7 09:16:03.524: INFO: Observed &ReplicaSet event: MODIFIED +Sep 7 09:16:03.524: INFO: Observed &ReplicaSet event: MODIFIED +Sep 7 09:16:03.524: INFO: Observed replicaset test-rs in namespace 
replicaset-2304 with annotations: map[] & Conditions: {StatusUpdate True 0001-01-01 00:00:00 +0000 UTC E2E Set from e2e test} +Sep 7 09:16:03.524: INFO: Observed &ReplicaSet event: MODIFIED +Sep 7 09:16:03.524: INFO: Found replicaset test-rs in namespace replicaset-2304 with labels: map[name:sample-pod pod:httpd] annotations: map[] & Conditions: {StatusPatched True 0001-01-01 00:00:00 +0000 UTC } +Sep 7 09:16:03.524: INFO: Replicaset test-rs has a patched status +[AfterEach] [sig-apps] ReplicaSet + test/e2e/framework/framework.go:188 +Sep 7 09:16:03.524: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "replicaset-2304" for this suite. + +• [SLOW TEST:5.172 seconds] +[sig-apps] ReplicaSet +test/e2e/apps/framework.go:23 + should validate Replicaset Status endpoints [Conformance] + test/e2e/framework/framework.go:652 +------------------------------ +{"msg":"PASSED [sig-apps] ReplicaSet should validate Replicaset Status endpoints [Conformance]","total":356,"completed":335,"skipped":6248,"failed":0} +SSSSSS +------------------------------ +[sig-storage] Projected configMap + optional updates should be reflected in volume [NodeConformance] [Conformance] + test/e2e/framework/framework.go:652 +[BeforeEach] [sig-storage] Projected configMap + test/e2e/framework/framework.go:187 +STEP: Creating a kubernetes client +Sep 7 09:16:03.558: INFO: >>> kubeConfig: /tmp/kubeconfig-1156948534 +STEP: Building a namespace api object, basename projected +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[It] optional updates should be reflected in volume [NodeConformance] [Conformance] + test/e2e/framework/framework.go:652 +STEP: Creating configMap with name cm-test-opt-del-6517c6cb-f6d7-4757-951f-9b0773564c0a +STEP: Creating configMap with name cm-test-opt-upd-c7c23b4c-8849-49d3-964d-c429721cec2c +STEP: Creating the pod +Sep 7 09:16:03.647: INFO: The status 
of Pod pod-projected-configmaps-91c15af4-d6bb-4320-985a-de1af8e40628 is Pending, waiting for it to be Running (with Ready = true) +Sep 7 09:16:05.660: INFO: The status of Pod pod-projected-configmaps-91c15af4-d6bb-4320-985a-de1af8e40628 is Running (Ready = true) +STEP: Deleting configmap cm-test-opt-del-6517c6cb-f6d7-4757-951f-9b0773564c0a +STEP: Updating configmap cm-test-opt-upd-c7c23b4c-8849-49d3-964d-c429721cec2c +STEP: Creating configMap with name cm-test-opt-create-8c107466-5468-48c0-b0b5-19ff29dc79f2 +STEP: waiting to observe update in volume +[AfterEach] [sig-storage] Projected configMap + test/e2e/framework/framework.go:188 +Sep 7 09:16:07.738: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "projected-1098" for this suite. +•{"msg":"PASSED [sig-storage] Projected configMap optional updates should be reflected in volume [NodeConformance] [Conformance]","total":356,"completed":336,"skipped":6254,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-apps] ReplicaSet + should serve a basic image on each replica with a public image [Conformance] + test/e2e/framework/framework.go:652 +[BeforeEach] [sig-apps] ReplicaSet + test/e2e/framework/framework.go:187 +STEP: Creating a kubernetes client +Sep 7 09:16:07.759: INFO: >>> kubeConfig: /tmp/kubeconfig-1156948534 +STEP: Building a namespace api object, basename replicaset +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[It] should serve a basic image on each replica with a public image [Conformance] + test/e2e/framework/framework.go:652 +Sep 7 09:16:07.802: INFO: Creating ReplicaSet my-hostname-basic-cce98c83-f87e-407b-b5cf-cc5190dc75cf +Sep 7 09:16:07.812: INFO: Pod name my-hostname-basic-cce98c83-f87e-407b-b5cf-cc5190dc75cf: Found 0 pods out of 1 +Sep 7 09:16:12.831: INFO: Pod name 
my-hostname-basic-cce98c83-f87e-407b-b5cf-cc5190dc75cf: Found 1 pods out of 1 +Sep 7 09:16:12.831: INFO: Ensuring a pod for ReplicaSet "my-hostname-basic-cce98c83-f87e-407b-b5cf-cc5190dc75cf" is running +Sep 7 09:16:12.840: INFO: Pod "my-hostname-basic-cce98c83-f87e-407b-b5cf-cc5190dc75cf-xtvtw" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-07 09:16:07 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-07 09:16:09 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-07 09:16:09 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-07 09:16:07 +0000 UTC Reason: Message:}]) +Sep 7 09:16:12.840: INFO: Trying to dial the pod +Sep 7 09:16:17.856: INFO: Controller my-hostname-basic-cce98c83-f87e-407b-b5cf-cc5190dc75cf: Got expected result from replica 1 [my-hostname-basic-cce98c83-f87e-407b-b5cf-cc5190dc75cf-xtvtw]: "my-hostname-basic-cce98c83-f87e-407b-b5cf-cc5190dc75cf-xtvtw", 1 of 1 required successes so far +[AfterEach] [sig-apps] ReplicaSet + test/e2e/framework/framework.go:188 +Sep 7 09:16:17.856: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "replicaset-9551" for this suite. 
+ +• [SLOW TEST:10.108 seconds] +[sig-apps] ReplicaSet +test/e2e/apps/framework.go:23 + should serve a basic image on each replica with a public image [Conformance] + test/e2e/framework/framework.go:652 +------------------------------ +{"msg":"PASSED [sig-apps] ReplicaSet should serve a basic image on each replica with a public image [Conformance]","total":356,"completed":337,"skipped":6303,"failed":0} +[sig-node] Pods + should contain environment variables for services [NodeConformance] [Conformance] + test/e2e/framework/framework.go:652 +[BeforeEach] [sig-node] Pods + test/e2e/framework/framework.go:187 +STEP: Creating a kubernetes client +Sep 7 09:16:17.867: INFO: >>> kubeConfig: /tmp/kubeconfig-1156948534 +STEP: Building a namespace api object, basename pods +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[BeforeEach] [sig-node] Pods + test/e2e/common/node/pods.go:191 +[It] should contain environment variables for services [NodeConformance] [Conformance] + test/e2e/framework/framework.go:652 +Sep 7 09:16:17.944: INFO: The status of Pod server-envvars-139b48ce-8fea-4bf2-a79c-930bed33436f is Pending, waiting for it to be Running (with Ready = true) +Sep 7 09:16:19.956: INFO: The status of Pod server-envvars-139b48ce-8fea-4bf2-a79c-930bed33436f is Running (Ready = true) +Sep 7 09:16:19.988: INFO: Waiting up to 5m0s for pod "client-envvars-919b6d3c-a516-4e26-804c-5f0c2dcfafc4" in namespace "pods-162" to be "Succeeded or Failed" +Sep 7 09:16:20.007: INFO: Pod "client-envvars-919b6d3c-a516-4e26-804c-5f0c2dcfafc4": Phase="Pending", Reason="", readiness=false. Elapsed: 19.022459ms +Sep 7 09:16:22.019: INFO: Pod "client-envvars-919b6d3c-a516-4e26-804c-5f0c2dcfafc4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.03125426s +Sep 7 09:16:24.068: INFO: Pod "client-envvars-919b6d3c-a516-4e26-804c-5f0c2dcfafc4": Phase="Pending", Reason="", readiness=false. 
Elapsed: 4.080134184s +Sep 7 09:16:26.077: INFO: Pod "client-envvars-919b6d3c-a516-4e26-804c-5f0c2dcfafc4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.089003935s +STEP: Saw pod success +Sep 7 09:16:26.077: INFO: Pod "client-envvars-919b6d3c-a516-4e26-804c-5f0c2dcfafc4" satisfied condition "Succeeded or Failed" +Sep 7 09:16:26.082: INFO: Trying to get logs from node 172.31.51.96 pod client-envvars-919b6d3c-a516-4e26-804c-5f0c2dcfafc4 container env3cont: +STEP: delete the pod +Sep 7 09:16:26.104: INFO: Waiting for pod client-envvars-919b6d3c-a516-4e26-804c-5f0c2dcfafc4 to disappear +Sep 7 09:16:26.106: INFO: Pod client-envvars-919b6d3c-a516-4e26-804c-5f0c2dcfafc4 no longer exists +[AfterEach] [sig-node] Pods + test/e2e/framework/framework.go:188 +Sep 7 09:16:26.106: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "pods-162" for this suite. + +• [SLOW TEST:8.248 seconds] +[sig-node] Pods +test/e2e/common/node/framework.go:23 + should contain environment variables for services [NodeConformance] [Conformance] + test/e2e/framework/framework.go:652 +------------------------------ +{"msg":"PASSED [sig-node] Pods should contain environment variables for services [NodeConformance] [Conformance]","total":356,"completed":338,"skipped":6303,"failed":0} +SSSSSSSSSSSSSSSSSS +------------------------------ +[sig-storage] Subpath Atomic writer volumes + should support subpaths with configmap pod with mountPath of existing file [Conformance] + test/e2e/framework/framework.go:652 +[BeforeEach] [sig-storage] Subpath + test/e2e/framework/framework.go:187 +STEP: Creating a kubernetes client +Sep 7 09:16:26.116: INFO: >>> kubeConfig: /tmp/kubeconfig-1156948534 +STEP: Building a namespace api object, basename subpath +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[BeforeEach] Atomic writer volumes + test/e2e/storage/subpath.go:40 +STEP: 
Setting up data +[It] should support subpaths with configmap pod with mountPath of existing file [Conformance] + test/e2e/framework/framework.go:652 +STEP: Creating pod pod-subpath-test-configmap-f9k8 +STEP: Creating a pod to test atomic-volume-subpath +Sep 7 09:16:26.188: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-f9k8" in namespace "subpath-9715" to be "Succeeded or Failed" +Sep 7 09:16:26.194: INFO: Pod "pod-subpath-test-configmap-f9k8": Phase="Pending", Reason="", readiness=false. Elapsed: 5.952275ms +Sep 7 09:16:28.206: INFO: Pod "pod-subpath-test-configmap-f9k8": Phase="Running", Reason="", readiness=true. Elapsed: 2.018144091s +Sep 7 09:16:30.212: INFO: Pod "pod-subpath-test-configmap-f9k8": Phase="Running", Reason="", readiness=true. Elapsed: 4.023740707s +Sep 7 09:16:32.233: INFO: Pod "pod-subpath-test-configmap-f9k8": Phase="Running", Reason="", readiness=true. Elapsed: 6.044265338s +Sep 7 09:16:34.247: INFO: Pod "pod-subpath-test-configmap-f9k8": Phase="Running", Reason="", readiness=true. Elapsed: 8.058577364s +Sep 7 09:16:36.252: INFO: Pod "pod-subpath-test-configmap-f9k8": Phase="Running", Reason="", readiness=true. Elapsed: 10.063934638s +Sep 7 09:16:38.261: INFO: Pod "pod-subpath-test-configmap-f9k8": Phase="Running", Reason="", readiness=true. Elapsed: 12.073066358s +Sep 7 09:16:40.274: INFO: Pod "pod-subpath-test-configmap-f9k8": Phase="Running", Reason="", readiness=true. Elapsed: 14.0859293s +Sep 7 09:16:42.285: INFO: Pod "pod-subpath-test-configmap-f9k8": Phase="Running", Reason="", readiness=true. Elapsed: 16.096569544s +Sep 7 09:16:44.301: INFO: Pod "pod-subpath-test-configmap-f9k8": Phase="Running", Reason="", readiness=true. Elapsed: 18.112494612s +Sep 7 09:16:46.305: INFO: Pod "pod-subpath-test-configmap-f9k8": Phase="Running", Reason="", readiness=true. Elapsed: 20.116751377s +Sep 7 09:16:48.322: INFO: Pod "pod-subpath-test-configmap-f9k8": Phase="Running", Reason="", readiness=false. 
Elapsed: 22.134017287s +Sep 7 09:16:50.338: INFO: Pod "pod-subpath-test-configmap-f9k8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.149704851s +STEP: Saw pod success +Sep 7 09:16:50.338: INFO: Pod "pod-subpath-test-configmap-f9k8" satisfied condition "Succeeded or Failed" +Sep 7 09:16:50.342: INFO: Trying to get logs from node 172.31.51.96 pod pod-subpath-test-configmap-f9k8 container test-container-subpath-configmap-f9k8: +STEP: delete the pod +Sep 7 09:16:50.366: INFO: Waiting for pod pod-subpath-test-configmap-f9k8 to disappear +Sep 7 09:16:50.368: INFO: Pod pod-subpath-test-configmap-f9k8 no longer exists +STEP: Deleting pod pod-subpath-test-configmap-f9k8 +Sep 7 09:16:50.368: INFO: Deleting pod "pod-subpath-test-configmap-f9k8" in namespace "subpath-9715" +[AfterEach] [sig-storage] Subpath + test/e2e/framework/framework.go:188 +Sep 7 09:16:50.371: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "subpath-9715" for this suite. + +• [SLOW TEST:24.262 seconds] +[sig-storage] Subpath +test/e2e/storage/utils/framework.go:23 + Atomic writer volumes + test/e2e/storage/subpath.go:36 + should support subpaths with configmap pod with mountPath of existing file [Conformance] + test/e2e/framework/framework.go:652 +------------------------------ +{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod with mountPath of existing file [Conformance]","total":356,"completed":339,"skipped":6321,"failed":0} +SS +------------------------------ +[sig-storage] Projected combined + should project all components that make up the projection API [Projection][NodeConformance] [Conformance] + test/e2e/framework/framework.go:652 +[BeforeEach] [sig-storage] Projected combined + test/e2e/framework/framework.go:187 +STEP: Creating a kubernetes client +Sep 7 09:16:50.378: INFO: >>> kubeConfig: /tmp/kubeconfig-1156948534 +STEP: Building a namespace api object, basename projected +STEP: Waiting 
for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[It] should project all components that make up the projection API [Projection][NodeConformance] [Conformance] + test/e2e/framework/framework.go:652 +STEP: Creating configMap with name configmap-projected-all-test-volume-4a06b548-abdb-44a5-bb79-76448fd927ff +STEP: Creating secret with name secret-projected-all-test-volume-87ee4ffc-df61-4030-8521-0d81a1e21296 +STEP: Creating a pod to test Check all projections for projected volume plugin +Sep 7 09:16:50.433: INFO: Waiting up to 5m0s for pod "projected-volume-69530816-7c5f-4bbc-9edf-e27d4c952546" in namespace "projected-2443" to be "Succeeded or Failed" +Sep 7 09:16:50.462: INFO: Pod "projected-volume-69530816-7c5f-4bbc-9edf-e27d4c952546": Phase="Pending", Reason="", readiness=false. Elapsed: 29.053513ms +Sep 7 09:16:52.476: INFO: Pod "projected-volume-69530816-7c5f-4bbc-9edf-e27d4c952546": Phase="Pending", Reason="", readiness=false. Elapsed: 2.042638707s +Sep 7 09:16:54.551: INFO: Pod "projected-volume-69530816-7c5f-4bbc-9edf-e27d4c952546": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.118105159s +STEP: Saw pod success +Sep 7 09:16:54.551: INFO: Pod "projected-volume-69530816-7c5f-4bbc-9edf-e27d4c952546" satisfied condition "Succeeded or Failed" +Sep 7 09:16:54.555: INFO: Trying to get logs from node 172.31.51.96 pod projected-volume-69530816-7c5f-4bbc-9edf-e27d4c952546 container projected-all-volume-test: +STEP: delete the pod +Sep 7 09:16:54.576: INFO: Waiting for pod projected-volume-69530816-7c5f-4bbc-9edf-e27d4c952546 to disappear +Sep 7 09:16:54.578: INFO: Pod projected-volume-69530816-7c5f-4bbc-9edf-e27d4c952546 no longer exists +[AfterEach] [sig-storage] Projected combined + test/e2e/framework/framework.go:188 +Sep 7 09:16:54.578: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "projected-2443" for this suite. 
+•{"msg":"PASSED [sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance]","total":356,"completed":340,"skipped":6323,"failed":0} +SSSSS +------------------------------ +[sig-apps] Deployment + deployment should delete old replica sets [Conformance] + test/e2e/framework/framework.go:652 +[BeforeEach] [sig-apps] Deployment + test/e2e/framework/framework.go:187 +STEP: Creating a kubernetes client +Sep 7 09:16:54.586: INFO: >>> kubeConfig: /tmp/kubeconfig-1156948534 +STEP: Building a namespace api object, basename deployment +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[BeforeEach] [sig-apps] Deployment + test/e2e/apps/deployment.go:91 +[It] deployment should delete old replica sets [Conformance] + test/e2e/framework/framework.go:652 +Sep 7 09:16:54.658: INFO: Pod name cleanup-pod: Found 0 pods out of 1 +Sep 7 09:16:59.684: INFO: Pod name cleanup-pod: Found 1 pods out of 1 +STEP: ensuring each pod is running +Sep 7 09:16:59.684: INFO: Creating deployment test-cleanup-deployment +STEP: Waiting for deployment test-cleanup-deployment history to be cleaned up +[AfterEach] [sig-apps] Deployment + test/e2e/apps/deployment.go:84 +Sep 7 09:16:59.740: INFO: Deployment "test-cleanup-deployment": +&Deployment{ObjectMeta:{test-cleanup-deployment deployment-8492 25b89f55-fff5-4cda-a69e-20a7a901e9e9 32716 1 2022-09-07 09:16:59 +0000 UTC map[name:cleanup-pod] map[] [] [] [{e2e.test Update apps/v1 2022-09-07 09:16:59 +0000 UTC FieldsV1 
{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:progressDeadlineSeconds":{},"f:replicas":{},"f:revisionHistoryLimit":{},"f:selector":{},"f:strategy":{"f:rollingUpdate":{".":{},"f:maxSurge":{},"f:maxUnavailable":{}},"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} }]},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:cleanup-pod] map[] [] [] []} {[] [] [{agnhost k8s.gcr.io/e2e-test-images/agnhost:2.39 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc0051bf428 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] 
nil}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%!,(MISSING)MaxSurge:25%!,(MISSING)},},MinReadySeconds:0,RevisionHistoryLimit:*0,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:0,Replicas:0,UpdatedReplicas:0,AvailableReplicas:0,UnavailableReplicas:0,Conditions:[]DeploymentCondition{},ReadyReplicas:0,CollisionCount:nil,},} + +Sep 7 09:16:59.752: INFO: New ReplicaSet of Deployment "test-cleanup-deployment" is nil. +[AfterEach] [sig-apps] Deployment + test/e2e/framework/framework.go:188 +Sep 7 09:16:59.771: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "deployment-8492" for this suite. + +• [SLOW TEST:5.222 seconds] +[sig-apps] Deployment +test/e2e/apps/framework.go:23 + deployment should delete old replica sets [Conformance] + test/e2e/framework/framework.go:652 +------------------------------ +{"msg":"PASSED [sig-apps] Deployment deployment should delete old replica sets [Conformance]","total":356,"completed":341,"skipped":6328,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-network] Services + should be able to change the type from ClusterIP to ExternalName [Conformance] + test/e2e/framework/framework.go:652 +[BeforeEach] [sig-network] Services + test/e2e/framework/framework.go:187 +STEP: Creating a kubernetes client +Sep 7 09:16:59.809: INFO: >>> kubeConfig: /tmp/kubeconfig-1156948534 +STEP: Building a namespace api object, basename services +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[BeforeEach] [sig-network] Services + test/e2e/network/service.go:758 +[It] should be able to change the type from ClusterIP to ExternalName [Conformance] + test/e2e/framework/framework.go:652 +STEP: creating a service clusterip-service with the type=ClusterIP in namespace services-6141 +STEP: Creating 
active service to test reachability when its FQDN is referred as externalName for another service +STEP: creating service externalsvc in namespace services-6141 +STEP: creating replication controller externalsvc in namespace services-6141 +I0907 09:16:59.936704 19 runners.go:193] Created replication controller with name: externalsvc, namespace: services-6141, replica count: 2 +I0907 09:17:02.994636 19 runners.go:193] externalsvc Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady +STEP: changing the ClusterIP service to type=ExternalName +Sep 7 09:17:03.023: INFO: Creating new exec pod +Sep 7 09:17:05.073: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1156948534 --namespace=services-6141 exec execpod8j4m8 -- /bin/sh -x -c nslookup clusterip-service.services-6141.svc.cluster.local' +Sep 7 09:17:05.466: INFO: stderr: "+ nslookup clusterip-service.services-6141.svc.cluster.local\n" +Sep 7 09:17:05.466: INFO: stdout: "Server:\t\t169.254.20.10\nAddress:\t169.254.20.10#53\n\nclusterip-service.services-6141.svc.cluster.local\tcanonical name = externalsvc.services-6141.svc.cluster.local.\nName:\texternalsvc.services-6141.svc.cluster.local\nAddress: 10.68.208.238\n\n" +STEP: deleting ReplicationController externalsvc in namespace services-6141, will wait for the garbage collector to delete the pods +Sep 7 09:17:05.531: INFO: Deleting ReplicationController externalsvc took: 7.687085ms +Sep 7 09:17:05.640: INFO: Terminating ReplicationController externalsvc pods took: 109.321604ms +Sep 7 09:17:07.867: INFO: Cleaning up the ClusterIP to ExternalName test service +[AfterEach] [sig-network] Services + test/e2e/framework/framework.go:188 +Sep 7 09:17:07.881: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "services-6141" for this suite. 
+[AfterEach] [sig-network] Services + test/e2e/network/service.go:762 + +• [SLOW TEST:8.092 seconds] +[sig-network] Services +test/e2e/network/common/framework.go:23 + should be able to change the type from ClusterIP to ExternalName [Conformance] + test/e2e/framework/framework.go:652 +------------------------------ +{"msg":"PASSED [sig-network] Services should be able to change the type from ClusterIP to ExternalName [Conformance]","total":356,"completed":342,"skipped":6366,"failed":0} +SSSSSSSSSSSSSSS +------------------------------ +[sig-node] Probing container + should have monotonically increasing restart count [NodeConformance] [Conformance] + test/e2e/framework/framework.go:652 +[BeforeEach] [sig-node] Probing container + test/e2e/framework/framework.go:187 +STEP: Creating a kubernetes client +Sep 7 09:17:07.901: INFO: >>> kubeConfig: /tmp/kubeconfig-1156948534 +STEP: Building a namespace api object, basename container-probe +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[BeforeEach] [sig-node] Probing container + test/e2e/common/node/container_probe.go:61 +[It] should have monotonically increasing restart count [NodeConformance] [Conformance] + test/e2e/framework/framework.go:652 +STEP: Creating pod liveness-9b2d23c6-ff64-4ae4-bf20-bcb9af507a98 in namespace container-probe-1742 +Sep 7 09:17:09.978: INFO: Started pod liveness-9b2d23c6-ff64-4ae4-bf20-bcb9af507a98 in namespace container-probe-1742 +STEP: checking the pod's current state and verifying that restartCount is present +Sep 7 09:17:09.982: INFO: Initial restart count of pod liveness-9b2d23c6-ff64-4ae4-bf20-bcb9af507a98 is 0 +Sep 7 09:17:30.096: INFO: Restart count of pod container-probe-1742/liveness-9b2d23c6-ff64-4ae4-bf20-bcb9af507a98 is now 1 (20.114074378s elapsed) +Sep 7 09:17:50.185: INFO: Restart count of pod container-probe-1742/liveness-9b2d23c6-ff64-4ae4-bf20-bcb9af507a98 is now 2 (40.203066484s 
elapsed) +Sep 7 09:18:10.273: INFO: Restart count of pod container-probe-1742/liveness-9b2d23c6-ff64-4ae4-bf20-bcb9af507a98 is now 3 (1m0.29101092s elapsed) +Sep 7 09:18:30.370: INFO: Restart count of pod container-probe-1742/liveness-9b2d23c6-ff64-4ae4-bf20-bcb9af507a98 is now 4 (1m20.388372632s elapsed) +Sep 7 09:19:30.735: INFO: Restart count of pod container-probe-1742/liveness-9b2d23c6-ff64-4ae4-bf20-bcb9af507a98 is now 5 (2m20.753434478s elapsed) +STEP: deleting the pod +[AfterEach] [sig-node] Probing container + test/e2e/framework/framework.go:188 +Sep 7 09:19:30.747: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "container-probe-1742" for this suite. + +• [SLOW TEST:142.857 seconds] +[sig-node] Probing container +test/e2e/common/node/framework.go:23 + should have monotonically increasing restart count [NodeConformance] [Conformance] + test/e2e/framework/framework.go:652 +------------------------------ +{"msg":"PASSED [sig-node] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]","total":356,"completed":343,"skipped":6381,"failed":0} +SSSSSS +------------------------------ +[sig-node] Kubelet when scheduling a busybox command that always fails in a pod + should have an terminated reason [NodeConformance] [Conformance] + test/e2e/framework/framework.go:652 +[BeforeEach] [sig-node] Kubelet + test/e2e/framework/framework.go:187 +STEP: Creating a kubernetes client +Sep 7 09:19:30.757: INFO: >>> kubeConfig: /tmp/kubeconfig-1156948534 +STEP: Building a namespace api object, basename kubelet-test +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[BeforeEach] [sig-node] Kubelet + test/e2e/common/node/kubelet.go:40 +[BeforeEach] when scheduling a busybox command that always fails in a pod + test/e2e/common/node/kubelet.go:84 +[It] should have an terminated reason [NodeConformance] 
[Conformance] + test/e2e/framework/framework.go:652 +[AfterEach] [sig-node] Kubelet + test/e2e/framework/framework.go:188 +Sep 7 09:19:34.882: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "kubelet-test-1331" for this suite. +•{"msg":"PASSED [sig-node] Kubelet when scheduling a busybox command that always fails in a pod should have an terminated reason [NodeConformance] [Conformance]","total":356,"completed":344,"skipped":6387,"failed":0} +SS +------------------------------ +[sig-instrumentation] Events + should delete a collection of events [Conformance] + test/e2e/framework/framework.go:652 +[BeforeEach] [sig-instrumentation] Events + test/e2e/framework/framework.go:187 +STEP: Creating a kubernetes client +Sep 7 09:19:34.893: INFO: >>> kubeConfig: /tmp/kubeconfig-1156948534 +STEP: Building a namespace api object, basename events +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[It] should delete a collection of events [Conformance] + test/e2e/framework/framework.go:652 +STEP: Create set of events +Sep 7 09:19:34.935: INFO: created test-event-1 +Sep 7 09:19:34.939: INFO: created test-event-2 +Sep 7 09:19:34.943: INFO: created test-event-3 +STEP: get a list of Events with a label in the current namespace +STEP: delete collection of events +Sep 7 09:19:34.946: INFO: requesting DeleteCollection of events +STEP: check that the list of events matches the requested quantity +Sep 7 09:19:34.958: INFO: requesting list of events to confirm quantity +[AfterEach] [sig-instrumentation] Events + test/e2e/framework/framework.go:188 +Sep 7 09:19:34.961: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "events-8851" for this suite. 
+•{"msg":"PASSED [sig-instrumentation] Events should delete a collection of events [Conformance]","total":356,"completed":345,"skipped":6389,"failed":0} +SS +------------------------------ +[sig-network] DNS + should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance] + test/e2e/framework/framework.go:652 +[BeforeEach] [sig-network] DNS + test/e2e/framework/framework.go:187 +STEP: Creating a kubernetes client +Sep 7 09:19:34.970: INFO: >>> kubeConfig: /tmp/kubeconfig-1156948534 +STEP: Building a namespace api object, basename dns +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[It] should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance] + test/e2e/framework/framework.go:652 +STEP: Creating a test headless service +STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-5590 A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-5590;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-5590 A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-5590;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-5590.svc A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-5590.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-5590.svc A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-5590.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-5590.svc SRV)" && test -n "$$check" && echo OK > 
/results/wheezy_udp@_http._tcp.dns-test-service.dns-5590.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-5590.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-5590.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-5590.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-5590.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-5590.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-5590.svc;check="$$(dig +notcp +noall +answer +search 156.116.68.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.68.116.156_udp@PTR;check="$$(dig +tcp +noall +answer +search 156.116.68.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.68.116.156_tcp@PTR;sleep 1; done + +STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-5590 A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-5590;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-5590 A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-5590;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-5590.svc A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-5590.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-5590.svc A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-5590.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-5590.svc SRV)" && test -n "$$check" && echo OK > 
/results/jessie_udp@_http._tcp.dns-test-service.dns-5590.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-5590.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-5590.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-5590.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-5590.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-5590.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-5590.svc;check="$$(dig +notcp +noall +answer +search 156.116.68.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.68.116.156_udp@PTR;check="$$(dig +tcp +noall +answer +search 156.116.68.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.68.116.156_tcp@PTR;sleep 1; done + +STEP: creating a pod to probe DNS +STEP: submitting the pod to kubernetes +STEP: retrieving the pod +STEP: looking for the results for each expected name from probers +Sep 7 09:19:39.080: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-5590/dns-test-d6143871-c47b-4e6b-a9f6-9e55a78cde44: the server could not find the requested resource (get pods dns-test-d6143871-c47b-4e6b-a9f6-9e55a78cde44) +Sep 7 09:19:39.083: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-5590/dns-test-d6143871-c47b-4e6b-a9f6-9e55a78cde44: the server could not find the requested resource (get pods dns-test-d6143871-c47b-4e6b-a9f6-9e55a78cde44) +Sep 7 09:19:39.087: INFO: Unable to read wheezy_udp@dns-test-service.dns-5590 from pod dns-5590/dns-test-d6143871-c47b-4e6b-a9f6-9e55a78cde44: the server could not find the requested resource (get pods dns-test-d6143871-c47b-4e6b-a9f6-9e55a78cde44) +Sep 7 09:19:39.090: INFO: Unable to read wheezy_tcp@dns-test-service.dns-5590 from pod dns-5590/dns-test-d6143871-c47b-4e6b-a9f6-9e55a78cde44: the server could not find the requested resource 
(get pods dns-test-d6143871-c47b-4e6b-a9f6-9e55a78cde44) +Sep 7 09:19:39.094: INFO: Unable to read wheezy_udp@dns-test-service.dns-5590.svc from pod dns-5590/dns-test-d6143871-c47b-4e6b-a9f6-9e55a78cde44: the server could not find the requested resource (get pods dns-test-d6143871-c47b-4e6b-a9f6-9e55a78cde44) +Sep 7 09:19:39.097: INFO: Unable to read wheezy_tcp@dns-test-service.dns-5590.svc from pod dns-5590/dns-test-d6143871-c47b-4e6b-a9f6-9e55a78cde44: the server could not find the requested resource (get pods dns-test-d6143871-c47b-4e6b-a9f6-9e55a78cde44) +Sep 7 09:19:39.099: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-5590.svc from pod dns-5590/dns-test-d6143871-c47b-4e6b-a9f6-9e55a78cde44: the server could not find the requested resource (get pods dns-test-d6143871-c47b-4e6b-a9f6-9e55a78cde44) +Sep 7 09:19:39.102: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-5590.svc from pod dns-5590/dns-test-d6143871-c47b-4e6b-a9f6-9e55a78cde44: the server could not find the requested resource (get pods dns-test-d6143871-c47b-4e6b-a9f6-9e55a78cde44) +Sep 7 09:19:39.116: INFO: Unable to read jessie_udp@dns-test-service from pod dns-5590/dns-test-d6143871-c47b-4e6b-a9f6-9e55a78cde44: the server could not find the requested resource (get pods dns-test-d6143871-c47b-4e6b-a9f6-9e55a78cde44) +Sep 7 09:19:39.119: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-5590/dns-test-d6143871-c47b-4e6b-a9f6-9e55a78cde44: the server could not find the requested resource (get pods dns-test-d6143871-c47b-4e6b-a9f6-9e55a78cde44) +Sep 7 09:19:39.121: INFO: Unable to read jessie_udp@dns-test-service.dns-5590 from pod dns-5590/dns-test-d6143871-c47b-4e6b-a9f6-9e55a78cde44: the server could not find the requested resource (get pods dns-test-d6143871-c47b-4e6b-a9f6-9e55a78cde44) +Sep 7 09:19:39.124: INFO: Unable to read jessie_tcp@dns-test-service.dns-5590 from pod dns-5590/dns-test-d6143871-c47b-4e6b-a9f6-9e55a78cde44: the server could not find 
the requested resource (get pods dns-test-d6143871-c47b-4e6b-a9f6-9e55a78cde44) +Sep 7 09:19:39.130: INFO: Unable to read jessie_udp@dns-test-service.dns-5590.svc from pod dns-5590/dns-test-d6143871-c47b-4e6b-a9f6-9e55a78cde44: the server could not find the requested resource (get pods dns-test-d6143871-c47b-4e6b-a9f6-9e55a78cde44) +Sep 7 09:19:39.135: INFO: Unable to read jessie_tcp@dns-test-service.dns-5590.svc from pod dns-5590/dns-test-d6143871-c47b-4e6b-a9f6-9e55a78cde44: the server could not find the requested resource (get pods dns-test-d6143871-c47b-4e6b-a9f6-9e55a78cde44) +Sep 7 09:19:39.141: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-5590.svc from pod dns-5590/dns-test-d6143871-c47b-4e6b-a9f6-9e55a78cde44: the server could not find the requested resource (get pods dns-test-d6143871-c47b-4e6b-a9f6-9e55a78cde44) +Sep 7 09:19:39.146: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-5590.svc from pod dns-5590/dns-test-d6143871-c47b-4e6b-a9f6-9e55a78cde44: the server could not find the requested resource (get pods dns-test-d6143871-c47b-4e6b-a9f6-9e55a78cde44) +Sep 7 09:19:39.156: INFO: Lookups using dns-5590/dns-test-d6143871-c47b-4e6b-a9f6-9e55a78cde44 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-5590 wheezy_tcp@dns-test-service.dns-5590 wheezy_udp@dns-test-service.dns-5590.svc wheezy_tcp@dns-test-service.dns-5590.svc wheezy_udp@_http._tcp.dns-test-service.dns-5590.svc wheezy_tcp@_http._tcp.dns-test-service.dns-5590.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-5590 jessie_tcp@dns-test-service.dns-5590 jessie_udp@dns-test-service.dns-5590.svc jessie_tcp@dns-test-service.dns-5590.svc jessie_udp@_http._tcp.dns-test-service.dns-5590.svc jessie_tcp@_http._tcp.dns-test-service.dns-5590.svc] + +Sep 7 09:19:44.162: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-5590/dns-test-d6143871-c47b-4e6b-a9f6-9e55a78cde44: 
the server could not find the requested resource (get pods dns-test-d6143871-c47b-4e6b-a9f6-9e55a78cde44) +Sep 7 09:19:44.165: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-5590/dns-test-d6143871-c47b-4e6b-a9f6-9e55a78cde44: the server could not find the requested resource (get pods dns-test-d6143871-c47b-4e6b-a9f6-9e55a78cde44) +Sep 7 09:19:44.168: INFO: Unable to read wheezy_udp@dns-test-service.dns-5590 from pod dns-5590/dns-test-d6143871-c47b-4e6b-a9f6-9e55a78cde44: the server could not find the requested resource (get pods dns-test-d6143871-c47b-4e6b-a9f6-9e55a78cde44) +Sep 7 09:19:44.170: INFO: Unable to read wheezy_tcp@dns-test-service.dns-5590 from pod dns-5590/dns-test-d6143871-c47b-4e6b-a9f6-9e55a78cde44: the server could not find the requested resource (get pods dns-test-d6143871-c47b-4e6b-a9f6-9e55a78cde44) +Sep 7 09:19:44.177: INFO: Unable to read wheezy_udp@dns-test-service.dns-5590.svc from pod dns-5590/dns-test-d6143871-c47b-4e6b-a9f6-9e55a78cde44: the server could not find the requested resource (get pods dns-test-d6143871-c47b-4e6b-a9f6-9e55a78cde44) +Sep 7 09:19:44.179: INFO: Unable to read wheezy_tcp@dns-test-service.dns-5590.svc from pod dns-5590/dns-test-d6143871-c47b-4e6b-a9f6-9e55a78cde44: the server could not find the requested resource (get pods dns-test-d6143871-c47b-4e6b-a9f6-9e55a78cde44) +Sep 7 09:19:44.181: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-5590.svc from pod dns-5590/dns-test-d6143871-c47b-4e6b-a9f6-9e55a78cde44: the server could not find the requested resource (get pods dns-test-d6143871-c47b-4e6b-a9f6-9e55a78cde44) +Sep 7 09:19:44.183: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-5590.svc from pod dns-5590/dns-test-d6143871-c47b-4e6b-a9f6-9e55a78cde44: the server could not find the requested resource (get pods dns-test-d6143871-c47b-4e6b-a9f6-9e55a78cde44) +Sep 7 09:19:44.197: INFO: Unable to read jessie_udp@dns-test-service from pod 
dns-5590/dns-test-d6143871-c47b-4e6b-a9f6-9e55a78cde44: the server could not find the requested resource (get pods dns-test-d6143871-c47b-4e6b-a9f6-9e55a78cde44) +Sep 7 09:19:44.199: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-5590/dns-test-d6143871-c47b-4e6b-a9f6-9e55a78cde44: the server could not find the requested resource (get pods dns-test-d6143871-c47b-4e6b-a9f6-9e55a78cde44) +Sep 7 09:19:44.201: INFO: Unable to read jessie_udp@dns-test-service.dns-5590 from pod dns-5590/dns-test-d6143871-c47b-4e6b-a9f6-9e55a78cde44: the server could not find the requested resource (get pods dns-test-d6143871-c47b-4e6b-a9f6-9e55a78cde44) +Sep 7 09:19:44.203: INFO: Unable to read jessie_tcp@dns-test-service.dns-5590 from pod dns-5590/dns-test-d6143871-c47b-4e6b-a9f6-9e55a78cde44: the server could not find the requested resource (get pods dns-test-d6143871-c47b-4e6b-a9f6-9e55a78cde44) +Sep 7 09:19:44.206: INFO: Unable to read jessie_udp@dns-test-service.dns-5590.svc from pod dns-5590/dns-test-d6143871-c47b-4e6b-a9f6-9e55a78cde44: the server could not find the requested resource (get pods dns-test-d6143871-c47b-4e6b-a9f6-9e55a78cde44) +Sep 7 09:19:44.208: INFO: Unable to read jessie_tcp@dns-test-service.dns-5590.svc from pod dns-5590/dns-test-d6143871-c47b-4e6b-a9f6-9e55a78cde44: the server could not find the requested resource (get pods dns-test-d6143871-c47b-4e6b-a9f6-9e55a78cde44) +Sep 7 09:19:44.210: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-5590.svc from pod dns-5590/dns-test-d6143871-c47b-4e6b-a9f6-9e55a78cde44: the server could not find the requested resource (get pods dns-test-d6143871-c47b-4e6b-a9f6-9e55a78cde44) +Sep 7 09:19:44.212: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-5590.svc from pod dns-5590/dns-test-d6143871-c47b-4e6b-a9f6-9e55a78cde44: the server could not find the requested resource (get pods dns-test-d6143871-c47b-4e6b-a9f6-9e55a78cde44) +Sep 7 09:19:44.221: INFO: Lookups using 
dns-5590/dns-test-d6143871-c47b-4e6b-a9f6-9e55a78cde44 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-5590 wheezy_tcp@dns-test-service.dns-5590 wheezy_udp@dns-test-service.dns-5590.svc wheezy_tcp@dns-test-service.dns-5590.svc wheezy_udp@_http._tcp.dns-test-service.dns-5590.svc wheezy_tcp@_http._tcp.dns-test-service.dns-5590.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-5590 jessie_tcp@dns-test-service.dns-5590 jessie_udp@dns-test-service.dns-5590.svc jessie_tcp@dns-test-service.dns-5590.svc jessie_udp@_http._tcp.dns-test-service.dns-5590.svc jessie_tcp@_http._tcp.dns-test-service.dns-5590.svc] + +Sep 7 09:19:49.164: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-5590/dns-test-d6143871-c47b-4e6b-a9f6-9e55a78cde44: the server could not find the requested resource (get pods dns-test-d6143871-c47b-4e6b-a9f6-9e55a78cde44) +Sep 7 09:19:49.167: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-5590/dns-test-d6143871-c47b-4e6b-a9f6-9e55a78cde44: the server could not find the requested resource (get pods dns-test-d6143871-c47b-4e6b-a9f6-9e55a78cde44) +Sep 7 09:19:49.169: INFO: Unable to read wheezy_udp@dns-test-service.dns-5590 from pod dns-5590/dns-test-d6143871-c47b-4e6b-a9f6-9e55a78cde44: the server could not find the requested resource (get pods dns-test-d6143871-c47b-4e6b-a9f6-9e55a78cde44) +Sep 7 09:19:49.171: INFO: Unable to read wheezy_tcp@dns-test-service.dns-5590 from pod dns-5590/dns-test-d6143871-c47b-4e6b-a9f6-9e55a78cde44: the server could not find the requested resource (get pods dns-test-d6143871-c47b-4e6b-a9f6-9e55a78cde44) +Sep 7 09:19:49.173: INFO: Unable to read wheezy_udp@dns-test-service.dns-5590.svc from pod dns-5590/dns-test-d6143871-c47b-4e6b-a9f6-9e55a78cde44: the server could not find the requested resource (get pods dns-test-d6143871-c47b-4e6b-a9f6-9e55a78cde44) +Sep 7 09:19:49.179: INFO: Unable to read 
wheezy_tcp@dns-test-service.dns-5590.svc from pod dns-5590/dns-test-d6143871-c47b-4e6b-a9f6-9e55a78cde44: the server could not find the requested resource (get pods dns-test-d6143871-c47b-4e6b-a9f6-9e55a78cde44) +Sep 7 09:19:49.181: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-5590.svc from pod dns-5590/dns-test-d6143871-c47b-4e6b-a9f6-9e55a78cde44: the server could not find the requested resource (get pods dns-test-d6143871-c47b-4e6b-a9f6-9e55a78cde44) +Sep 7 09:19:49.183: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-5590.svc from pod dns-5590/dns-test-d6143871-c47b-4e6b-a9f6-9e55a78cde44: the server could not find the requested resource (get pods dns-test-d6143871-c47b-4e6b-a9f6-9e55a78cde44) +Sep 7 09:19:49.196: INFO: Unable to read jessie_udp@dns-test-service from pod dns-5590/dns-test-d6143871-c47b-4e6b-a9f6-9e55a78cde44: the server could not find the requested resource (get pods dns-test-d6143871-c47b-4e6b-a9f6-9e55a78cde44) +Sep 7 09:19:49.198: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-5590/dns-test-d6143871-c47b-4e6b-a9f6-9e55a78cde44: the server could not find the requested resource (get pods dns-test-d6143871-c47b-4e6b-a9f6-9e55a78cde44) +Sep 7 09:19:49.200: INFO: Unable to read jessie_udp@dns-test-service.dns-5590 from pod dns-5590/dns-test-d6143871-c47b-4e6b-a9f6-9e55a78cde44: the server could not find the requested resource (get pods dns-test-d6143871-c47b-4e6b-a9f6-9e55a78cde44) +Sep 7 09:19:49.202: INFO: Unable to read jessie_tcp@dns-test-service.dns-5590 from pod dns-5590/dns-test-d6143871-c47b-4e6b-a9f6-9e55a78cde44: the server could not find the requested resource (get pods dns-test-d6143871-c47b-4e6b-a9f6-9e55a78cde44) +Sep 7 09:19:49.205: INFO: Unable to read jessie_udp@dns-test-service.dns-5590.svc from pod dns-5590/dns-test-d6143871-c47b-4e6b-a9f6-9e55a78cde44: the server could not find the requested resource (get pods dns-test-d6143871-c47b-4e6b-a9f6-9e55a78cde44) +Sep 7 09:19:49.207: 
INFO: Unable to read jessie_tcp@dns-test-service.dns-5590.svc from pod dns-5590/dns-test-d6143871-c47b-4e6b-a9f6-9e55a78cde44: the server could not find the requested resource (get pods dns-test-d6143871-c47b-4e6b-a9f6-9e55a78cde44) +Sep 7 09:19:49.209: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-5590.svc from pod dns-5590/dns-test-d6143871-c47b-4e6b-a9f6-9e55a78cde44: the server could not find the requested resource (get pods dns-test-d6143871-c47b-4e6b-a9f6-9e55a78cde44) +Sep 7 09:19:49.212: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-5590.svc from pod dns-5590/dns-test-d6143871-c47b-4e6b-a9f6-9e55a78cde44: the server could not find the requested resource (get pods dns-test-d6143871-c47b-4e6b-a9f6-9e55a78cde44) +Sep 7 09:19:49.220: INFO: Lookups using dns-5590/dns-test-d6143871-c47b-4e6b-a9f6-9e55a78cde44 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-5590 wheezy_tcp@dns-test-service.dns-5590 wheezy_udp@dns-test-service.dns-5590.svc wheezy_tcp@dns-test-service.dns-5590.svc wheezy_udp@_http._tcp.dns-test-service.dns-5590.svc wheezy_tcp@_http._tcp.dns-test-service.dns-5590.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-5590 jessie_tcp@dns-test-service.dns-5590 jessie_udp@dns-test-service.dns-5590.svc jessie_tcp@dns-test-service.dns-5590.svc jessie_udp@_http._tcp.dns-test-service.dns-5590.svc jessie_tcp@_http._tcp.dns-test-service.dns-5590.svc] + +Sep 7 09:19:54.161: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-5590/dns-test-d6143871-c47b-4e6b-a9f6-9e55a78cde44: the server could not find the requested resource (get pods dns-test-d6143871-c47b-4e6b-a9f6-9e55a78cde44) +Sep 7 09:19:54.169: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-5590/dns-test-d6143871-c47b-4e6b-a9f6-9e55a78cde44: the server could not find the requested resource (get pods dns-test-d6143871-c47b-4e6b-a9f6-9e55a78cde44) +Sep 7 
09:19:54.178: INFO: Unable to read wheezy_udp@dns-test-service.dns-5590 from pod dns-5590/dns-test-d6143871-c47b-4e6b-a9f6-9e55a78cde44: the server could not find the requested resource (get pods dns-test-d6143871-c47b-4e6b-a9f6-9e55a78cde44) +Sep 7 09:19:54.181: INFO: Unable to read wheezy_tcp@dns-test-service.dns-5590 from pod dns-5590/dns-test-d6143871-c47b-4e6b-a9f6-9e55a78cde44: the server could not find the requested resource (get pods dns-test-d6143871-c47b-4e6b-a9f6-9e55a78cde44) +Sep 7 09:19:54.184: INFO: Unable to read wheezy_udp@dns-test-service.dns-5590.svc from pod dns-5590/dns-test-d6143871-c47b-4e6b-a9f6-9e55a78cde44: the server could not find the requested resource (get pods dns-test-d6143871-c47b-4e6b-a9f6-9e55a78cde44) +Sep 7 09:19:54.186: INFO: Unable to read wheezy_tcp@dns-test-service.dns-5590.svc from pod dns-5590/dns-test-d6143871-c47b-4e6b-a9f6-9e55a78cde44: the server could not find the requested resource (get pods dns-test-d6143871-c47b-4e6b-a9f6-9e55a78cde44) +Sep 7 09:19:54.200: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-5590.svc from pod dns-5590/dns-test-d6143871-c47b-4e6b-a9f6-9e55a78cde44: the server could not find the requested resource (get pods dns-test-d6143871-c47b-4e6b-a9f6-9e55a78cde44) +Sep 7 09:19:54.203: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-5590.svc from pod dns-5590/dns-test-d6143871-c47b-4e6b-a9f6-9e55a78cde44: the server could not find the requested resource (get pods dns-test-d6143871-c47b-4e6b-a9f6-9e55a78cde44) +Sep 7 09:19:54.217: INFO: Unable to read jessie_udp@dns-test-service from pod dns-5590/dns-test-d6143871-c47b-4e6b-a9f6-9e55a78cde44: the server could not find the requested resource (get pods dns-test-d6143871-c47b-4e6b-a9f6-9e55a78cde44) +Sep 7 09:19:54.220: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-5590/dns-test-d6143871-c47b-4e6b-a9f6-9e55a78cde44: the server could not find the requested resource (get pods 
dns-test-d6143871-c47b-4e6b-a9f6-9e55a78cde44) +Sep 7 09:19:54.224: INFO: Unable to read jessie_udp@dns-test-service.dns-5590 from pod dns-5590/dns-test-d6143871-c47b-4e6b-a9f6-9e55a78cde44: the server could not find the requested resource (get pods dns-test-d6143871-c47b-4e6b-a9f6-9e55a78cde44) +Sep 7 09:19:54.226: INFO: Unable to read jessie_tcp@dns-test-service.dns-5590 from pod dns-5590/dns-test-d6143871-c47b-4e6b-a9f6-9e55a78cde44: the server could not find the requested resource (get pods dns-test-d6143871-c47b-4e6b-a9f6-9e55a78cde44) +Sep 7 09:19:54.229: INFO: Unable to read jessie_udp@dns-test-service.dns-5590.svc from pod dns-5590/dns-test-d6143871-c47b-4e6b-a9f6-9e55a78cde44: the server could not find the requested resource (get pods dns-test-d6143871-c47b-4e6b-a9f6-9e55a78cde44) +Sep 7 09:19:54.231: INFO: Unable to read jessie_tcp@dns-test-service.dns-5590.svc from pod dns-5590/dns-test-d6143871-c47b-4e6b-a9f6-9e55a78cde44: the server could not find the requested resource (get pods dns-test-d6143871-c47b-4e6b-a9f6-9e55a78cde44) +Sep 7 09:19:54.233: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-5590.svc from pod dns-5590/dns-test-d6143871-c47b-4e6b-a9f6-9e55a78cde44: the server could not find the requested resource (get pods dns-test-d6143871-c47b-4e6b-a9f6-9e55a78cde44) +Sep 7 09:19:54.235: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-5590.svc from pod dns-5590/dns-test-d6143871-c47b-4e6b-a9f6-9e55a78cde44: the server could not find the requested resource (get pods dns-test-d6143871-c47b-4e6b-a9f6-9e55a78cde44) +Sep 7 09:19:54.251: INFO: Lookups using dns-5590/dns-test-d6143871-c47b-4e6b-a9f6-9e55a78cde44 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-5590 wheezy_tcp@dns-test-service.dns-5590 wheezy_udp@dns-test-service.dns-5590.svc wheezy_tcp@dns-test-service.dns-5590.svc wheezy_udp@_http._tcp.dns-test-service.dns-5590.svc 
wheezy_tcp@_http._tcp.dns-test-service.dns-5590.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-5590 jessie_tcp@dns-test-service.dns-5590 jessie_udp@dns-test-service.dns-5590.svc jessie_tcp@dns-test-service.dns-5590.svc jessie_udp@_http._tcp.dns-test-service.dns-5590.svc jessie_tcp@_http._tcp.dns-test-service.dns-5590.svc] + +Sep 7 09:19:59.162: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-5590/dns-test-d6143871-c47b-4e6b-a9f6-9e55a78cde44: the server could not find the requested resource (get pods dns-test-d6143871-c47b-4e6b-a9f6-9e55a78cde44) +Sep 7 09:19:59.165: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-5590/dns-test-d6143871-c47b-4e6b-a9f6-9e55a78cde44: the server could not find the requested resource (get pods dns-test-d6143871-c47b-4e6b-a9f6-9e55a78cde44) +Sep 7 09:19:59.169: INFO: Unable to read wheezy_udp@dns-test-service.dns-5590 from pod dns-5590/dns-test-d6143871-c47b-4e6b-a9f6-9e55a78cde44: the server could not find the requested resource (get pods dns-test-d6143871-c47b-4e6b-a9f6-9e55a78cde44) +Sep 7 09:19:59.171: INFO: Unable to read wheezy_tcp@dns-test-service.dns-5590 from pod dns-5590/dns-test-d6143871-c47b-4e6b-a9f6-9e55a78cde44: the server could not find the requested resource (get pods dns-test-d6143871-c47b-4e6b-a9f6-9e55a78cde44) +Sep 7 09:19:59.173: INFO: Unable to read wheezy_udp@dns-test-service.dns-5590.svc from pod dns-5590/dns-test-d6143871-c47b-4e6b-a9f6-9e55a78cde44: the server could not find the requested resource (get pods dns-test-d6143871-c47b-4e6b-a9f6-9e55a78cde44) +Sep 7 09:19:59.175: INFO: Unable to read wheezy_tcp@dns-test-service.dns-5590.svc from pod dns-5590/dns-test-d6143871-c47b-4e6b-a9f6-9e55a78cde44: the server could not find the requested resource (get pods dns-test-d6143871-c47b-4e6b-a9f6-9e55a78cde44) +Sep 7 09:19:59.179: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-5590.svc from pod 
dns-5590/dns-test-d6143871-c47b-4e6b-a9f6-9e55a78cde44: the server could not find the requested resource (get pods dns-test-d6143871-c47b-4e6b-a9f6-9e55a78cde44) +Sep 7 09:19:59.182: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-5590.svc from pod dns-5590/dns-test-d6143871-c47b-4e6b-a9f6-9e55a78cde44: the server could not find the requested resource (get pods dns-test-d6143871-c47b-4e6b-a9f6-9e55a78cde44) +Sep 7 09:19:59.194: INFO: Unable to read jessie_udp@dns-test-service from pod dns-5590/dns-test-d6143871-c47b-4e6b-a9f6-9e55a78cde44: the server could not find the requested resource (get pods dns-test-d6143871-c47b-4e6b-a9f6-9e55a78cde44) +Sep 7 09:19:59.197: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-5590/dns-test-d6143871-c47b-4e6b-a9f6-9e55a78cde44: the server could not find the requested resource (get pods dns-test-d6143871-c47b-4e6b-a9f6-9e55a78cde44) +Sep 7 09:19:59.200: INFO: Unable to read jessie_udp@dns-test-service.dns-5590 from pod dns-5590/dns-test-d6143871-c47b-4e6b-a9f6-9e55a78cde44: the server could not find the requested resource (get pods dns-test-d6143871-c47b-4e6b-a9f6-9e55a78cde44) +Sep 7 09:19:59.203: INFO: Unable to read jessie_tcp@dns-test-service.dns-5590 from pod dns-5590/dns-test-d6143871-c47b-4e6b-a9f6-9e55a78cde44: the server could not find the requested resource (get pods dns-test-d6143871-c47b-4e6b-a9f6-9e55a78cde44) +Sep 7 09:19:59.206: INFO: Unable to read jessie_udp@dns-test-service.dns-5590.svc from pod dns-5590/dns-test-d6143871-c47b-4e6b-a9f6-9e55a78cde44: the server could not find the requested resource (get pods dns-test-d6143871-c47b-4e6b-a9f6-9e55a78cde44) +Sep 7 09:19:59.209: INFO: Unable to read jessie_tcp@dns-test-service.dns-5590.svc from pod dns-5590/dns-test-d6143871-c47b-4e6b-a9f6-9e55a78cde44: the server could not find the requested resource (get pods dns-test-d6143871-c47b-4e6b-a9f6-9e55a78cde44) +Sep 7 09:19:59.212: INFO: Unable to read 
jessie_udp@_http._tcp.dns-test-service.dns-5590.svc from pod dns-5590/dns-test-d6143871-c47b-4e6b-a9f6-9e55a78cde44: the server could not find the requested resource (get pods dns-test-d6143871-c47b-4e6b-a9f6-9e55a78cde44) +Sep 7 09:19:59.215: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-5590.svc from pod dns-5590/dns-test-d6143871-c47b-4e6b-a9f6-9e55a78cde44: the server could not find the requested resource (get pods dns-test-d6143871-c47b-4e6b-a9f6-9e55a78cde44) +Sep 7 09:19:59.224: INFO: Lookups using dns-5590/dns-test-d6143871-c47b-4e6b-a9f6-9e55a78cde44 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-5590 wheezy_tcp@dns-test-service.dns-5590 wheezy_udp@dns-test-service.dns-5590.svc wheezy_tcp@dns-test-service.dns-5590.svc wheezy_udp@_http._tcp.dns-test-service.dns-5590.svc wheezy_tcp@_http._tcp.dns-test-service.dns-5590.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-5590 jessie_tcp@dns-test-service.dns-5590 jessie_udp@dns-test-service.dns-5590.svc jessie_tcp@dns-test-service.dns-5590.svc jessie_udp@_http._tcp.dns-test-service.dns-5590.svc jessie_tcp@_http._tcp.dns-test-service.dns-5590.svc] + +Sep 7 09:20:04.163: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-5590/dns-test-d6143871-c47b-4e6b-a9f6-9e55a78cde44: the server could not find the requested resource (get pods dns-test-d6143871-c47b-4e6b-a9f6-9e55a78cde44) +Sep 7 09:20:04.166: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-5590/dns-test-d6143871-c47b-4e6b-a9f6-9e55a78cde44: the server could not find the requested resource (get pods dns-test-d6143871-c47b-4e6b-a9f6-9e55a78cde44) +Sep 7 09:20:04.168: INFO: Unable to read wheezy_udp@dns-test-service.dns-5590 from pod dns-5590/dns-test-d6143871-c47b-4e6b-a9f6-9e55a78cde44: the server could not find the requested resource (get pods dns-test-d6143871-c47b-4e6b-a9f6-9e55a78cde44) +Sep 7 09:20:04.171: INFO: Unable 
to read wheezy_tcp@dns-test-service.dns-5590 from pod dns-5590/dns-test-d6143871-c47b-4e6b-a9f6-9e55a78cde44: the server could not find the requested resource (get pods dns-test-d6143871-c47b-4e6b-a9f6-9e55a78cde44) +Sep 7 09:20:04.173: INFO: Unable to read wheezy_udp@dns-test-service.dns-5590.svc from pod dns-5590/dns-test-d6143871-c47b-4e6b-a9f6-9e55a78cde44: the server could not find the requested resource (get pods dns-test-d6143871-c47b-4e6b-a9f6-9e55a78cde44) +Sep 7 09:20:04.175: INFO: Unable to read wheezy_tcp@dns-test-service.dns-5590.svc from pod dns-5590/dns-test-d6143871-c47b-4e6b-a9f6-9e55a78cde44: the server could not find the requested resource (get pods dns-test-d6143871-c47b-4e6b-a9f6-9e55a78cde44) +Sep 7 09:20:04.178: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-5590.svc from pod dns-5590/dns-test-d6143871-c47b-4e6b-a9f6-9e55a78cde44: the server could not find the requested resource (get pods dns-test-d6143871-c47b-4e6b-a9f6-9e55a78cde44) +Sep 7 09:20:04.180: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-5590.svc from pod dns-5590/dns-test-d6143871-c47b-4e6b-a9f6-9e55a78cde44: the server could not find the requested resource (get pods dns-test-d6143871-c47b-4e6b-a9f6-9e55a78cde44) +Sep 7 09:20:04.192: INFO: Unable to read jessie_udp@dns-test-service from pod dns-5590/dns-test-d6143871-c47b-4e6b-a9f6-9e55a78cde44: the server could not find the requested resource (get pods dns-test-d6143871-c47b-4e6b-a9f6-9e55a78cde44) +Sep 7 09:20:04.194: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-5590/dns-test-d6143871-c47b-4e6b-a9f6-9e55a78cde44: the server could not find the requested resource (get pods dns-test-d6143871-c47b-4e6b-a9f6-9e55a78cde44) +Sep 7 09:20:04.196: INFO: Unable to read jessie_udp@dns-test-service.dns-5590 from pod dns-5590/dns-test-d6143871-c47b-4e6b-a9f6-9e55a78cde44: the server could not find the requested resource (get pods dns-test-d6143871-c47b-4e6b-a9f6-9e55a78cde44) +Sep 7 
09:20:04.198: INFO: Unable to read jessie_tcp@dns-test-service.dns-5590 from pod dns-5590/dns-test-d6143871-c47b-4e6b-a9f6-9e55a78cde44: the server could not find the requested resource (get pods dns-test-d6143871-c47b-4e6b-a9f6-9e55a78cde44) +Sep 7 09:20:04.202: INFO: Unable to read jessie_udp@dns-test-service.dns-5590.svc from pod dns-5590/dns-test-d6143871-c47b-4e6b-a9f6-9e55a78cde44: the server could not find the requested resource (get pods dns-test-d6143871-c47b-4e6b-a9f6-9e55a78cde44) +Sep 7 09:20:04.213: INFO: Unable to read jessie_tcp@dns-test-service.dns-5590.svc from pod dns-5590/dns-test-d6143871-c47b-4e6b-a9f6-9e55a78cde44: the server could not find the requested resource (get pods dns-test-d6143871-c47b-4e6b-a9f6-9e55a78cde44) +Sep 7 09:20:04.216: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-5590.svc from pod dns-5590/dns-test-d6143871-c47b-4e6b-a9f6-9e55a78cde44: the server could not find the requested resource (get pods dns-test-d6143871-c47b-4e6b-a9f6-9e55a78cde44) +Sep 7 09:20:04.220: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-5590.svc from pod dns-5590/dns-test-d6143871-c47b-4e6b-a9f6-9e55a78cde44: the server could not find the requested resource (get pods dns-test-d6143871-c47b-4e6b-a9f6-9e55a78cde44) +Sep 7 09:20:04.242: INFO: Lookups using dns-5590/dns-test-d6143871-c47b-4e6b-a9f6-9e55a78cde44 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-5590 wheezy_tcp@dns-test-service.dns-5590 wheezy_udp@dns-test-service.dns-5590.svc wheezy_tcp@dns-test-service.dns-5590.svc wheezy_udp@_http._tcp.dns-test-service.dns-5590.svc wheezy_tcp@_http._tcp.dns-test-service.dns-5590.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-5590 jessie_tcp@dns-test-service.dns-5590 jessie_udp@dns-test-service.dns-5590.svc jessie_tcp@dns-test-service.dns-5590.svc jessie_udp@_http._tcp.dns-test-service.dns-5590.svc 
jessie_tcp@_http._tcp.dns-test-service.dns-5590.svc] + +Sep 7 09:20:09.187: INFO: Unable to read jessie_udp@dns-test-service from pod dns-5590/dns-test-d6143871-c47b-4e6b-a9f6-9e55a78cde44: the server could not find the requested resource (get pods dns-test-d6143871-c47b-4e6b-a9f6-9e55a78cde44) +Sep 7 09:20:09.191: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-5590/dns-test-d6143871-c47b-4e6b-a9f6-9e55a78cde44: the server could not find the requested resource (get pods dns-test-d6143871-c47b-4e6b-a9f6-9e55a78cde44) +Sep 7 09:20:09.193: INFO: Unable to read jessie_udp@dns-test-service.dns-5590 from pod dns-5590/dns-test-d6143871-c47b-4e6b-a9f6-9e55a78cde44: the server could not find the requested resource (get pods dns-test-d6143871-c47b-4e6b-a9f6-9e55a78cde44) +Sep 7 09:20:09.195: INFO: Unable to read jessie_tcp@dns-test-service.dns-5590 from pod dns-5590/dns-test-d6143871-c47b-4e6b-a9f6-9e55a78cde44: the server could not find the requested resource (get pods dns-test-d6143871-c47b-4e6b-a9f6-9e55a78cde44) +Sep 7 09:20:09.197: INFO: Unable to read jessie_udp@dns-test-service.dns-5590.svc from pod dns-5590/dns-test-d6143871-c47b-4e6b-a9f6-9e55a78cde44: the server could not find the requested resource (get pods dns-test-d6143871-c47b-4e6b-a9f6-9e55a78cde44) +Sep 7 09:20:09.199: INFO: Unable to read jessie_tcp@dns-test-service.dns-5590.svc from pod dns-5590/dns-test-d6143871-c47b-4e6b-a9f6-9e55a78cde44: the server could not find the requested resource (get pods dns-test-d6143871-c47b-4e6b-a9f6-9e55a78cde44) +Sep 7 09:20:09.202: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-5590.svc from pod dns-5590/dns-test-d6143871-c47b-4e6b-a9f6-9e55a78cde44: the server could not find the requested resource (get pods dns-test-d6143871-c47b-4e6b-a9f6-9e55a78cde44) +Sep 7 09:20:09.204: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-5590.svc from pod dns-5590/dns-test-d6143871-c47b-4e6b-a9f6-9e55a78cde44: the server could not find the 
requested resource (get pods dns-test-d6143871-c47b-4e6b-a9f6-9e55a78cde44) +Sep 7 09:20:09.212: INFO: Lookups using dns-5590/dns-test-d6143871-c47b-4e6b-a9f6-9e55a78cde44 failed for: [jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-5590 jessie_tcp@dns-test-service.dns-5590 jessie_udp@dns-test-service.dns-5590.svc jessie_tcp@dns-test-service.dns-5590.svc jessie_udp@_http._tcp.dns-test-service.dns-5590.svc jessie_tcp@_http._tcp.dns-test-service.dns-5590.svc] + +Sep 7 09:20:14.269: INFO: DNS probes using dns-5590/dns-test-d6143871-c47b-4e6b-a9f6-9e55a78cde44 succeeded + +STEP: deleting the pod +STEP: deleting the test service +STEP: deleting the test headless service +[AfterEach] [sig-network] DNS + test/e2e/framework/framework.go:188 +Sep 7 09:20:14.521: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "dns-5590" for this suite. + +• [SLOW TEST:39.571 seconds] +[sig-network] DNS +test/e2e/network/common/framework.go:23 + should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance] + test/e2e/framework/framework.go:652 +------------------------------ +{"msg":"PASSED [sig-network] DNS should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]","total":356,"completed":346,"skipped":6391,"failed":0} +SSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] + Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance] + test/e2e/framework/framework.go:652 +[BeforeEach] [sig-apps] StatefulSet + test/e2e/framework/framework.go:187 +STEP: Creating a kubernetes client +Sep 7 09:20:14.542: INFO: >>> kubeConfig: /tmp/kubeconfig-1156948534 +STEP: Building a namespace api object, basename statefulset +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[BeforeEach] 
[sig-apps] StatefulSet + test/e2e/apps/statefulset.go:96 +[BeforeEach] Basic StatefulSet functionality [StatefulSetBasic] + test/e2e/apps/statefulset.go:111 +STEP: Creating service test in namespace statefulset-7818 +[It] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance] + test/e2e/framework/framework.go:652 +STEP: Creating stateful set ss in namespace statefulset-7818 +STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-7818 +Sep 7 09:20:14.606: INFO: Found 0 stateful pods, waiting for 1 +Sep 7 09:20:24.617: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true +STEP: Confirming that stateful set scale up will not halt with unhealthy stateful pod +Sep 7 09:20:24.620: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1156948534 --namespace=statefulset-7818 exec ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' +Sep 7 09:20:24.798: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n" +Sep 7 09:20:24.798: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" +Sep 7 09:20:24.798: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' + +Sep 7 09:20:24.802: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true +Sep 7 09:20:34.816: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false +Sep 7 09:20:34.816: INFO: Waiting for statefulset status.replicas updated to 0 +Sep 7 09:20:34.835: INFO: POD NODE PHASE GRACE CONDITIONS +Sep 7 09:20:34.835: INFO: ss-0 172.31.51.96 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-09-07 09:20:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-09-07 09:20:24 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 
00:00:00 +0000 UTC 2022-09-07 09:20:24 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-09-07 09:20:14 +0000 UTC }] +Sep 7 09:20:34.835: INFO: +Sep 7 09:20:34.835: INFO: StatefulSet ss has not reached scale 3, at 1 +Sep 7 09:20:35.846: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.991104099s +Sep 7 09:20:36.853: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.982269136s +Sep 7 09:20:37.862: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.975032253s +Sep 7 09:20:38.871: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.966027971s +Sep 7 09:20:39.882: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.956416618s +Sep 7 09:20:40.894: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.945435159s +Sep 7 09:20:41.904: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.934115441s +Sep 7 09:20:42.914: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.923015409s +Sep 7 09:20:43.924: INFO: Verifying statefulset ss doesn't scale past 3 for another 914.056839ms +STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-7818 +Sep 7 09:20:44.935: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1156948534 --namespace=statefulset-7818 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' +Sep 7 09:20:45.148: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\n" +Sep 7 09:20:45.148: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" +Sep 7 09:20:45.148: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' + +Sep 7 09:20:45.148: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1156948534 --namespace=statefulset-7818 exec 
ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' +Sep 7 09:20:45.329: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\n" +Sep 7 09:20:45.329: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" +Sep 7 09:20:45.329: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' + +Sep 7 09:20:45.329: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1156948534 --namespace=statefulset-7818 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' +Sep 7 09:20:45.628: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\n" +Sep 7 09:20:45.628: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" +Sep 7 09:20:45.628: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-2: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' + +Sep 7 09:20:45.633: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=false +Sep 7 09:20:55.656: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true +Sep 7 09:20:55.656: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true +Sep 7 09:20:55.656: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true +STEP: Scale down will not halt with unhealthy stateful pod +Sep 7 09:20:55.660: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1156948534 --namespace=statefulset-7818 exec ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' +Sep 7 09:20:55.844: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n" +Sep 7 09:20:55.844: INFO: stdout: 
"'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" +Sep 7 09:20:55.844: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' + +Sep 7 09:20:55.844: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1156948534 --namespace=statefulset-7818 exec ss-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' +Sep 7 09:20:56.085: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n" +Sep 7 09:20:56.085: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" +Sep 7 09:20:56.085: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' + +Sep 7 09:20:56.085: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1156948534 --namespace=statefulset-7818 exec ss-2 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' +Sep 7 09:20:56.313: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n" +Sep 7 09:20:56.313: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" +Sep 7 09:20:56.313: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-2: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' + +Sep 7 09:20:56.313: INFO: Waiting for statefulset status.replicas updated to 0 +Sep 7 09:20:56.318: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 1 +Sep 7 09:21:06.330: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false +Sep 7 09:21:06.330: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false +Sep 7 09:21:06.330: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false +Sep 7 09:21:06.356: INFO: POD NODE PHASE GRACE CONDITIONS +Sep 7 09:21:06.356: INFO: ss-0 172.31.51.96 Running 
[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-09-07 09:20:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-09-07 09:20:55 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-09-07 09:20:55 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-09-07 09:20:14 +0000 UTC }] +Sep 7 09:21:06.356: INFO: ss-1 172.31.51.97 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-09-07 09:20:34 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-09-07 09:20:56 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-09-07 09:20:56 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-09-07 09:20:34 +0000 UTC }] +Sep 7 09:21:06.356: INFO: ss-2 172.31.51.96 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-09-07 09:20:34 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-09-07 09:20:57 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-09-07 09:20:57 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-09-07 09:20:34 +0000 UTC }] +Sep 7 09:21:06.356: INFO: +Sep 7 09:21:06.356: INFO: StatefulSet ss has not reached scale 0, at 3 +Sep 7 09:21:07.380: INFO: POD NODE PHASE GRACE CONDITIONS +Sep 7 09:21:07.381: INFO: ss-0 172.31.51.96 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-09-07 09:20:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-09-07 09:20:55 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-09-07 09:20:55 +0000 UTC ContainersNotReady containers with 
unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-09-07 09:20:14 +0000 UTC }] +Sep 7 09:21:07.381: INFO: ss-2 172.31.51.96 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-09-07 09:20:34 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-09-07 09:20:57 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-09-07 09:20:57 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-09-07 09:20:34 +0000 UTC }] +Sep 7 09:21:07.381: INFO: +Sep 7 09:21:07.381: INFO: StatefulSet ss has not reached scale 0, at 2 +Sep 7 09:21:08.397: INFO: POD NODE PHASE GRACE CONDITIONS +Sep 7 09:21:08.397: INFO: ss-0 172.31.51.96 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-09-07 09:20:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-09-07 09:20:55 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-09-07 09:20:55 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-09-07 09:20:14 +0000 UTC }] +Sep 7 09:21:08.398: INFO: ss-2 172.31.51.96 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-09-07 09:20:34 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-09-07 09:20:57 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-09-07 09:20:57 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-09-07 09:20:34 +0000 UTC }] +Sep 7 09:21:08.398: INFO: +Sep 7 09:21:08.398: INFO: StatefulSet ss has not reached scale 0, at 2 +Sep 7 09:21:09.407: INFO: Verifying statefulset ss doesn't scale past 0 for another 6.945088701s +Sep 7 09:21:10.418: 
INFO: Verifying statefulset ss doesn't scale past 0 for another 5.934771267s +Sep 7 09:21:11.425: INFO: Verifying statefulset ss doesn't scale past 0 for another 4.924729631s +Sep 7 09:21:12.433: INFO: Verifying statefulset ss doesn't scale past 0 for another 3.917768468s +Sep 7 09:21:13.439: INFO: Verifying statefulset ss doesn't scale past 0 for another 2.909023947s +Sep 7 09:21:14.448: INFO: Verifying statefulset ss doesn't scale past 0 for another 1.902958996s +Sep 7 09:21:15.459: INFO: Verifying statefulset ss doesn't scale past 0 for another 894.029157ms +STEP: Scaling down stateful set ss to 0 replicas and waiting until none of pods will run in namespacestatefulset-7818 +Sep 7 09:21:16.465: INFO: Scaling statefulset ss to 0 +Sep 7 09:21:16.477: INFO: Waiting for statefulset status.replicas updated to 0 +[AfterEach] Basic StatefulSet functionality [StatefulSetBasic] + test/e2e/apps/statefulset.go:122 +Sep 7 09:21:16.480: INFO: Deleting all statefulset in ns statefulset-7818 +Sep 7 09:21:16.483: INFO: Scaling statefulset ss to 0 +Sep 7 09:21:16.496: INFO: Waiting for statefulset status.replicas updated to 0 +Sep 7 09:21:16.499: INFO: Deleting statefulset ss +[AfterEach] [sig-apps] StatefulSet + test/e2e/framework/framework.go:188 +Sep 7 09:21:16.513: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "statefulset-7818" for this suite. 
+ +• [SLOW TEST:61.984 seconds] +[sig-apps] StatefulSet +test/e2e/apps/framework.go:23 + Basic StatefulSet functionality [StatefulSetBasic] + test/e2e/apps/statefulset.go:101 + Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance] + test/e2e/framework/framework.go:652 +------------------------------ +{"msg":"PASSED [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]","total":356,"completed":347,"skipped":6413,"failed":0} +SSSSSSSSSS +------------------------------ +[sig-api-machinery] Garbage collector + should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] + test/e2e/framework/framework.go:652 +[BeforeEach] [sig-api-machinery] Garbage collector + test/e2e/framework/framework.go:187 +STEP: Creating a kubernetes client +Sep 7 09:21:16.526: INFO: >>> kubeConfig: /tmp/kubeconfig-1156948534 +STEP: Building a namespace api object, basename gc +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[It] should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] + test/e2e/framework/framework.go:652 +STEP: create the rc1 +STEP: create the rc2 +STEP: set half of pods created by rc simpletest-rc-to-be-deleted to have rc simpletest-rc-to-stay as owner as well +STEP: delete the rc simpletest-rc-to-be-deleted +STEP: wait for the rc to be deleted +Sep 7 09:21:31.081: INFO: 71 pods remaining +Sep 7 09:21:31.081: INFO: 71 pods has nil DeletionTimestamp +Sep 7 09:21:31.081: INFO: +STEP: Gathering metrics +Sep 7 09:21:35.974: INFO: For apiserver_request_total: +For apiserver_request_latency_seconds: +For apiserver_init_events_total: +For garbage_collector_attempt_to_delete_queue_latency: +For 
garbage_collector_attempt_to_delete_work_duration: +For garbage_collector_attempt_to_orphan_queue_latency: +For garbage_collector_attempt_to_orphan_work_duration: +For garbage_collector_dirty_processing_latency_microseconds: +For garbage_collector_event_processing_latency_microseconds: +For garbage_collector_graph_changes_queue_latency: +For garbage_collector_graph_changes_work_duration: +For garbage_collector_orphan_processing_latency_microseconds: +For namespace_queue_latency: +For namespace_queue_latency_sum: +For namespace_queue_latency_count: +For namespace_retries: +For namespace_work_duration: +For namespace_work_duration_sum: +For namespace_work_duration_count: +For function_duration_seconds: +For errors_total: +For evicted_pods_total: + +Sep 7 09:21:35.974: INFO: Deleting pod "simpletest-rc-to-be-deleted-276jt" in namespace "gc-9398" +W0907 09:21:35.974060 19 metrics_grabber.go:151] Can't find kube-controller-manager pod. Grabbing metrics from kube-controller-manager is disabled. 
+Sep 7 09:21:36.011: INFO: Deleting pod "simpletest-rc-to-be-deleted-2kwb9" in namespace "gc-9398" +Sep 7 09:21:36.069: INFO: Deleting pod "simpletest-rc-to-be-deleted-2l6jm" in namespace "gc-9398" +Sep 7 09:21:36.154: INFO: Deleting pod "simpletest-rc-to-be-deleted-2r6sx" in namespace "gc-9398" +Sep 7 09:21:36.170: INFO: Deleting pod "simpletest-rc-to-be-deleted-2sz9v" in namespace "gc-9398" +Sep 7 09:21:36.185: INFO: Deleting pod "simpletest-rc-to-be-deleted-49jsb" in namespace "gc-9398" +Sep 7 09:21:36.207: INFO: Deleting pod "simpletest-rc-to-be-deleted-4mbdl" in namespace "gc-9398" +Sep 7 09:21:36.220: INFO: Deleting pod "simpletest-rc-to-be-deleted-4zzsx" in namespace "gc-9398" +Sep 7 09:21:36.242: INFO: Deleting pod "simpletest-rc-to-be-deleted-56msq" in namespace "gc-9398" +Sep 7 09:21:36.257: INFO: Deleting pod "simpletest-rc-to-be-deleted-5cmkc" in namespace "gc-9398" +Sep 7 09:21:36.268: INFO: Deleting pod "simpletest-rc-to-be-deleted-5kd7t" in namespace "gc-9398" +Sep 7 09:21:36.282: INFO: Deleting pod "simpletest-rc-to-be-deleted-5slpd" in namespace "gc-9398" +Sep 7 09:21:36.293: INFO: Deleting pod "simpletest-rc-to-be-deleted-5wwjf" in namespace "gc-9398" +Sep 7 09:21:36.329: INFO: Deleting pod "simpletest-rc-to-be-deleted-62k7b" in namespace "gc-9398" +Sep 7 09:21:36.410: INFO: Deleting pod "simpletest-rc-to-be-deleted-6ck8s" in namespace "gc-9398" +Sep 7 09:21:36.429: INFO: Deleting pod "simpletest-rc-to-be-deleted-6ff4r" in namespace "gc-9398" +Sep 7 09:21:36.502: INFO: Deleting pod "simpletest-rc-to-be-deleted-6g6sx" in namespace "gc-9398" +Sep 7 09:21:36.518: INFO: Deleting pod "simpletest-rc-to-be-deleted-7b4dh" in namespace "gc-9398" +Sep 7 09:21:36.546: INFO: Deleting pod "simpletest-rc-to-be-deleted-7csxq" in namespace "gc-9398" +Sep 7 09:21:36.645: INFO: Deleting pod "simpletest-rc-to-be-deleted-7m6zj" in namespace "gc-9398" +Sep 7 09:21:36.672: INFO: Deleting pod "simpletest-rc-to-be-deleted-7vjwp" in namespace "gc-9398" +Sep 7 
09:21:36.696: INFO: Deleting pod "simpletest-rc-to-be-deleted-7w669" in namespace "gc-9398" +Sep 7 09:21:36.726: INFO: Deleting pod "simpletest-rc-to-be-deleted-825jk" in namespace "gc-9398" +Sep 7 09:21:36.739: INFO: Deleting pod "simpletest-rc-to-be-deleted-8j58k" in namespace "gc-9398" +Sep 7 09:21:36.758: INFO: Deleting pod "simpletest-rc-to-be-deleted-965w9" in namespace "gc-9398" +Sep 7 09:21:36.773: INFO: Deleting pod "simpletest-rc-to-be-deleted-96qcc" in namespace "gc-9398" +Sep 7 09:21:36.793: INFO: Deleting pod "simpletest-rc-to-be-deleted-978q6" in namespace "gc-9398" +Sep 7 09:21:36.806: INFO: Deleting pod "simpletest-rc-to-be-deleted-9lkzk" in namespace "gc-9398" +Sep 7 09:21:36.829: INFO: Deleting pod "simpletest-rc-to-be-deleted-bglt9" in namespace "gc-9398" +Sep 7 09:21:36.851: INFO: Deleting pod "simpletest-rc-to-be-deleted-bkwwr" in namespace "gc-9398" +Sep 7 09:21:36.863: INFO: Deleting pod "simpletest-rc-to-be-deleted-bm28s" in namespace "gc-9398" +Sep 7 09:21:36.878: INFO: Deleting pod "simpletest-rc-to-be-deleted-brv6h" in namespace "gc-9398" +Sep 7 09:21:36.888: INFO: Deleting pod "simpletest-rc-to-be-deleted-bw5ng" in namespace "gc-9398" +Sep 7 09:21:36.915: INFO: Deleting pod "simpletest-rc-to-be-deleted-c4z2v" in namespace "gc-9398" +Sep 7 09:21:36.926: INFO: Deleting pod "simpletest-rc-to-be-deleted-cttw7" in namespace "gc-9398" +Sep 7 09:21:36.948: INFO: Deleting pod "simpletest-rc-to-be-deleted-dchkb" in namespace "gc-9398" +Sep 7 09:21:36.978: INFO: Deleting pod "simpletest-rc-to-be-deleted-dkfh8" in namespace "gc-9398" +Sep 7 09:21:36.990: INFO: Deleting pod "simpletest-rc-to-be-deleted-dvscr" in namespace "gc-9398" +Sep 7 09:21:37.003: INFO: Deleting pod "simpletest-rc-to-be-deleted-dzwtw" in namespace "gc-9398" +Sep 7 09:21:37.022: INFO: Deleting pod "simpletest-rc-to-be-deleted-fd557" in namespace "gc-9398" +Sep 7 09:21:37.040: INFO: Deleting pod "simpletest-rc-to-be-deleted-fkjmj" in namespace "gc-9398" +Sep 7 09:21:37.047: INFO: 
Deleting pod "simpletest-rc-to-be-deleted-g4cqs" in namespace "gc-9398" +Sep 7 09:21:37.056: INFO: Deleting pod "simpletest-rc-to-be-deleted-g4zqj" in namespace "gc-9398" +Sep 7 09:21:37.069: INFO: Deleting pod "simpletest-rc-to-be-deleted-g8s6v" in namespace "gc-9398" +Sep 7 09:21:37.077: INFO: Deleting pod "simpletest-rc-to-be-deleted-gpxrv" in namespace "gc-9398" +Sep 7 09:21:37.088: INFO: Deleting pod "simpletest-rc-to-be-deleted-gq6bn" in namespace "gc-9398" +Sep 7 09:21:37.103: INFO: Deleting pod "simpletest-rc-to-be-deleted-h4ndd" in namespace "gc-9398" +Sep 7 09:21:37.114: INFO: Deleting pod "simpletest-rc-to-be-deleted-hb4sw" in namespace "gc-9398" +Sep 7 09:21:37.128: INFO: Deleting pod "simpletest-rc-to-be-deleted-hdqgk" in namespace "gc-9398" +Sep 7 09:21:37.138: INFO: Deleting pod "simpletest-rc-to-be-deleted-hhmjm" in namespace "gc-9398" +[AfterEach] [sig-api-machinery] Garbage collector + test/e2e/framework/framework.go:188 +Sep 7 09:21:37.147: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "gc-9398" for this suite. 
+ +• [SLOW TEST:20.632 seconds] +[sig-api-machinery] Garbage collector +test/e2e/apimachinery/framework.go:23 + should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] + test/e2e/framework/framework.go:652 +------------------------------ +{"msg":"PASSED [sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]","total":356,"completed":348,"skipped":6423,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-storage] EmptyDir volumes + should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/framework/framework.go:652 +[BeforeEach] [sig-storage] EmptyDir volumes + test/e2e/framework/framework.go:187 +STEP: Creating a kubernetes client +Sep 7 09:21:37.158: INFO: >>> kubeConfig: /tmp/kubeconfig-1156948534 +STEP: Building a namespace api object, basename emptydir +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[It] should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/framework/framework.go:652 +STEP: Creating a pod to test emptydir 0644 on tmpfs +Sep 7 09:21:37.204: INFO: Waiting up to 5m0s for pod "pod-b6a8db29-220a-4a38-b63c-8cbbf567d224" in namespace "emptydir-323" to be "Succeeded or Failed" +Sep 7 09:21:37.230: INFO: Pod "pod-b6a8db29-220a-4a38-b63c-8cbbf567d224": Phase="Pending", Reason="", readiness=false. Elapsed: 26.146938ms +Sep 7 09:21:39.250: INFO: Pod "pod-b6a8db29-220a-4a38-b63c-8cbbf567d224": Phase="Pending", Reason="", readiness=false. Elapsed: 2.046322461s +Sep 7 09:21:41.255: INFO: Pod "pod-b6a8db29-220a-4a38-b63c-8cbbf567d224": Phase="Pending", Reason="", readiness=false. 
Elapsed: 4.051310384s +Sep 7 09:21:43.284: INFO: Pod "pod-b6a8db29-220a-4a38-b63c-8cbbf567d224": Phase="Pending", Reason="", readiness=false. Elapsed: 6.080389272s +Sep 7 09:21:45.349: INFO: Pod "pod-b6a8db29-220a-4a38-b63c-8cbbf567d224": Phase="Pending", Reason="", readiness=false. Elapsed: 8.145367103s +Sep 7 09:21:47.400: INFO: Pod "pod-b6a8db29-220a-4a38-b63c-8cbbf567d224": Phase="Pending", Reason="", readiness=false. Elapsed: 10.196412548s +Sep 7 09:21:49.434: INFO: Pod "pod-b6a8db29-220a-4a38-b63c-8cbbf567d224": Phase="Pending", Reason="", readiness=false. Elapsed: 12.230430562s +Sep 7 09:21:51.456: INFO: Pod "pod-b6a8db29-220a-4a38-b63c-8cbbf567d224": Phase="Pending", Reason="", readiness=false. Elapsed: 14.25238266s +Sep 7 09:21:53.467: INFO: Pod "pod-b6a8db29-220a-4a38-b63c-8cbbf567d224": Phase="Pending", Reason="", readiness=false. Elapsed: 16.262777377s +Sep 7 09:21:55.473: INFO: Pod "pod-b6a8db29-220a-4a38-b63c-8cbbf567d224": Phase="Pending", Reason="", readiness=false. Elapsed: 18.269423739s +Sep 7 09:21:57.522: INFO: Pod "pod-b6a8db29-220a-4a38-b63c-8cbbf567d224": Phase="Pending", Reason="", readiness=false. Elapsed: 20.318349931s +Sep 7 09:21:59.553: INFO: Pod "pod-b6a8db29-220a-4a38-b63c-8cbbf567d224": Phase="Pending", Reason="", readiness=false. Elapsed: 22.3485888s +Sep 7 09:22:01.589: INFO: Pod "pod-b6a8db29-220a-4a38-b63c-8cbbf567d224": Phase="Pending", Reason="", readiness=false. Elapsed: 24.38542022s +Sep 7 09:22:03.762: INFO: Pod "pod-b6a8db29-220a-4a38-b63c-8cbbf567d224": Phase="Pending", Reason="", readiness=false. Elapsed: 26.558381143s +Sep 7 09:22:05.966: INFO: Pod "pod-b6a8db29-220a-4a38-b63c-8cbbf567d224": Phase="Pending", Reason="", readiness=false. Elapsed: 28.762322046s +Sep 7 09:22:07.988: INFO: Pod "pod-b6a8db29-220a-4a38-b63c-8cbbf567d224": Phase="Running", Reason="", readiness=true. Elapsed: 30.784368912s +Sep 7 09:22:10.025: INFO: Pod "pod-b6a8db29-220a-4a38-b63c-8cbbf567d224": Phase="Running", Reason="", readiness=false. 
Elapsed: 32.82078693s +Sep 7 09:22:12.080: INFO: Pod "pod-b6a8db29-220a-4a38-b63c-8cbbf567d224": Phase="Running", Reason="", readiness=false. Elapsed: 34.876351836s +Sep 7 09:22:14.107: INFO: Pod "pod-b6a8db29-220a-4a38-b63c-8cbbf567d224": Phase="Running", Reason="", readiness=false. Elapsed: 36.903341163s +Sep 7 09:22:16.130: INFO: Pod "pod-b6a8db29-220a-4a38-b63c-8cbbf567d224": Phase="Running", Reason="", readiness=false. Elapsed: 38.925969793s +Sep 7 09:22:18.149: INFO: Pod "pod-b6a8db29-220a-4a38-b63c-8cbbf567d224": Phase="Running", Reason="", readiness=false. Elapsed: 40.944763939s +Sep 7 09:22:20.175: INFO: Pod "pod-b6a8db29-220a-4a38-b63c-8cbbf567d224": Phase="Running", Reason="", readiness=false. Elapsed: 42.970711498s +Sep 7 09:22:22.208: INFO: Pod "pod-b6a8db29-220a-4a38-b63c-8cbbf567d224": Phase="Running", Reason="", readiness=false. Elapsed: 45.00383272s +Sep 7 09:22:24.219: INFO: Pod "pod-b6a8db29-220a-4a38-b63c-8cbbf567d224": Phase="Succeeded", Reason="", readiness=false. Elapsed: 47.015541978s +STEP: Saw pod success +Sep 7 09:22:24.220: INFO: Pod "pod-b6a8db29-220a-4a38-b63c-8cbbf567d224" satisfied condition "Succeeded or Failed" +Sep 7 09:22:24.222: INFO: Trying to get logs from node 172.31.51.96 pod pod-b6a8db29-220a-4a38-b63c-8cbbf567d224 container test-container: +STEP: delete the pod +Sep 7 09:22:24.263: INFO: Waiting for pod pod-b6a8db29-220a-4a38-b63c-8cbbf567d224 to disappear +Sep 7 09:22:24.266: INFO: Pod pod-b6a8db29-220a-4a38-b63c-8cbbf567d224 no longer exists +[AfterEach] [sig-storage] EmptyDir volumes + test/e2e/framework/framework.go:188 +Sep 7 09:22:24.266: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "emptydir-323" for this suite. 
+ +• [SLOW TEST:47.115 seconds] +[sig-storage] EmptyDir volumes +test/e2e/common/storage/framework.go:23 + should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/framework/framework.go:652 +------------------------------ +{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":356,"completed":349,"skipped":6452,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-api-machinery] Garbage collector + should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] + test/e2e/framework/framework.go:652 +[BeforeEach] [sig-api-machinery] Garbage collector + test/e2e/framework/framework.go:187 +STEP: Creating a kubernetes client +Sep 7 09:22:24.274: INFO: >>> kubeConfig: /tmp/kubeconfig-1156948534 +STEP: Building a namespace api object, basename gc +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[It] should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] + test/e2e/framework/framework.go:652 +STEP: create the rc +STEP: delete the rc +STEP: wait for the rc to be deleted +Sep 7 09:22:31.296: INFO: 80 pods remaining +Sep 7 09:22:31.296: INFO: 80 pods has nil DeletionTimestamp +Sep 7 09:22:31.296: INFO: +Sep 7 09:22:31.635: INFO: 75 pods remaining +Sep 7 09:22:31.635: INFO: 75 pods has nil DeletionTimestamp +Sep 7 09:22:31.635: INFO: +Sep 7 09:22:32.816: INFO: 59 pods remaining +Sep 7 09:22:32.816: INFO: 59 pods has nil DeletionTimestamp +Sep 7 09:22:32.816: INFO: +Sep 7 09:22:33.894: INFO: 43 pods remaining +Sep 7 09:22:33.894: INFO: 40 pods has nil DeletionTimestamp +Sep 7 09:22:33.894: INFO: +Sep 7 09:22:34.933: INFO: 32 pods remaining +Sep 7 09:22:34.933: INFO: 30 pods has nil DeletionTimestamp +Sep 7 09:22:34.933: INFO: +Sep 7 
09:22:35.628: INFO: 19 pods remaining +Sep 7 09:22:35.628: INFO: 17 pods has nil DeletionTimestamp +Sep 7 09:22:35.628: INFO: +Sep 7 09:22:36.625: INFO: 4 pods remaining +Sep 7 09:22:36.625: INFO: 1 pods has nil DeletionTimestamp +Sep 7 09:22:36.625: INFO: +STEP: Gathering metrics +Sep 7 09:22:37.554: INFO: For apiserver_request_total: +For apiserver_request_latency_seconds: +For apiserver_init_events_total: +For garbage_collector_attempt_to_delete_queue_latency: +For garbage_collector_attempt_to_delete_work_duration: +For garbage_collector_attempt_to_orphan_queue_latency: +For garbage_collector_attempt_to_orphan_work_duration: +For garbage_collector_dirty_processing_latency_microseconds: +For garbage_collector_event_processing_latency_microseconds: +For garbage_collector_graph_changes_queue_latency: +For garbage_collector_graph_changes_work_duration: +For garbage_collector_orphan_processing_latency_microseconds: +For namespace_queue_latency: +For namespace_queue_latency_sum: +For namespace_queue_latency_count: +For namespace_retries: +For namespace_work_duration: +For namespace_work_duration_sum: +For namespace_work_duration_count: +For function_duration_seconds: +For errors_total: +For evicted_pods_total: + +[AfterEach] [sig-api-machinery] Garbage collector + test/e2e/framework/framework.go:188 +Sep 7 09:22:37.554: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +W0907 09:22:37.554357 19 metrics_grabber.go:151] Can't find kube-controller-manager pod. Grabbing metrics from kube-controller-manager is disabled. +STEP: Destroying namespace "gc-2643" for this suite. 
+ +• [SLOW TEST:13.313 seconds] +[sig-api-machinery] Garbage collector +test/e2e/apimachinery/framework.go:23 + should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] + test/e2e/framework/framework.go:652 +------------------------------ +{"msg":"PASSED [sig-api-machinery] Garbage collector should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]","total":356,"completed":350,"skipped":6499,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-apps] Daemon set [Serial] + should verify changes to a daemon set status [Conformance] + test/e2e/framework/framework.go:652 +[BeforeEach] [sig-apps] Daemon set [Serial] + test/e2e/framework/framework.go:187 +STEP: Creating a kubernetes client +Sep 7 09:22:37.587: INFO: >>> kubeConfig: /tmp/kubeconfig-1156948534 +STEP: Building a namespace api object, basename daemonsets +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[BeforeEach] [sig-apps] Daemon set [Serial] + test/e2e/apps/daemon_set.go:145 +[It] should verify changes to a daemon set status [Conformance] + test/e2e/framework/framework.go:652 +STEP: Creating simple DaemonSet "daemon-set" +STEP: Check that daemon pods launch on every node of the cluster. 
+Sep 7 09:22:37.773: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 +Sep 7 09:22:37.773: INFO: Node 172.31.51.96 is running 0 daemon pod, expected 1 +Sep 7 09:22:38.801: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 +Sep 7 09:22:38.801: INFO: Node 172.31.51.96 is running 0 daemon pod, expected 1 +Sep 7 09:22:39.789: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 +Sep 7 09:22:39.789: INFO: Node 172.31.51.96 is running 0 daemon pod, expected 1 +Sep 7 09:22:40.814: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 +Sep 7 09:22:40.814: INFO: Node 172.31.51.96 is running 0 daemon pod, expected 1 +Sep 7 09:22:41.802: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 +Sep 7 09:22:41.802: INFO: Node 172.31.51.96 is running 0 daemon pod, expected 1 +Sep 7 09:22:42.792: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 +Sep 7 09:22:42.792: INFO: Node 172.31.51.96 is running 0 daemon pod, expected 1 +Sep 7 09:22:43.801: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 +Sep 7 09:22:43.801: INFO: Node 172.31.51.96 is running 0 daemon pod, expected 1 +Sep 7 09:22:44.797: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 +Sep 7 09:22:44.797: INFO: Node 172.31.51.96 is running 0 daemon pod, expected 1 +Sep 7 09:22:45.867: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 +Sep 7 09:22:45.867: INFO: Node 172.31.51.96 is running 0 daemon pod, expected 1 +Sep 7 09:22:46.958: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 +Sep 7 09:22:46.958: INFO: Node 172.31.51.96 is running 0 daemon pod, expected 1 +Sep 7 09:22:47.804: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 +Sep 7 09:22:47.804: INFO: Node 172.31.51.96 is running 0 daemon pod, expected 
1 +Sep 7 09:22:48.787: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 +Sep 7 09:22:48.787: INFO: Node 172.31.51.96 is running 0 daemon pod, expected 1 +Sep 7 09:22:49.790: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 1 +Sep 7 09:22:49.790: INFO: Node 172.31.51.96 is running 0 daemon pod, expected 1 +Sep 7 09:22:50.826: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 1 +Sep 7 09:22:50.826: INFO: Node 172.31.51.96 is running 0 daemon pod, expected 1 +Sep 7 09:22:51.831: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 1 +Sep 7 09:22:51.831: INFO: Node 172.31.51.96 is running 0 daemon pod, expected 1 +Sep 7 09:22:52.830: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 1 +Sep 7 09:22:52.830: INFO: Node 172.31.51.96 is running 0 daemon pod, expected 1 +Sep 7 09:22:54.037: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 1 +Sep 7 09:22:54.038: INFO: Node 172.31.51.96 is running 0 daemon pod, expected 1 +Sep 7 09:22:54.971: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 1 +Sep 7 09:22:54.971: INFO: Node 172.31.51.96 is running 0 daemon pod, expected 1 +Sep 7 09:22:55.897: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 1 +Sep 7 09:22:55.897: INFO: Node 172.31.51.96 is running 0 daemon pod, expected 1 +Sep 7 09:22:56.901: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 1 +Sep 7 09:22:56.901: INFO: Node 172.31.51.96 is running 0 daemon pod, expected 1 +Sep 7 09:22:57.917: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 2 +Sep 7 09:22:57.917: INFO: Number of running nodes: 2, number of available pods: 2 in daemonset daemon-set +STEP: Getting /status +Sep 7 09:22:57.944: INFO: Daemon Set daemon-set has Conditions: [] +STEP: updating the DaemonSet Status +Sep 7 09:22:57.968: 
INFO: updatedStatus.Conditions: []v1.DaemonSetCondition{v1.DaemonSetCondition{Type:"StatusUpdate", Status:"True", LastTransitionTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Reason:"E2E", Message:"Set from e2e test"}} +STEP: watching for the daemon set status to be updated +Sep 7 09:22:57.988: INFO: Observed &DaemonSet event: ADDED +Sep 7 09:22:57.988: INFO: Observed &DaemonSet event: MODIFIED +Sep 7 09:22:57.988: INFO: Observed &DaemonSet event: MODIFIED +Sep 7 09:22:57.988: INFO: Observed &DaemonSet event: MODIFIED +Sep 7 09:22:57.988: INFO: Found daemon set daemon-set in namespace daemonsets-168 with labels: map[daemonset-name:daemon-set] annotations: map[deprecated.daemonset.template.generation:1] & Conditions: [{StatusUpdate True 0001-01-01 00:00:00 +0000 UTC E2E Set from e2e test}] +Sep 7 09:22:57.988: INFO: Daemon set daemon-set has an updated status +STEP: patching the DaemonSet Status +STEP: watching for the daemon set status to be patched +Sep 7 09:22:58.078: INFO: Observed &DaemonSet event: ADDED +Sep 7 09:22:58.079: INFO: Observed &DaemonSet event: MODIFIED +Sep 7 09:22:58.079: INFO: Observed &DaemonSet event: MODIFIED +Sep 7 09:22:58.079: INFO: Observed &DaemonSet event: MODIFIED +Sep 7 09:22:58.079: INFO: Observed daemon set daemon-set in namespace daemonsets-168 with annotations: map[deprecated.daemonset.template.generation:1] & Conditions: [{StatusUpdate True 0001-01-01 00:00:00 +0000 UTC E2E Set from e2e test}] +Sep 7 09:22:58.079: INFO: Observed &DaemonSet event: MODIFIED +Sep 7 09:22:58.079: INFO: Found daemon set daemon-set in namespace daemonsets-168 with labels: map[daemonset-name:daemon-set] annotations: map[deprecated.daemonset.template.generation:1] & Conditions: [{StatusPatched True 0001-01-01 00:00:00 +0000 UTC }] +Sep 7 09:22:58.079: INFO: Daemon set daemon-set has a patched status +[AfterEach] [sig-apps] Daemon set [Serial] + test/e2e/apps/daemon_set.go:110 +STEP: Deleting DaemonSet "daemon-set" +STEP: deleting 
DaemonSet.extensions daemon-set in namespace daemonsets-168, will wait for the garbage collector to delete the pods +Sep 7 09:22:58.163: INFO: Deleting DaemonSet.extensions daemon-set took: 8.161311ms +Sep 7 09:22:58.371: INFO: Terminating DaemonSet.extensions daemon-set pods took: 207.826305ms +Sep 7 09:23:07.501: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 +Sep 7 09:23:07.501: INFO: Number of running nodes: 0, number of available pods: 0 in daemonset daemon-set +Sep 7 09:23:07.508: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"resourceVersion":"36496"},"items":null} + +Sep 7 09:23:07.519: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"36496"},"items":null} + +[AfterEach] [sig-apps] Daemon set [Serial] + test/e2e/framework/framework.go:188 +Sep 7 09:23:07.545: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "daemonsets-168" for this suite. + +• [SLOW TEST:29.965 seconds] +[sig-apps] Daemon set [Serial] +test/e2e/apps/framework.go:23 + should verify changes to a daemon set status [Conformance] + test/e2e/framework/framework.go:652 +------------------------------ +{"msg":"PASSED [sig-apps] Daemon set [Serial] should verify changes to a daemon set status [Conformance]","total":356,"completed":351,"skipped":6526,"failed":0} +SSSSSS +------------------------------ +[sig-storage] Secrets + optional updates should be reflected in volume [NodeConformance] [Conformance] + test/e2e/framework/framework.go:652 +[BeforeEach] [sig-storage] Secrets + test/e2e/framework/framework.go:187 +STEP: Creating a kubernetes client +Sep 7 09:23:07.553: INFO: >>> kubeConfig: /tmp/kubeconfig-1156948534 +STEP: Building a namespace api object, basename secrets +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[It] optional updates should be reflected in volume 
[NodeConformance] [Conformance] + test/e2e/framework/framework.go:652 +STEP: Creating secret with name s-test-opt-del-540eb353-d6d8-4bdb-b538-cde73216a0a6 +STEP: Creating secret with name s-test-opt-upd-1f646b85-6b34-44f2-b153-46dae970bc38 +STEP: Creating the pod +Sep 7 09:23:07.665: INFO: The status of Pod pod-secrets-07305377-0677-4744-a6ef-e08bde768b22 is Pending, waiting for it to be Running (with Ready = true) +Sep 7 09:23:09.682: INFO: The status of Pod pod-secrets-07305377-0677-4744-a6ef-e08bde768b22 is Pending, waiting for it to be Running (with Ready = true) +Sep 7 09:23:11.674: INFO: The status of Pod pod-secrets-07305377-0677-4744-a6ef-e08bde768b22 is Running (Ready = true) +STEP: Deleting secret s-test-opt-del-540eb353-d6d8-4bdb-b538-cde73216a0a6 +STEP: Updating secret s-test-opt-upd-1f646b85-6b34-44f2-b153-46dae970bc38 +STEP: Creating secret with name s-test-opt-create-e3b5b5e0-4111-4856-859a-dae95d8ad04f +STEP: waiting to observe update in volume +[AfterEach] [sig-storage] Secrets + test/e2e/framework/framework.go:188 +Sep 7 09:24:18.195: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "secrets-7729" for this suite. 
+ +• [SLOW TEST:70.656 seconds] +[sig-storage] Secrets +test/e2e/common/storage/framework.go:23 + optional updates should be reflected in volume [NodeConformance] [Conformance] + test/e2e/framework/framework.go:652 +------------------------------ +{"msg":"PASSED [sig-storage] Secrets optional updates should be reflected in volume [NodeConformance] [Conformance]","total":356,"completed":352,"skipped":6532,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-node] Pods + should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] + test/e2e/framework/framework.go:652 +[BeforeEach] [sig-node] Pods + test/e2e/framework/framework.go:187 +STEP: Creating a kubernetes client +Sep 7 09:24:18.209: INFO: >>> kubeConfig: /tmp/kubeconfig-1156948534 +STEP: Building a namespace api object, basename pods +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[BeforeEach] [sig-node] Pods + test/e2e/common/node/pods.go:191 +[It] should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] + test/e2e/framework/framework.go:652 +STEP: creating the pod +STEP: submitting the pod to kubernetes +Sep 7 09:24:18.308: INFO: The status of Pod pod-update-activedeadlineseconds-ede600a3-bd7d-42cb-9930-e215bda9cbcd is Pending, waiting for it to be Running (with Ready = true) +Sep 7 09:24:20.318: INFO: The status of Pod pod-update-activedeadlineseconds-ede600a3-bd7d-42cb-9930-e215bda9cbcd is Running (Ready = true) +STEP: verifying the pod is in kubernetes +STEP: updating the pod +Sep 7 09:24:20.843: INFO: Successfully updated pod "pod-update-activedeadlineseconds-ede600a3-bd7d-42cb-9930-e215bda9cbcd" +Sep 7 09:24:20.843: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-ede600a3-bd7d-42cb-9930-e215bda9cbcd" in namespace "pods-3199" to be "terminated due to deadline exceeded" +Sep 7 09:24:20.845: INFO: Pod 
"pod-update-activedeadlineseconds-ede600a3-bd7d-42cb-9930-e215bda9cbcd": Phase="Running", Reason="", readiness=true. Elapsed: 2.16706ms +Sep 7 09:24:22.851: INFO: Pod "pod-update-activedeadlineseconds-ede600a3-bd7d-42cb-9930-e215bda9cbcd": Phase="Running", Reason="", readiness=true. Elapsed: 2.007490828s +Sep 7 09:24:24.863: INFO: Pod "pod-update-activedeadlineseconds-ede600a3-bd7d-42cb-9930-e215bda9cbcd": Phase="Failed", Reason="DeadlineExceeded", readiness=false. Elapsed: 4.019580008s +Sep 7 09:24:24.863: INFO: Pod "pod-update-activedeadlineseconds-ede600a3-bd7d-42cb-9930-e215bda9cbcd" satisfied condition "terminated due to deadline exceeded" +[AfterEach] [sig-node] Pods + test/e2e/framework/framework.go:188 +Sep 7 09:24:24.863: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "pods-3199" for this suite. + +• [SLOW TEST:6.665 seconds] +[sig-node] Pods +test/e2e/common/node/framework.go:23 + should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] + test/e2e/framework/framework.go:652 +------------------------------ +{"msg":"PASSED [sig-node] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]","total":356,"completed":353,"skipped":6564,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-storage] Projected configMap + should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] + test/e2e/framework/framework.go:652 +[BeforeEach] [sig-storage] Projected configMap + test/e2e/framework/framework.go:187 +STEP: Creating a kubernetes client +Sep 7 09:24:24.874: INFO: >>> kubeConfig: /tmp/kubeconfig-1156948534 +STEP: Building a namespace api object, basename projected +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] + 
test/e2e/framework/framework.go:652 +STEP: Creating configMap with name projected-configmap-test-volume-f1619c3c-5f0c-45ef-80d6-361c8f1c31ae +STEP: Creating a pod to test consume configMaps +Sep 7 09:24:24.940: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-49142dbc-74b9-4081-b932-89e85c39dcd8" in namespace "projected-4807" to be "Succeeded or Failed" +Sep 7 09:24:24.946: INFO: Pod "pod-projected-configmaps-49142dbc-74b9-4081-b932-89e85c39dcd8": Phase="Pending", Reason="", readiness=false. Elapsed: 5.559709ms +Sep 7 09:24:26.953: INFO: Pod "pod-projected-configmaps-49142dbc-74b9-4081-b932-89e85c39dcd8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013279817s +Sep 7 09:24:28.971: INFO: Pod "pod-projected-configmaps-49142dbc-74b9-4081-b932-89e85c39dcd8": Phase="Pending", Reason="", readiness=false. Elapsed: 4.031155281s +Sep 7 09:24:30.984: INFO: Pod "pod-projected-configmaps-49142dbc-74b9-4081-b932-89e85c39dcd8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.043881064s +STEP: Saw pod success +Sep 7 09:24:30.984: INFO: Pod "pod-projected-configmaps-49142dbc-74b9-4081-b932-89e85c39dcd8" satisfied condition "Succeeded or Failed" +Sep 7 09:24:30.989: INFO: Trying to get logs from node 172.31.51.96 pod pod-projected-configmaps-49142dbc-74b9-4081-b932-89e85c39dcd8 container projected-configmap-volume-test: +STEP: delete the pod +Sep 7 09:24:31.011: INFO: Waiting for pod pod-projected-configmaps-49142dbc-74b9-4081-b932-89e85c39dcd8 to disappear +Sep 7 09:24:31.015: INFO: Pod pod-projected-configmaps-49142dbc-74b9-4081-b932-89e85c39dcd8 no longer exists +[AfterEach] [sig-storage] Projected configMap + test/e2e/framework/framework.go:188 +Sep 7 09:24:31.015: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "projected-4807" for this suite. 
+ +• [SLOW TEST:6.149 seconds] +[sig-storage] Projected configMap +test/e2e/common/storage/framework.go:23 + should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] + test/e2e/framework/framework.go:652 +------------------------------ +{"msg":"PASSED [sig-storage] Projected configMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","total":356,"completed":354,"skipped":6588,"failed":0} +SSSSSSSSSSSSSSSSSS +------------------------------ +[sig-network] EndpointSlice + should create and delete Endpoints and EndpointSlices for a Service with a selector specified [Conformance] + test/e2e/framework/framework.go:652 +[BeforeEach] [sig-network] EndpointSlice + test/e2e/framework/framework.go:187 +STEP: Creating a kubernetes client +Sep 7 09:24:31.023: INFO: >>> kubeConfig: /tmp/kubeconfig-1156948534 +STEP: Building a namespace api object, basename endpointslice +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[BeforeEach] [sig-network] EndpointSlice + test/e2e/network/endpointslice.go:51 +[It] should create and delete Endpoints and EndpointSlices for a Service with a selector specified [Conformance] + test/e2e/framework/framework.go:652 +[AfterEach] [sig-network] EndpointSlice + test/e2e/framework/framework.go:188 +Sep 7 09:24:35.215: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "endpointslice-8404" for this suite. 
+•{"msg":"PASSED [sig-network] EndpointSlice should create and delete Endpoints and EndpointSlices for a Service with a selector specified [Conformance]","total":356,"completed":355,"skipped":6606,"failed":0} +SSSS +------------------------------ +[sig-apps] Daemon set [Serial] + should list and delete a collection of DaemonSets [Conformance] + test/e2e/framework/framework.go:652 +[BeforeEach] [sig-apps] Daemon set [Serial] + test/e2e/framework/framework.go:187 +STEP: Creating a kubernetes client +Sep 7 09:24:35.236: INFO: >>> kubeConfig: /tmp/kubeconfig-1156948534 +STEP: Building a namespace api object, basename daemonsets +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[BeforeEach] [sig-apps] Daemon set [Serial] + test/e2e/apps/daemon_set.go:145 +[It] should list and delete a collection of DaemonSets [Conformance] + test/e2e/framework/framework.go:652 +STEP: Creating simple DaemonSet "daemon-set" +STEP: Check that daemon pods launch on every node of the cluster. 
+Sep 7 09:24:35.324: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 +Sep 7 09:24:35.324: INFO: Node 172.31.51.96 is running 0 daemon pod, expected 1 +Sep 7 09:24:36.364: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 1 +Sep 7 09:24:36.364: INFO: Node 172.31.51.96 is running 0 daemon pod, expected 1 +Sep 7 09:24:37.334: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 2 +Sep 7 09:24:37.334: INFO: Number of running nodes: 2, number of available pods: 2 in daemonset daemon-set +STEP: listing all DeamonSets +STEP: DeleteCollection of the DaemonSets +STEP: Verify that ReplicaSets have been deleted +[AfterEach] [sig-apps] Daemon set [Serial] + test/e2e/apps/daemon_set.go:110 +Sep 7 09:24:37.356: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"resourceVersion":"36819"},"items":null} + +Sep 7 09:24:37.364: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"36819"},"items":[{"metadata":{"name":"daemon-set-6wdgj","generateName":"daemon-set-","namespace":"daemonsets-6596","uid":"b618414f-ee17-4c0d-90d8-42e0a8449d15","resourceVersion":"36816","creationTimestamp":"2022-09-07T09:24:35Z","labels":{"controller-revision-hash":"6df8db488c","daemonset-name":"daemon-set","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"daemon-set","uid":"7bafb470-3d30-4374-b546-6a761a13e5b7","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-09-07T09:24:35Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:daemonset-name":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7bafb470-3d30-4374-b546-6a761a13e5b7\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:requiredDuringSchedulingIgnored
DuringExecution":{}}},"f:containers":{"k:{\"name\":\"app\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:ports":{".":{},"k:{\"containerPort\":9376,\"protocol\":\"TCP\"}":{".":{},"f:containerPort":{},"f:protocol":{}}},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{},"f:tolerations":{}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-09-07T09:24:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"172.20.75.42\"}":{".":{},"f:ip":{}}},"f:startTime":{}}},"subresource":"status"}]},"spec":{"volumes":[{"name":"kube-api-access-zmpvt","projected":{"sources":[{"serviceAccountToken":{"expirationSeconds":3607,"path":"token"}},{"configMap":{"name":"kube-root-ca.crt","items":[{"key":"ca.crt","path":"ca.crt"}]}},{"downwardAPI":{"items":[{"path":"namespace","fieldRef":{"apiVersion":"v1","fieldPath":"metadata.namespace"}}]}}],"defaultMode":420}}],"containers":[{"name":"app","image":"k8s.gcr.io/e2e-test-images/httpd:2.4.38-2","ports":[{"containerPort":9376,"protocol":"TCP"}],"resources":{},"volumeMounts":[{"name":"kube-api-access-zmpvt","readOnly":true,"mountPath":"/var/run/secrets/kubernetes.io/serviceaccount"}],"terminationMessagePath":"/dev/termination-log","terminationMessagePolicy":"File","imagePullPolicy":"IfNotPresent","securityContext":{}}],"restartPolicy":"Always","terminationGracePeriodSeconds"
:30,"dnsPolicy":"ClusterFirst","serviceAccountName":"default","serviceAccount":"default","nodeName":"172.31.51.96","securityContext":{},"affinity":{"nodeAffinity":{"requiredDuringSchedulingIgnoredDuringExecution":{"nodeSelectorTerms":[{"matchFields":[{"key":"metadata.name","operator":"In","values":["172.31.51.96"]}]}]}}},"schedulerName":"default-scheduler","tolerations":[{"key":"node.kubernetes.io/not-ready","operator":"Exists","effect":"NoExecute"},{"key":"node.kubernetes.io/unreachable","operator":"Exists","effect":"NoExecute"},{"key":"node.kubernetes.io/disk-pressure","operator":"Exists","effect":"NoSchedule"},{"key":"node.kubernetes.io/memory-pressure","operator":"Exists","effect":"NoSchedule"},{"key":"node.kubernetes.io/pid-pressure","operator":"Exists","effect":"NoSchedule"},{"key":"node.kubernetes.io/unschedulable","operator":"Exists","effect":"NoSchedule"}],"priority":0,"enableServiceLinks":true,"preemptionPolicy":"PreemptLowerPriority"},"status":{"phase":"Running","conditions":[{"type":"Initialized","status":"True","lastProbeTime":null,"lastTransitionTime":"2022-09-07T09:24:35Z"},{"type":"Ready","status":"True","lastProbeTime":null,"lastTransitionTime":"2022-09-07T09:24:36Z"},{"type":"ContainersReady","status":"True","lastProbeTime":null,"lastTransitionTime":"2022-09-07T09:24:36Z"},{"type":"PodScheduled","status":"True","lastProbeTime":null,"lastTransitionTime":"2022-09-07T09:24:35Z"}],"hostIP":"172.31.51.96","podIP":"172.20.75.42","podIPs":[{"ip":"172.20.75.42"}],"startTime":"2022-09-07T09:24:35Z","containerStatuses":[{"name":"app","state":{"running":{"startedAt":"2022-09-07T09:24:36Z"}},"lastState":{},"ready":true,"restartCount":0,"image":"k8s.gcr.io/e2e-test-images/httpd:2.4.38-2","imageID":"k8s.gcr.io/e2e-test-images/httpd@sha256:1b9d1b2f36cb2dbee1960e82a9344aeb11bd4c4c03abf5e1853e0559c23855e3","containerID":"containerd://d7f8230d471ebc24f91f42512679a0551213b66d82616efc509bf8eaef31cbe0","started":true}],"qosClass":"BestEffort"}},{"metadata":{"name":"dae
mon-set-9cqsj","generateName":"daemon-set-","namespace":"daemonsets-6596","uid":"6d6de9b9-a971-4a3a-a489-2ce78db0c4f9","resourceVersion":"36799","creationTimestamp":"2022-09-07T09:24:35Z","labels":{"controller-revision-hash":"6df8db488c","daemonset-name":"daemon-set","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"daemon-set","uid":"7bafb470-3d30-4374-b546-6a761a13e5b7","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-09-07T09:24:35Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:daemonset-name":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7bafb470-3d30-4374-b546-6a761a13e5b7\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"app\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:ports":{".":{},"k:{\"containerPort\":9376,\"protocol\":\"TCP\"}":{".":{},"f:containerPort":{},"f:protocol":{}}},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{},"f:tolerations":{}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-09-07T09:24:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:po
dIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"172.20.97.117\"}":{".":{},"f:ip":{}}},"f:startTime":{}}},"subresource":"status"}]},"spec":{"volumes":[{"name":"kube-api-access-bnrpp","projected":{"sources":[{"serviceAccountToken":{"expirationSeconds":3607,"path":"token"}},{"configMap":{"name":"kube-root-ca.crt","items":[{"key":"ca.crt","path":"ca.crt"}]}},{"downwardAPI":{"items":[{"path":"namespace","fieldRef":{"apiVersion":"v1","fieldPath":"metadata.namespace"}}]}}],"defaultMode":420}}],"containers":[{"name":"app","image":"k8s.gcr.io/e2e-test-images/httpd:2.4.38-2","ports":[{"containerPort":9376,"protocol":"TCP"}],"resources":{},"volumeMounts":[{"name":"kube-api-access-bnrpp","readOnly":true,"mountPath":"/var/run/secrets/kubernetes.io/serviceaccount"}],"terminationMessagePath":"/dev/termination-log","terminationMessagePolicy":"File","imagePullPolicy":"IfNotPresent","securityContext":{}}],"restartPolicy":"Always","terminationGracePeriodSeconds":30,"dnsPolicy":"ClusterFirst","serviceAccountName":"default","serviceAccount":"default","nodeName":"172.31.51.97","securityContext":{},"affinity":{"nodeAffinity":{"requiredDuringSchedulingIgnoredDuringExecution":{"nodeSelectorTerms":[{"matchFields":[{"key":"metadata.name","operator":"In","values":["172.31.51.97"]}]}]}}},"schedulerName":"default-scheduler","tolerations":[{"key":"node.kubernetes.io/not-ready","operator":"Exists","effect":"NoExecute"},{"key":"node.kubernetes.io/unreachable","operator":"Exists","effect":"NoExecute"},{"key":"node.kubernetes.io/disk-pressure","operator":"Exists","effect":"NoSchedule"},{"key":"node.kubernetes.io/memory-pressure","operator":"Exists","effect":"NoSchedule"},{"key":"node.kubernetes.io/pid-pressure","operator":"Exists","effect":"NoSchedule"},{"key":"node.kubernetes.io/unschedulable","operator":"Exists","effect":"NoSchedule"}],"priority":0,"enableServiceLinks":true,"preemptionPolicy":"PreemptLowerPriority"},"status":{"phase":"Running","conditions":[{"type":"Initialized","status":"True","lastProbeT
ime":null,"lastTransitionTime":"2022-09-07T09:24:35Z"},{"type":"Ready","status":"True","lastProbeTime":null,"lastTransitionTime":"2022-09-07T09:24:36Z"},{"type":"ContainersReady","status":"True","lastProbeTime":null,"lastTransitionTime":"2022-09-07T09:24:36Z"},{"type":"PodScheduled","status":"True","lastProbeTime":null,"lastTransitionTime":"2022-09-07T09:24:35Z"}],"hostIP":"172.31.51.97","podIP":"172.20.97.117","podIPs":[{"ip":"172.20.97.117"}],"startTime":"2022-09-07T09:24:35Z","containerStatuses":[{"name":"app","state":{"running":{"startedAt":"2022-09-07T09:24:36Z"}},"lastState":{},"ready":true,"restartCount":0,"image":"k8s.gcr.io/e2e-test-images/httpd:2.4.38-2","imageID":"k8s.gcr.io/e2e-test-images/httpd@sha256:1b9d1b2f36cb2dbee1960e82a9344aeb11bd4c4c03abf5e1853e0559c23855e3","containerID":"containerd://73b735bf0878c3473896c81c8d165dd1ca2cb113505c4b9c1c59872c4a65ca58","started":true}],"qosClass":"BestEffort"}}]} + +[AfterEach] [sig-apps] Daemon set [Serial] + test/e2e/framework/framework.go:188 +Sep 7 09:24:37.390: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "daemonsets-6596" for this suite. 
+•{"msg":"PASSED [sig-apps] Daemon set [Serial] should list and delete a collection of DaemonSets [Conformance]","total":356,"completed":356,"skipped":6610,"failed":0} +SSSSSSep 7 09:24:37.402: INFO: Running AfterSuite actions on all nodes +Sep 7 09:24:37.402: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func19.2 +Sep 7 09:24:37.402: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func9.2 +Sep 7 09:24:37.402: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func8.2 +Sep 7 09:24:37.402: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func17.3 +Sep 7 09:24:37.402: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func9.2 +Sep 7 09:24:37.402: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func4.2 +Sep 7 09:24:37.402: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func1.3 +Sep 7 09:24:37.402: INFO: Running AfterSuite actions on node 1 +Sep 7 09:24:37.402: INFO: Skipping dumping logs from cluster + +JUnit report was created: /tmp/sonobuoy/results/junit_01.xml +{"msg":"Test Suite completed","total":356,"completed":356,"skipped":6615,"failed":0} + +Ran 356 of 6971 Specs in 6279.298 seconds +SUCCESS! 
-- 356 Passed | 0 Failed | 0 Pending | 6615 Skipped +PASS + +Ginkgo ran 1 suite in 1h44m41.63637367s +Test Suite Passed diff --git a/v1.24/kubeoperator/junit_01.xml b/v1.24/kubeoperator/junit_01.xml new file mode 100644 index 0000000000..d401411ea7 --- /dev/null +++ b/v1.24/kubeoperator/junit_01.xml @@ -0,0 +1,20204 @@
(junit_01.xml body elided: 20,204 added lines of JUnit XML test-case entries whose element content was lost during extraction, leaving only bare `+` hunk markers)
+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + 
+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + 
+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + 
+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + 
+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + 
+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + 
+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + 
+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + 
+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + 
+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + \ No newline at end of file