diff --git a/v1.25/MetalK8s/PRODUCT.yaml b/v1.25/MetalK8s/PRODUCT.yaml
new file mode 100644
index 0000000000..4a9808952f
--- /dev/null
+++ b/v1.25/MetalK8s/PRODUCT.yaml
@@ -0,0 +1,10 @@
+vendor: Scality
+name: MetalK8s
+description: "An opinionated Kubernetes distribution with a focus on long-term on-prem deployments"
+version: 125.0
+type: distribution
+website_url: https://github.com/scality/metalk8s/
+repo_url: https://github.com/scality/metalk8s.git
+product_logo_url: https://raw.githubusercontent.com/scality/metalk8s/development/125.0/artwork/metalk8s-logo-vertical.svg
+documentation_url: https://metal-k8s.readthedocs.io/en/development-125.0/
+contact_email_address: squad-metalk8s@scality.com
diff --git a/v1.25/MetalK8s/README.md b/v1.25/MetalK8s/README.md
new file mode 100644
index 0000000000..6aab070c1c
--- /dev/null
+++ b/v1.25/MetalK8s/README.md
@@ -0,0 +1,116 @@
+# MetalK8s
+Official documentation: https://metal-k8s.readthedocs.io/en/development-125.0/
+
+## Prerequisites
+- An OpenStack cluster
+- The official CentOS 7.9 2009 image pre-loaded in Glance
+- Three VMs with 8 vCPUs, 16 GB of RAM, and 40 GB of local storage
+
+## Provisioning
+- Create two private networks in the OpenStack cluster with port security
+  disabled, and a subnet in each:
+
+  * Control-plane network: 192.168.1.0/24
+  * Workload-plane network: 192.168.2.0/24
+
+- Create VM instances using the CentOS 7.9 image, and attach each of them to a
+  public network (for internet access) and the two private networks.
+
+- Configure the interfaces for the private networks (make sure to fill in the
+  appropriate MAC addresses):
+
+  ```
+  $ cat > /etc/sysconfig/network-scripts/ifcfg-eth1 << EOF
+  BOOTPROTO=dhcp
+  DEVICE=eth1
+  HWADDR=...
+  ONBOOT=yes
+  TYPE=Ethernet
+  USERCTL=no
+  PEERDNS=no
+  EOF
+  $ cat > /etc/sysconfig/network-scripts/ifcfg-eth2 << EOF
+  BOOTPROTO=dhcp
+  DEVICE=eth2
+  HWADDR=...
+  ONBOOT=yes
+  TYPE=Ethernet
+  USERCTL=no
+  PEERDNS=no
+  EOF
+  $ systemctl restart network
+  ```
+
+### Provisioning the Bootstrap Node
+On one of the VMs, which will act as the *bootstrap* node, perform the following
+steps:
+
+- Set up the Salt Minion ID:
+
+  ```
+  $ mkdir /etc/salt; chmod 0700 /etc/salt
+  $ echo metalk8s-bootstrap > /etc/salt/minion_id
+  ```
+
+- Download the MetalK8s ISO to `/home/centos/metalk8s.iso`
+
+- Create `/etc/metalk8s/bootstrap.yaml`:
+
+  ```
+  $ mkdir /etc/metalk8s
+  $ cat > /etc/metalk8s/bootstrap.yaml << EOF
+  apiVersion: metalk8s.scality.com/v1alpha3
+  kind: BootstrapConfiguration
+  networks:
+    controlPlane:
+      cidr: 192.168.1.0/24
+    workloadPlane:
+      cidr: 192.168.2.0/24
+    portmap:
+      cidr: 0.0.0.0/0
+    nodeport:
+      cidr: 0.0.0.0/0
+  ca:
+    minion: metalk8s-bootstrap
+  archives:
+    - /home/centos/metalk8s.iso
+  EOF
+  ```
+
+- Bootstrap the cluster:
+
+  ```
+  $ mkdir /mnt/metalk8s
+  $ mount /home/centos/metalk8s.iso /mnt/metalk8s
+  $ cd /mnt/metalk8s
+  $ ./bootstrap.sh
+  ```
+
+### Provisioning the Cluster Nodes
+Add the two other nodes to the cluster according to the procedure outlined in the
+MetalK8s documentation. The easiest way to achieve this is through the MetalK8s
+UI.
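+
+After the nodes have joined, it can be worth confirming that the expanded
+cluster is healthy before moving on to the conformance run. The commands below
+are not part of the official MetalK8s procedure, just a quick sanity check with
+standard `kubectl` commands, run on the *bootstrap* node and assuming the admin
+kubeconfig at `/etc/kubernetes/admin.conf` (the same file used in the next
+section):
+
+  ```
+  $ export KUBECONFIG=/etc/kubernetes/admin.conf
+  # All three nodes (the bootstrap node plus the two added nodes) should
+  # eventually report a Ready status.
+  $ kubectl get nodes
+  # All kube-system pods should be Running or Completed.
+  $ kubectl get pods -n kube-system
+  ```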
+
+## Preparing the Cluster to Run Sonobuoy
+On the *bootstrap* node:
+
+- Configure access to the Kubernetes API server:
+
+  ```
+  $ export KUBECONFIG=/etc/kubernetes/admin.conf
+  ```
+
+- Remove the taints from the node that would otherwise prevent the Sonobuoy
+  *Pods* from being scheduled:
+
+  ```
+  $ kubectl taint node metalk8s-bootstrap node-role.kubernetes.io/bootstrap-
+  node/metalk8s-bootstrap untainted
+  $ kubectl taint node metalk8s-bootstrap node-role.kubernetes.io/infra-
+  node/metalk8s-bootstrap untainted
+  ```
+
+## Running Sonobuoy and Collecting Results
+Follow the
+[instructions](https://github.com/cncf/k8s-conformance/blob/master/instructions.md)
+as found in the [CNCF K8s Conformance repository](https://github.com/cncf/k8s-conformance).
diff --git a/v1.25/MetalK8s/e2e.log b/v1.25/MetalK8s/e2e.log
new file mode 100644
index 0000000000..5bf5b6835d
--- /dev/null
+++ b/v1.25/MetalK8s/e2e.log
@@ -0,0 +1,33839 @@
+I0307 02:25:18.941561 22 e2e.go:116] Starting e2e run "6324f2f6-a3ba-451f-b5e1-c00345bec06a" on Ginkgo node 1
+Mar 7 02:25:18.954: INFO: Enabling in-tree volume drivers
+Running Suite: Kubernetes e2e suite - /usr/local/bin
+====================================================
+Random Seed: 1678155918 - will randomize all specs
+
+Will run 362 of 7066 specs
+------------------------------
+[SynchronizedBeforeSuite]
+test/e2e/e2e.go:76
+[SynchronizedBeforeSuite] TOP-LEVEL
+  test/e2e/e2e.go:76
+{"msg":"Test Suite starting","completed":0,"skipped":0,"failed":0}
+Mar 7 02:25:19.038: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902
+Mar 7 02:25:19.039: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
+E0307 02:25:19.040629 22 progress.go:80] Failed to post progress update to http://localhost:8099/progress: Post "http://localhost:8099/progress": dial tcp [::1]:8099: connect: connection refused
+E0307 02:25:19.040629 22 progress.go:80] Failed to post progress update to http://localhost:8099/progress: Post "http://localhost:8099/progress": dial tcp [::1]:8099: connect: connection refused
+Mar 7 02:25:19.061: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
+Mar 7 02:25:19.094: INFO: The status of Pod backup-replication-wkdpp-lt4dt is Succeeded, skipping waiting
+Mar 7 02:25:19.094: INFO: 29 / 30 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
+Mar 7 02:25:19.094: INFO: expected 6 pod replicas in namespace 'kube-system', 6 are Running and Ready.
+Mar 7 02:25:19.094: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start +Mar 7 02:25:19.098: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'calico-node' (0 seconds elapsed) +Mar 7 02:25:19.098: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed) +Mar 7 02:25:19.098: INFO: e2e test version: v1.25.5 +Mar 7 02:25:19.099: INFO: kube-apiserver version: v1.25.5 +[SynchronizedBeforeSuite] TOP-LEVEL + test/e2e/e2e.go:76 +Mar 7 02:25:19.099: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 +Mar 7 02:25:19.105: INFO: Cluster IP family: ipv4 +------------------------------ +[SynchronizedBeforeSuite] PASSED [0.067 seconds] +[SynchronizedBeforeSuite] +test/e2e/e2e.go:76 + + Begin Captured GinkgoWriter Output >> + [SynchronizedBeforeSuite] TOP-LEVEL + test/e2e/e2e.go:76 + Mar 7 02:25:19.038: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 + Mar 7 02:25:19.039: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable + E0307 02:25:19.040629 22 progress.go:80] Failed to post progress update to http://localhost:8099/progress: Post "http://localhost:8099/progress": dial tcp [::1]:8099: connect: connection refused + Mar 7 02:25:19.061: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready + Mar 7 02:25:19.094: INFO: The status of Pod backup-replication-wkdpp-lt4dt is Succeeded, skipping waiting + Mar 7 02:25:19.094: INFO: 29 / 30 pods in namespace 'kube-system' are running and ready (0 seconds elapsed) + Mar 7 02:25:19.094: INFO: expected 6 pod replicas in namespace 'kube-system', 6 are Running and Ready. + Mar 7 02:25:19.094: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start + Mar 7 02:25:19.098: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'calico-node' (0 seconds elapsed) + Mar 7 02:25:19.098: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed) + Mar 7 02:25:19.098: INFO: e2e test version: v1.25.5 + Mar 7 02:25:19.099: INFO: kube-apiserver version: v1.25.5 + [SynchronizedBeforeSuite] TOP-LEVEL + test/e2e/e2e.go:76 + Mar 7 02:25:19.099: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 + Mar 7 02:25:19.105: INFO: Cluster IP family: ipv4 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSS +------------------------------ +[sig-storage] Projected downwardAPI + should update labels on modification [NodeConformance] [Conformance] + test/e2e/common/storage/projected_downwardapi.go:129 +[BeforeEach] [sig-storage] Projected downwardAPI + test/e2e/framework/framework.go:186 +STEP: Creating a kubernetes client 03/07/23 02:25:19.122 +Mar 7 02:25:19.123: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 +STEP: Building a namespace api object, basename projected 03/07/23 02:25:19.123 +STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 02:25:19.138 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 02:25:19.141 +[BeforeEach] [sig-storage] Projected downwardAPI + test/e2e/common/storage/projected_downwardapi.go:43 +[It] should update labels on modification [NodeConformance] [Conformance] + test/e2e/common/storage/projected_downwardapi.go:129 +STEP: Creating the pod 03/07/23 02:25:19.143 +Mar 7 02:25:19.163: INFO: Waiting up to 5m0s for pod "labelsupdate1236565b-c3f4-41d1-aaf7-bad22581b0fc" in namespace "projected-1675" to be "running and ready" +Mar 7 02:25:19.165: INFO: Pod 
"labelsupdate1236565b-c3f4-41d1-aaf7-bad22581b0fc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.153118ms +Mar 7 02:25:19.165: INFO: The phase of Pod labelsupdate1236565b-c3f4-41d1-aaf7-bad22581b0fc is Pending, waiting for it to be Running (with Ready = true) +Mar 7 02:25:21.168: INFO: Pod "labelsupdate1236565b-c3f4-41d1-aaf7-bad22581b0fc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.004905212s +Mar 7 02:25:21.168: INFO: The phase of Pod labelsupdate1236565b-c3f4-41d1-aaf7-bad22581b0fc is Pending, waiting for it to be Running (with Ready = true) +Mar 7 02:25:23.169: INFO: Pod "labelsupdate1236565b-c3f4-41d1-aaf7-bad22581b0fc": Phase="Pending", Reason="", readiness=false. Elapsed: 4.006233951s +Mar 7 02:25:23.169: INFO: The phase of Pod labelsupdate1236565b-c3f4-41d1-aaf7-bad22581b0fc is Pending, waiting for it to be Running (with Ready = true) +Mar 7 02:25:25.168: INFO: Pod "labelsupdate1236565b-c3f4-41d1-aaf7-bad22581b0fc": Phase="Running", Reason="", readiness=true. Elapsed: 6.005036665s +Mar 7 02:25:25.168: INFO: The phase of Pod labelsupdate1236565b-c3f4-41d1-aaf7-bad22581b0fc is Running (Ready = true) +Mar 7 02:25:25.168: INFO: Pod "labelsupdate1236565b-c3f4-41d1-aaf7-bad22581b0fc" satisfied condition "running and ready" +Mar 7 02:25:25.713: INFO: Successfully updated pod "labelsupdate1236565b-c3f4-41d1-aaf7-bad22581b0fc" +[AfterEach] [sig-storage] Projected downwardAPI + test/e2e/framework/framework.go:187 +Mar 7 02:25:29.731: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "projected-1675" for this suite. 03/07/23 02:25:29.734 +{"msg":"PASSED [sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance]","completed":1,"skipped":7,"failed":0} +------------------------------ +• [SLOW TEST] [10.617 seconds] +[sig-storage] Projected downwardAPI +test/e2e/common/storage/framework.go:23 + should update labels on modification [NodeConformance] [Conformance] + test/e2e/common/storage/projected_downwardapi.go:129 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-storage] Projected downwardAPI + test/e2e/framework/framework.go:186 + STEP: Creating a kubernetes client 03/07/23 02:25:19.122 + Mar 7 02:25:19.123: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 + STEP: Building a namespace api object, basename projected 03/07/23 02:25:19.123 + STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 02:25:19.138 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 02:25:19.141 + [BeforeEach] [sig-storage] Projected downwardAPI + test/e2e/common/storage/projected_downwardapi.go:43 + [It] should update labels on modification [NodeConformance] [Conformance] + test/e2e/common/storage/projected_downwardapi.go:129 + STEP: Creating the pod 03/07/23 02:25:19.143 + Mar 7 02:25:19.163: INFO: Waiting up to 5m0s for pod "labelsupdate1236565b-c3f4-41d1-aaf7-bad22581b0fc" in namespace "projected-1675" to be "running and ready" + Mar 7 02:25:19.165: INFO: Pod "labelsupdate1236565b-c3f4-41d1-aaf7-bad22581b0fc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.153118ms + Mar 7 02:25:19.165: INFO: The phase of Pod labelsupdate1236565b-c3f4-41d1-aaf7-bad22581b0fc is Pending, waiting for it to be Running (with Ready = true) + Mar 7 02:25:21.168: INFO: Pod "labelsupdate1236565b-c3f4-41d1-aaf7-bad22581b0fc": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.004905212s + Mar 7 02:25:21.168: INFO: The phase of Pod labelsupdate1236565b-c3f4-41d1-aaf7-bad22581b0fc is Pending, waiting for it to be Running (with Ready = true) + Mar 7 02:25:23.169: INFO: Pod "labelsupdate1236565b-c3f4-41d1-aaf7-bad22581b0fc": Phase="Pending", Reason="", readiness=false. Elapsed: 4.006233951s + Mar 7 02:25:23.169: INFO: The phase of Pod labelsupdate1236565b-c3f4-41d1-aaf7-bad22581b0fc is Pending, waiting for it to be Running (with Ready = true) + Mar 7 02:25:25.168: INFO: Pod "labelsupdate1236565b-c3f4-41d1-aaf7-bad22581b0fc": Phase="Running", Reason="", readiness=true. Elapsed: 6.005036665s + Mar 7 02:25:25.168: INFO: The phase of Pod labelsupdate1236565b-c3f4-41d1-aaf7-bad22581b0fc is Running (Ready = true) + Mar 7 02:25:25.168: INFO: Pod "labelsupdate1236565b-c3f4-41d1-aaf7-bad22581b0fc" satisfied condition "running and ready" + Mar 7 02:25:25.713: INFO: Successfully updated pod "labelsupdate1236565b-c3f4-41d1-aaf7-bad22581b0fc" + [AfterEach] [sig-storage] Projected downwardAPI + test/e2e/framework/framework.go:187 + Mar 7 02:25:29.731: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + STEP: Destroying namespace "projected-1675" for this suite. 03/07/23 02:25:29.734 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSSSSS +------------------------------ +[sig-scheduling] SchedulerPredicates [Serial] + validates that NodeSelector is respected if not matching [Conformance] + test/e2e/scheduling/predicates.go:438 +[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] + test/e2e/framework/framework.go:186 +STEP: Creating a kubernetes client 03/07/23 02:25:29.74 +Mar 7 02:25:29.740: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 +STEP: Building a namespace api object, basename sched-pred 03/07/23 02:25:29.742 +STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 02:25:29.753 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 02:25:29.755 +[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] + test/e2e/scheduling/predicates.go:92 +Mar 7 02:25:29.756: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready +Mar 7 02:25:29.762: INFO: Waiting for terminating namespaces to be deleted... 
+Mar 7 02:25:29.764: INFO: +Logging pods the apiserver thinks is on node bootstrap before test +Mar 7 02:25:29.777: INFO: apiserver-proxy-bootstrap from kube-system started at 2023-03-07 00:42:52 +0000 UTC (1 container statuses recorded) +Mar 7 02:25:29.777: INFO: Container nginx ready: true, restart count 0 +Mar 7 02:25:29.777: INFO: backup-747d8c577b-wdcvl from kube-system started at 2023-03-07 00:44:32 +0000 UTC (1 container statuses recorded) +Mar 7 02:25:29.777: INFO: Container backup ready: true, restart count 0 +Mar 7 02:25:29.777: INFO: backup-replication-wkdpp-lt4dt from kube-system started at 2023-03-07 00:47:50 +0000 UTC (1 container statuses recorded) +Mar 7 02:25:29.777: INFO: Container backup-replication ready: false, restart count 0 +Mar 7 02:25:29.777: INFO: calico-kube-controllers-59685599d8-pvn74 from kube-system started at 2023-03-07 00:44:32 +0000 UTC (1 container statuses recorded) +Mar 7 02:25:29.777: INFO: Container calico-kube-controllers ready: true, restart count 0 +Mar 7 02:25:29.777: INFO: calico-node-mlncm from kube-system started at 2023-03-07 02:23:53 +0000 UTC (1 container statuses recorded) +Mar 7 02:25:29.777: INFO: Container calico-node ready: true, restart count 0 +Mar 7 02:25:29.777: INFO: etcd-bootstrap from kube-system started at 2023-03-07 00:43:13 +0000 UTC (1 container statuses recorded) +Mar 7 02:25:29.777: INFO: Container etcd ready: true, restart count 0 +Mar 7 02:25:29.777: INFO: kube-apiserver-bootstrap from kube-system started at 2023-03-07 00:43:25 +0000 UTC (1 container statuses recorded) +Mar 7 02:25:29.777: INFO: Container kube-apiserver ready: true, restart count 0 +Mar 7 02:25:29.777: INFO: kube-controller-manager-bootstrap from kube-system started at 2023-03-07 00:43:33 +0000 UTC (1 container statuses recorded) +Mar 7 02:25:29.777: INFO: Container kube-controller-manager ready: true, restart count 4 +Mar 7 02:25:29.777: INFO: kube-proxy-nlf5t from kube-system started at 2023-03-07 02:23:30 +0000 UTC (1 container statuses recorded) +Mar 7 02:25:29.777: INFO: Container kube-proxy ready: true, restart count 0 +Mar 7 02:25:29.777: INFO: kube-scheduler-bootstrap from kube-system started at 2023-03-07 00:43:34 +0000 UTC (1 container statuses recorded) +Mar 7 02:25:29.777: INFO: Container kube-scheduler ready: true, restart count 3 +Mar 7 02:25:29.777: INFO: metalk8s-operator-controller-manager-7d4764b947-crj2f from kube-system started at 2023-03-07 00:44:32 +0000 UTC (1 container statuses recorded) +Mar 7 02:25:29.777: INFO: Container manager ready: true, restart count 5 +Mar 7 02:25:29.777: INFO: repositories-bootstrap from kube-system started at 2023-03-07 02:07:15 +0000 UTC (1 container statuses recorded) +Mar 7 02:25:29.777: INFO: Container repositories ready: true, restart count 1 +Mar 7 02:25:29.777: INFO: salt-master-bootstrap from kube-system started at 2023-03-07 00:42:29 +0000 UTC (2 container statuses recorded) +Mar 7 02:25:29.777: INFO: Container salt-api ready: true, restart count 0 +Mar 7 02:25:29.777: INFO: Container salt-master ready: true, restart count 0 +Mar 7 02:25:29.777: INFO: storage-operator-78f5dcc84f-jwnzl from kube-system started at 2023-03-07 00:45:28 +0000 UTC (1 container statuses recorded) +Mar 7 02:25:29.777: INFO: Container manager ready: true, restart count 4 +Mar 7 02:25:29.777: INFO: dex-57f9db7c4-hbrhr from metalk8s-auth started at 2023-03-07 02:13:23 +0000 UTC (1 container statuses recorded) +Mar 7 02:25:29.777: INFO: Container dex ready: true, restart count 0 +Mar 7 02:25:29.777: INFO: 
dex-57f9db7c4-z6gh6 from metalk8s-auth started at 2023-03-07 02:13:23 +0000 UTC (1 container statuses recorded) +Mar 7 02:25:29.777: INFO: Container dex ready: true, restart count 0 +Mar 7 02:25:29.777: INFO: ingress-control-plane-managed-vip-n2qb6 from metalk8s-ingress started at 2023-03-07 02:05:37 +0000 UTC (1 container statuses recorded) +Mar 7 02:25:29.777: INFO: Container keepalived ready: true, restart count 0 +Mar 7 02:25:29.777: INFO: ingress-nginx-control-plane-controller-j9hsf from metalk8s-ingress started at 2023-03-07 00:45:27 +0000 UTC (1 container statuses recorded) +Mar 7 02:25:29.777: INFO: Container controller ready: true, restart count 0 +Mar 7 02:25:29.777: INFO: ingress-nginx-controller-vjnvw from metalk8s-ingress started at 2023-03-07 02:10:07 +0000 UTC (1 container statuses recorded) +Mar 7 02:25:29.777: INFO: Container controller ready: true, restart count 0 +Mar 7 02:25:29.777: INFO: ingress-nginx-defaultbackend-75c64bd745-65gwj from metalk8s-ingress started at 2023-03-07 00:45:24 +0000 UTC (1 container statuses recorded) +Mar 7 02:25:29.777: INFO: Container ingress-nginx-default-backend ready: true, restart count 0 +Mar 7 02:25:29.777: INFO: fluent-bit-dzhms from metalk8s-logging started at 2023-03-07 00:45:38 +0000 UTC (1 container statuses recorded) +Mar 7 02:25:29.777: INFO: Container fluent-bit ready: true, restart count 0 +Mar 7 02:25:29.777: INFO: metalk8s-alert-logger-84f87c86d-hflm5 from metalk8s-monitoring started at 2023-03-07 00:45:09 +0000 UTC (1 container statuses recorded) +Mar 7 02:25:29.777: INFO: Container metalk8s-alert-logger ready: true, restart count 0 +Mar 7 02:25:29.777: INFO: prometheus-adapter-6696954b59-qrxtn from metalk8s-monitoring started at 2023-03-07 00:45:34 +0000 UTC (1 container statuses recorded) +Mar 7 02:25:29.777: INFO: Container prometheus-adapter ready: true, restart count 0 +Mar 7 02:25:29.777: INFO: prometheus-operator-kube-state-metrics-f7d5dc499-t4szw from metalk8s-monitoring started at 2023-03-07 00:45:19 +0000 UTC (1 container statuses recorded) +Mar 7 02:25:29.777: INFO: Container kube-state-metrics ready: true, restart count 0 +Mar 7 02:25:29.777: INFO: prometheus-operator-operator-864bc5b5d-8m6lq from metalk8s-monitoring started at 2023-03-07 00:45:19 +0000 UTC (1 container statuses recorded) +Mar 7 02:25:29.777: INFO: Container prometheus-operator ready: true, restart count 0 +Mar 7 02:25:29.777: INFO: prometheus-operator-prometheus-node-exporter-sl4bq from metalk8s-monitoring started at 2023-03-07 00:45:18 +0000 UTC (1 container statuses recorded) +Mar 7 02:25:29.777: INFO: Container node-exporter ready: true, restart count 0 +Mar 7 02:25:29.777: INFO: thanos-query-6b9dc579dd-ctlrl from metalk8s-monitoring started at 2023-03-07 00:45:22 +0000 UTC (1 container statuses recorded) +Mar 7 02:25:29.777: INFO: Container thanos-query ready: true, restart count 0 +Mar 7 02:25:29.777: INFO: metalk8s-ui-766c8b96cd-8cxcs from metalk8s-ui started at 2023-03-07 00:45:30 +0000 UTC (1 container statuses recorded) +Mar 7 02:25:29.777: INFO: Container metalk8s-ui ready: true, restart count 0 +Mar 7 02:25:29.777: INFO: metalk8s-ui-766c8b96cd-tsx5v from metalk8s-ui started at 2023-03-07 00:45:30 +0000 UTC (1 container statuses recorded) +Mar 7 02:25:29.777: INFO: Container metalk8s-ui ready: true, restart count 0 +Mar 7 02:25:29.777: INFO: sonobuoy-systemd-logs-daemon-set-ca72222986ab487c-t8mkz from sonobuoy started at 2023-03-07 02:25:01 +0000 UTC (2 container statuses recorded) +Mar 7 02:25:29.777: INFO: Container sonobuoy-worker 
ready: true, restart count 0 +Mar 7 02:25:29.777: INFO: Container systemd-logs ready: true, restart count 0 +Mar 7 02:25:29.777: INFO: +Logging pods the apiserver thinks is on node node-1 before test +Mar 7 02:25:29.789: INFO: apiserver-proxy-node-1 from kube-system started at 2023-03-07 00:58:52 +0000 UTC (1 container statuses recorded) +Mar 7 02:25:29.789: INFO: Container nginx ready: true, restart count 0 +Mar 7 02:25:29.789: INFO: calico-node-fvlp2 from kube-system started at 2023-03-07 02:23:42 +0000 UTC (1 container statuses recorded) +Mar 7 02:25:29.789: INFO: Container calico-node ready: true, restart count 0 +Mar 7 02:25:29.789: INFO: coredns-5d7b997fcf-z25jb from kube-system started at 2023-03-07 02:09:04 +0000 UTC (1 container statuses recorded) +Mar 7 02:25:29.789: INFO: Container coredns ready: true, restart count 0 +Mar 7 02:25:29.789: INFO: etcd-node-1 from kube-system started at 2023-03-07 00:59:16 +0000 UTC (1 container statuses recorded) +Mar 7 02:25:29.789: INFO: Container etcd ready: true, restart count 1 +Mar 7 02:25:29.789: INFO: kube-apiserver-node-1 from kube-system started at 2023-03-07 01:00:05 +0000 UTC (1 container statuses recorded) +Mar 7 02:25:29.789: INFO: Container kube-apiserver ready: true, restart count 0 +Mar 7 02:25:29.789: INFO: kube-controller-manager-node-1 from kube-system started at 2023-03-07 01:00:17 +0000 UTC (1 container statuses recorded) +Mar 7 02:25:29.789: INFO: Container kube-controller-manager ready: true, restart count 2 +Mar 7 02:25:29.789: INFO: kube-proxy-vpgsc from kube-system started at 2023-03-07 02:23:27 +0000 UTC (1 container statuses recorded) +Mar 7 02:25:29.789: INFO: Container kube-proxy ready: true, restart count 0 +Mar 7 02:25:29.789: INFO: kube-scheduler-node-1 from kube-system started at 2023-03-07 01:00:18 +0000 UTC (1 container statuses recorded) +Mar 7 02:25:29.789: INFO: Container kube-scheduler ready: true, restart count 0 +Mar 7 02:25:29.789: INFO: ingress-control-plane-managed-vip-w2cb9 from metalk8s-ingress started at 2023-03-07 02:05:37 +0000 UTC (1 container statuses recorded) +Mar 7 02:25:29.789: INFO: Container keepalived ready: true, restart count 0 +Mar 7 02:25:29.789: INFO: ingress-nginx-control-plane-controller-ck4wk from metalk8s-ingress started at 2023-03-07 00:59:58 +0000 UTC (1 container statuses recorded) +Mar 7 02:25:29.789: INFO: Container controller ready: true, restart count 0 +Mar 7 02:25:29.789: INFO: ingress-nginx-controller-9b2bj from metalk8s-ingress started at 2023-03-07 02:10:40 +0000 UTC (1 container statuses recorded) +Mar 7 02:25:29.789: INFO: Container controller ready: true, restart count 0 +Mar 7 02:25:29.789: INFO: fluent-bit-4nw7s from metalk8s-logging started at 2023-03-07 00:59:58 +0000 UTC (1 container statuses recorded) +Mar 7 02:25:29.789: INFO: Container fluent-bit ready: true, restart count 0 +Mar 7 02:25:29.789: INFO: loki-0 from metalk8s-logging started at 2023-03-07 01:11:45 +0000 UTC (1 container statuses recorded) +Mar 7 02:25:29.789: INFO: Container single-binary ready: true, restart count 0 +Mar 7 02:25:29.789: INFO: alertmanager-prometheus-operator-alertmanager-0 from metalk8s-monitoring started at 2023-03-07 01:11:00 +0000 UTC (2 container statuses recorded) +Mar 7 02:25:29.789: INFO: Container alertmanager ready: true, restart count 0 +Mar 7 02:25:29.789: INFO: Container config-reloader ready: true, restart count 0 +Mar 7 02:25:29.789: INFO: prometheus-operator-prometheus-node-exporter-4plkr from metalk8s-monitoring started at 2023-03-07 00:58:56 +0000 UTC (1 
container statuses recorded) +Mar 7 02:25:29.789: INFO: Container node-exporter ready: true, restart count 0 +Mar 7 02:25:29.789: INFO: prometheus-prometheus-operator-prometheus-0 from metalk8s-monitoring started at 2023-03-07 01:11:10 +0000 UTC (3 container statuses recorded) +Mar 7 02:25:29.789: INFO: Container config-reloader ready: true, restart count 0 +Mar 7 02:25:29.789: INFO: Container prometheus ready: true, restart count 0 +Mar 7 02:25:29.789: INFO: Container thanos-sidecar ready: true, restart count 0 +Mar 7 02:25:29.789: INFO: sonobuoy-systemd-logs-daemon-set-ca72222986ab487c-gktsb from sonobuoy started at 2023-03-07 02:25:01 +0000 UTC (2 container statuses recorded) +Mar 7 02:25:29.789: INFO: Container sonobuoy-worker ready: true, restart count 0 +Mar 7 02:25:29.789: INFO: Container systemd-logs ready: true, restart count 0 +Mar 7 02:25:29.789: INFO: +Logging pods the apiserver thinks is on node node-2 before test +Mar 7 02:25:29.802: INFO: apiserver-proxy-node-2 from kube-system started at 2023-03-07 01:07:13 +0000 UTC (1 container statuses recorded) +Mar 7 02:25:29.802: INFO: Container nginx ready: true, restart count 0 +Mar 7 02:25:29.802: INFO: calico-node-r7qqp from kube-system started at 2023-03-07 02:23:32 +0000 UTC (1 container statuses recorded) +Mar 7 02:25:29.802: INFO: Container calico-node ready: true, restart count 0 +Mar 7 02:25:29.802: INFO: coredns-5d7b997fcf-4gkfq from kube-system started at 2023-03-07 02:09:10 +0000 UTC (1 container statuses recorded) +Mar 7 02:25:29.802: INFO: Container coredns ready: true, restart count 0 +Mar 7 02:25:29.802: INFO: etcd-node-2 from kube-system started at 2023-03-07 01:08:10 +0000 UTC (1 container statuses recorded) +Mar 7 02:25:29.802: INFO: Container etcd ready: true, restart count 2 +Mar 7 02:25:29.802: INFO: kube-apiserver-node-2 from kube-system started at 2023-03-07 01:09:12 +0000 UTC (1 container statuses recorded) +Mar 7 02:25:29.802: INFO: Container kube-apiserver ready: true, restart count 0 +Mar 7 02:25:29.802: INFO: kube-controller-manager-node-2 from kube-system started at 2023-03-07 01:09:23 +0000 UTC (1 container statuses recorded) +Mar 7 02:25:29.802: INFO: Container kube-controller-manager ready: true, restart count 1 +Mar 7 02:25:29.802: INFO: kube-proxy-wsc86 from kube-system started at 2023-03-07 02:23:33 +0000 UTC (1 container statuses recorded) +Mar 7 02:25:29.802: INFO: Container kube-proxy ready: true, restart count 0 +Mar 7 02:25:29.802: INFO: kube-scheduler-node-2 from kube-system started at 2023-03-07 01:09:23 +0000 UTC (1 container statuses recorded) +Mar 7 02:25:29.802: INFO: Container kube-scheduler ready: true, restart count 0 +Mar 7 02:25:29.802: INFO: ingress-control-plane-managed-vip-5gjbz from metalk8s-ingress started at 2023-03-07 02:05:37 +0000 UTC (1 container statuses recorded) +Mar 7 02:25:29.802: INFO: Container keepalived ready: true, restart count 0 +Mar 7 02:25:29.802: INFO: ingress-nginx-control-plane-controller-rgvzx from metalk8s-ingress started at 2023-03-07 01:09:40 +0000 UTC (1 container statuses recorded) +Mar 7 02:25:29.802: INFO: Container controller ready: true, restart count 0 +Mar 7 02:25:29.802: INFO: ingress-nginx-controller-9l4f9 from metalk8s-ingress started at 2023-03-07 02:09:35 +0000 UTC (1 container statuses recorded) +Mar 7 02:25:29.802: INFO: Container controller ready: true, restart count 0 +Mar 7 02:25:29.802: INFO: fluent-bit-577mn from metalk8s-logging started at 2023-03-07 01:09:40 +0000 UTC (1 container statuses recorded) +Mar 7 02:25:29.802: INFO: 
Container fluent-bit ready: true, restart count 0 +Mar 7 02:25:29.802: INFO: prometheus-operator-grafana-74d86d5965-7wv2f from metalk8s-monitoring started at 2023-03-07 01:58:51 +0000 UTC (3 container statuses recorded) +Mar 7 02:25:29.802: INFO: Container grafana ready: true, restart count 0 +Mar 7 02:25:29.802: INFO: Container grafana-sc-dashboard ready: true, restart count 0 +Mar 7 02:25:29.802: INFO: Container grafana-sc-datasources ready: true, restart count 0 +Mar 7 02:25:29.802: INFO: prometheus-operator-prometheus-node-exporter-vmtsz from metalk8s-monitoring started at 2023-03-07 01:07:17 +0000 UTC (1 container statuses recorded) +Mar 7 02:25:29.802: INFO: Container node-exporter ready: true, restart count 0 +Mar 7 02:25:29.802: INFO: labelsupdate1236565b-c3f4-41d1-aaf7-bad22581b0fc from projected-1675 started at 2023-03-07 02:25:19 +0000 UTC (1 container statuses recorded) +Mar 7 02:25:29.802: INFO: Container client-container ready: true, restart count 0 +Mar 7 02:25:29.802: INFO: sonobuoy from sonobuoy started at 2023-03-07 02:24:57 +0000 UTC (1 container statuses recorded) +Mar 7 02:25:29.802: INFO: Container kube-sonobuoy ready: true, restart count 0 +Mar 7 02:25:29.802: INFO: sonobuoy-e2e-job-441ced38a9a5443b from sonobuoy started at 2023-03-07 02:25:01 +0000 UTC (2 container statuses recorded) +Mar 7 02:25:29.802: INFO: Container e2e ready: true, restart count 0 +Mar 7 02:25:29.802: INFO: Container sonobuoy-worker ready: true, restart count 0 +Mar 7 02:25:29.802: INFO: sonobuoy-systemd-logs-daemon-set-ca72222986ab487c-hbmvq from sonobuoy started at 2023-03-07 02:25:01 +0000 UTC (2 container statuses recorded) +Mar 7 02:25:29.802: INFO: Container sonobuoy-worker ready: true, restart count 0 +Mar 7 02:25:29.802: INFO: Container systemd-logs ready: true, restart count 0 +[It] validates that NodeSelector is respected if not matching [Conformance] + test/e2e/scheduling/predicates.go:438 +STEP: Trying to schedule Pod with nonempty NodeSelector. 03/07/23 02:25:29.802 +STEP: Considering event: +Type = [Warning], Name = [restricted-pod.174a01ed4b48bc3a], Reason = [FailedScheduling], Message = [0/3 nodes are available: 3 node(s) didn't match Pod's node affinity/selector. preemption: 0/3 nodes are available: 3 Preemption is not helpful for scheduling.] 03/07/23 02:25:29.836 +[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] + test/e2e/framework/framework.go:187 +Mar 7 02:25:30.834: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "sched-pred-8025" for this suite. 
03/07/23 02:25:30.836 +[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] + test/e2e/scheduling/predicates.go:83 +{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance]","completed":2,"skipped":21,"failed":0} +------------------------------ +• [1.122 seconds] +[sig-scheduling] SchedulerPredicates [Serial] +test/e2e/scheduling/framework.go:40 + validates that NodeSelector is respected if not matching [Conformance] + test/e2e/scheduling/predicates.go:438 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] + test/e2e/framework/framework.go:186 + STEP: Creating a kubernetes client 03/07/23 02:25:29.74 + Mar 7 02:25:29.740: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 + STEP: Building a namespace api object, basename sched-pred 03/07/23 02:25:29.742 + STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 02:25:29.753 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 02:25:29.755 + [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] + test/e2e/scheduling/predicates.go:92 + Mar 7 02:25:29.756: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready + Mar 7 02:25:29.762: INFO: Waiting for terminating namespaces to be deleted... + Mar 7 02:25:29.764: INFO: + Logging pods the apiserver thinks is on node bootstrap before test + Mar 7 02:25:29.777: INFO: apiserver-proxy-bootstrap from kube-system started at 2023-03-07 00:42:52 +0000 UTC (1 container statuses recorded) + Mar 7 02:25:29.777: INFO: Container nginx ready: true, restart count 0 + Mar 7 02:25:29.777: INFO: backup-747d8c577b-wdcvl from kube-system started at 2023-03-07 00:44:32 +0000 UTC (1 container statuses recorded) + Mar 7 02:25:29.777: INFO: Container backup ready: true, restart count 0 + Mar 7 02:25:29.777: INFO: backup-replication-wkdpp-lt4dt from kube-system started at 2023-03-07 00:47:50 +0000 UTC (1 container statuses recorded) + Mar 7 02:25:29.777: INFO: Container backup-replication ready: false, restart count 0 + Mar 7 02:25:29.777: INFO: calico-kube-controllers-59685599d8-pvn74 from kube-system started at 2023-03-07 00:44:32 +0000 UTC (1 container statuses recorded) + Mar 7 02:25:29.777: INFO: Container calico-kube-controllers ready: true, restart count 0 + Mar 7 02:25:29.777: INFO: calico-node-mlncm from kube-system started at 2023-03-07 02:23:53 +0000 UTC (1 container statuses recorded) + Mar 7 02:25:29.777: INFO: Container calico-node ready: true, restart count 0 + Mar 7 02:25:29.777: INFO: etcd-bootstrap from kube-system started at 2023-03-07 00:43:13 +0000 UTC (1 container statuses recorded) + Mar 7 02:25:29.777: INFO: Container etcd ready: true, restart count 0 + Mar 7 02:25:29.777: INFO: kube-apiserver-bootstrap from kube-system started at 2023-03-07 00:43:25 +0000 UTC (1 container statuses recorded) + Mar 7 02:25:29.777: INFO: Container kube-apiserver ready: true, restart count 0 + Mar 7 02:25:29.777: INFO: kube-controller-manager-bootstrap from kube-system started at 2023-03-07 00:43:33 +0000 UTC (1 container statuses recorded) + Mar 7 02:25:29.777: INFO: Container kube-controller-manager ready: true, restart count 4 + Mar 7 02:25:29.777: INFO: kube-proxy-nlf5t from kube-system started at 2023-03-07 02:23:30 +0000 UTC (1 container statuses recorded) + Mar 7 02:25:29.777: INFO: Container kube-proxy ready: true, restart count 0 + Mar 7 02:25:29.777: INFO: kube-scheduler-bootstrap from kube-system started at 2023-03-07 
00:43:34 +0000 UTC (1 container statuses recorded) + Mar 7 02:25:29.777: INFO: Container kube-scheduler ready: true, restart count 3 + Mar 7 02:25:29.777: INFO: metalk8s-operator-controller-manager-7d4764b947-crj2f from kube-system started at 2023-03-07 00:44:32 +0000 UTC (1 container statuses recorded) + Mar 7 02:25:29.777: INFO: Container manager ready: true, restart count 5 + Mar 7 02:25:29.777: INFO: repositories-bootstrap from kube-system started at 2023-03-07 02:07:15 +0000 UTC (1 container statuses recorded) + Mar 7 02:25:29.777: INFO: Container repositories ready: true, restart count 1 + Mar 7 02:25:29.777: INFO: salt-master-bootstrap from kube-system started at 2023-03-07 00:42:29 +0000 UTC (2 container statuses recorded) + Mar 7 02:25:29.777: INFO: Container salt-api ready: true, restart count 0 + Mar 7 02:25:29.777: INFO: Container salt-master ready: true, restart count 0 + Mar 7 02:25:29.777: INFO: storage-operator-78f5dcc84f-jwnzl from kube-system started at 2023-03-07 00:45:28 +0000 UTC (1 container statuses recorded) + Mar 7 02:25:29.777: INFO: Container manager ready: true, restart count 4 + Mar 7 02:25:29.777: INFO: dex-57f9db7c4-hbrhr from metalk8s-auth started at 2023-03-07 02:13:23 +0000 UTC (1 container statuses recorded) + Mar 7 02:25:29.777: INFO: Container dex ready: true, restart count 0 + Mar 7 02:25:29.777: INFO: dex-57f9db7c4-z6gh6 from metalk8s-auth started at 2023-03-07 02:13:23 +0000 UTC (1 container statuses recorded) + Mar 7 02:25:29.777: INFO: Container dex ready: true, restart count 0 + Mar 7 02:25:29.777: INFO: ingress-control-plane-managed-vip-n2qb6 from metalk8s-ingress started at 2023-03-07 02:05:37 +0000 UTC (1 container statuses recorded) + Mar 7 02:25:29.777: INFO: Container keepalived ready: true, restart count 0 + Mar 7 02:25:29.777: INFO: ingress-nginx-control-plane-controller-j9hsf from metalk8s-ingress started at 2023-03-07 00:45:27 +0000 UTC (1 container statuses recorded) + Mar 7 02:25:29.777: INFO: Container controller ready: true, restart count 0 + Mar 7 02:25:29.777: INFO: ingress-nginx-controller-vjnvw from metalk8s-ingress started at 2023-03-07 02:10:07 +0000 UTC (1 container statuses recorded) + Mar 7 02:25:29.777: INFO: Container controller ready: true, restart count 0 + Mar 7 02:25:29.777: INFO: ingress-nginx-defaultbackend-75c64bd745-65gwj from metalk8s-ingress started at 2023-03-07 00:45:24 +0000 UTC (1 container statuses recorded) + Mar 7 02:25:29.777: INFO: Container ingress-nginx-default-backend ready: true, restart count 0 + Mar 7 02:25:29.777: INFO: fluent-bit-dzhms from metalk8s-logging started at 2023-03-07 00:45:38 +0000 UTC (1 container statuses recorded) + Mar 7 02:25:29.777: INFO: Container fluent-bit ready: true, restart count 0 + Mar 7 02:25:29.777: INFO: metalk8s-alert-logger-84f87c86d-hflm5 from metalk8s-monitoring started at 2023-03-07 00:45:09 +0000 UTC (1 container statuses recorded) + Mar 7 02:25:29.777: INFO: Container metalk8s-alert-logger ready: true, restart count 0 + Mar 7 02:25:29.777: INFO: prometheus-adapter-6696954b59-qrxtn from metalk8s-monitoring started at 2023-03-07 00:45:34 +0000 UTC (1 container statuses recorded) + Mar 7 02:25:29.777: INFO: Container prometheus-adapter ready: true, restart count 0 + Mar 7 02:25:29.777: INFO: prometheus-operator-kube-state-metrics-f7d5dc499-t4szw from metalk8s-monitoring started at 2023-03-07 00:45:19 +0000 UTC (1 container statuses recorded) + Mar 7 02:25:29.777: INFO: Container kube-state-metrics ready: true, restart count 0 + Mar 7 02:25:29.777: INFO: 
prometheus-operator-operator-864bc5b5d-8m6lq from metalk8s-monitoring started at 2023-03-07 00:45:19 +0000 UTC (1 container statuses recorded) + Mar 7 02:25:29.777: INFO: Container prometheus-operator ready: true, restart count 0 + Mar 7 02:25:29.777: INFO: prometheus-operator-prometheus-node-exporter-sl4bq from metalk8s-monitoring started at 2023-03-07 00:45:18 +0000 UTC (1 container statuses recorded) + Mar 7 02:25:29.777: INFO: Container node-exporter ready: true, restart count 0 + Mar 7 02:25:29.777: INFO: thanos-query-6b9dc579dd-ctlrl from metalk8s-monitoring started at 2023-03-07 00:45:22 +0000 UTC (1 container statuses recorded) + Mar 7 02:25:29.777: INFO: Container thanos-query ready: true, restart count 0 + Mar 7 02:25:29.777: INFO: metalk8s-ui-766c8b96cd-8cxcs from metalk8s-ui started at 2023-03-07 00:45:30 +0000 UTC (1 container statuses recorded) + Mar 7 02:25:29.777: INFO: Container metalk8s-ui ready: true, restart count 0 + Mar 7 02:25:29.777: INFO: metalk8s-ui-766c8b96cd-tsx5v from metalk8s-ui started at 2023-03-07 00:45:30 +0000 UTC (1 container statuses recorded) + Mar 7 02:25:29.777: INFO: Container metalk8s-ui ready: true, restart count 0 + Mar 7 02:25:29.777: INFO: sonobuoy-systemd-logs-daemon-set-ca72222986ab487c-t8mkz from sonobuoy started at 2023-03-07 02:25:01 +0000 UTC (2 container statuses recorded) + Mar 7 02:25:29.777: INFO: Container sonobuoy-worker ready: true, restart count 0 + Mar 7 02:25:29.777: INFO: Container systemd-logs ready: true, restart count 0 + Mar 7 02:25:29.777: INFO: + Logging pods the apiserver thinks is on node node-1 before test + Mar 7 02:25:29.789: INFO: apiserver-proxy-node-1 from kube-system started at 2023-03-07 00:58:52 +0000 UTC (1 container statuses recorded) + Mar 7 02:25:29.789: INFO: Container nginx ready: true, restart count 0 + Mar 7 02:25:29.789: INFO: calico-node-fvlp2 from kube-system started at 2023-03-07 02:23:42 +0000 UTC (1 container statuses recorded) + Mar 7 02:25:29.789: INFO: Container calico-node ready: true, restart count 0 + Mar 7 02:25:29.789: INFO: coredns-5d7b997fcf-z25jb from kube-system started at 2023-03-07 02:09:04 +0000 UTC (1 container statuses recorded) + Mar 7 02:25:29.789: INFO: Container coredns ready: true, restart count 0 + Mar 7 02:25:29.789: INFO: etcd-node-1 from kube-system started at 2023-03-07 00:59:16 +0000 UTC (1 container statuses recorded) + Mar 7 02:25:29.789: INFO: Container etcd ready: true, restart count 1 + Mar 7 02:25:29.789: INFO: kube-apiserver-node-1 from kube-system started at 2023-03-07 01:00:05 +0000 UTC (1 container statuses recorded) + Mar 7 02:25:29.789: INFO: Container kube-apiserver ready: true, restart count 0 + Mar 7 02:25:29.789: INFO: kube-controller-manager-node-1 from kube-system started at 2023-03-07 01:00:17 +0000 UTC (1 container statuses recorded) + Mar 7 02:25:29.789: INFO: Container kube-controller-manager ready: true, restart count 2 + Mar 7 02:25:29.789: INFO: kube-proxy-vpgsc from kube-system started at 2023-03-07 02:23:27 +0000 UTC (1 container statuses recorded) + Mar 7 02:25:29.789: INFO: Container kube-proxy ready: true, restart count 0 + Mar 7 02:25:29.789: INFO: kube-scheduler-node-1 from kube-system started at 2023-03-07 01:00:18 +0000 UTC (1 container statuses recorded) + Mar 7 02:25:29.789: INFO: Container kube-scheduler ready: true, restart count 0 + Mar 7 02:25:29.789: INFO: ingress-control-plane-managed-vip-w2cb9 from metalk8s-ingress started at 2023-03-07 02:05:37 +0000 UTC (1 container statuses recorded) + Mar 7 02:25:29.789: INFO: Container 
keepalived ready: true, restart count 0 + Mar 7 02:25:29.789: INFO: ingress-nginx-control-plane-controller-ck4wk from metalk8s-ingress started at 2023-03-07 00:59:58 +0000 UTC (1 container statuses recorded) + Mar 7 02:25:29.789: INFO: Container controller ready: true, restart count 0 + Mar 7 02:25:29.789: INFO: ingress-nginx-controller-9b2bj from metalk8s-ingress started at 2023-03-07 02:10:40 +0000 UTC (1 container statuses recorded) + Mar 7 02:25:29.789: INFO: Container controller ready: true, restart count 0 + Mar 7 02:25:29.789: INFO: fluent-bit-4nw7s from metalk8s-logging started at 2023-03-07 00:59:58 +0000 UTC (1 container statuses recorded) + Mar 7 02:25:29.789: INFO: Container fluent-bit ready: true, restart count 0 + Mar 7 02:25:29.789: INFO: loki-0 from metalk8s-logging started at 2023-03-07 01:11:45 +0000 UTC (1 container statuses recorded) + Mar 7 02:25:29.789: INFO: Container single-binary ready: true, restart count 0 + Mar 7 02:25:29.789: INFO: alertmanager-prometheus-operator-alertmanager-0 from metalk8s-monitoring started at 2023-03-07 01:11:00 +0000 UTC (2 container statuses recorded) + Mar 7 02:25:29.789: INFO: Container alertmanager ready: true, restart count 0 + Mar 7 02:25:29.789: INFO: Container config-reloader ready: true, restart count 0 + Mar 7 02:25:29.789: INFO: prometheus-operator-prometheus-node-exporter-4plkr from metalk8s-monitoring started at 2023-03-07 00:58:56 +0000 UTC (1 container statuses recorded) + Mar 7 02:25:29.789: INFO: Container node-exporter ready: true, restart count 0 + Mar 7 02:25:29.789: INFO: prometheus-prometheus-operator-prometheus-0 from metalk8s-monitoring started at 2023-03-07 01:11:10 +0000 UTC (3 container statuses recorded) + Mar 7 02:25:29.789: INFO: Container config-reloader ready: true, restart count 0 + Mar 7 02:25:29.789: INFO: Container prometheus ready: true, restart count 0 + Mar 7 02:25:29.789: INFO: Container thanos-sidecar ready: true, restart count 0 + Mar 7 02:25:29.789: INFO: sonobuoy-systemd-logs-daemon-set-ca72222986ab487c-gktsb from sonobuoy started at 2023-03-07 02:25:01 +0000 UTC (2 container statuses recorded) + Mar 7 02:25:29.789: INFO: Container sonobuoy-worker ready: true, restart count 0 + Mar 7 02:25:29.789: INFO: Container systemd-logs ready: true, restart count 0 + Mar 7 02:25:29.789: INFO: + Logging pods the apiserver thinks is on node node-2 before test + Mar 7 02:25:29.802: INFO: apiserver-proxy-node-2 from kube-system started at 2023-03-07 01:07:13 +0000 UTC (1 container statuses recorded) + Mar 7 02:25:29.802: INFO: Container nginx ready: true, restart count 0 + Mar 7 02:25:29.802: INFO: calico-node-r7qqp from kube-system started at 2023-03-07 02:23:32 +0000 UTC (1 container statuses recorded) + Mar 7 02:25:29.802: INFO: Container calico-node ready: true, restart count 0 + Mar 7 02:25:29.802: INFO: coredns-5d7b997fcf-4gkfq from kube-system started at 2023-03-07 02:09:10 +0000 UTC (1 container statuses recorded) + Mar 7 02:25:29.802: INFO: Container coredns ready: true, restart count 0 + Mar 7 02:25:29.802: INFO: etcd-node-2 from kube-system started at 2023-03-07 01:08:10 +0000 UTC (1 container statuses recorded) + Mar 7 02:25:29.802: INFO: Container etcd ready: true, restart count 2 + Mar 7 02:25:29.802: INFO: kube-apiserver-node-2 from kube-system started at 2023-03-07 01:09:12 +0000 UTC (1 container statuses recorded) + Mar 7 02:25:29.802: INFO: Container kube-apiserver ready: true, restart count 0 + Mar 7 02:25:29.802: INFO: kube-controller-manager-node-2 from kube-system started at 2023-03-07 
01:09:23 +0000 UTC (1 container statuses recorded) + Mar 7 02:25:29.802: INFO: Container kube-controller-manager ready: true, restart count 1 + Mar 7 02:25:29.802: INFO: kube-proxy-wsc86 from kube-system started at 2023-03-07 02:23:33 +0000 UTC (1 container statuses recorded) + Mar 7 02:25:29.802: INFO: Container kube-proxy ready: true, restart count 0 + Mar 7 02:25:29.802: INFO: kube-scheduler-node-2 from kube-system started at 2023-03-07 01:09:23 +0000 UTC (1 container statuses recorded) + Mar 7 02:25:29.802: INFO: Container kube-scheduler ready: true, restart count 0 + Mar 7 02:25:29.802: INFO: ingress-control-plane-managed-vip-5gjbz from metalk8s-ingress started at 2023-03-07 02:05:37 +0000 UTC (1 container statuses recorded) + Mar 7 02:25:29.802: INFO: Container keepalived ready: true, restart count 0 + Mar 7 02:25:29.802: INFO: ingress-nginx-control-plane-controller-rgvzx from metalk8s-ingress started at 2023-03-07 01:09:40 +0000 UTC (1 container statuses recorded) + Mar 7 02:25:29.802: INFO: Container controller ready: true, restart count 0 + Mar 7 02:25:29.802: INFO: ingress-nginx-controller-9l4f9 from metalk8s-ingress started at 2023-03-07 02:09:35 +0000 UTC (1 container statuses recorded) + Mar 7 02:25:29.802: INFO: Container controller ready: true, restart count 0 + Mar 7 02:25:29.802: INFO: fluent-bit-577mn from metalk8s-logging started at 2023-03-07 01:09:40 +0000 UTC (1 container statuses recorded) + Mar 7 02:25:29.802: INFO: Container fluent-bit ready: true, restart count 0 + Mar 7 02:25:29.802: INFO: prometheus-operator-grafana-74d86d5965-7wv2f from metalk8s-monitoring started at 2023-03-07 01:58:51 +0000 UTC (3 container statuses recorded) + Mar 7 02:25:29.802: INFO: Container grafana ready: true, restart count 0 + Mar 7 02:25:29.802: INFO: Container grafana-sc-dashboard ready: true, restart count 0 + Mar 7 02:25:29.802: INFO: Container grafana-sc-datasources ready: true, restart count 0 + Mar 7 02:25:29.802: INFO: prometheus-operator-prometheus-node-exporter-vmtsz from metalk8s-monitoring started at 2023-03-07 01:07:17 +0000 UTC (1 container statuses recorded) + Mar 7 02:25:29.802: INFO: Container node-exporter ready: true, restart count 0 + Mar 7 02:25:29.802: INFO: labelsupdate1236565b-c3f4-41d1-aaf7-bad22581b0fc from projected-1675 started at 2023-03-07 02:25:19 +0000 UTC (1 container statuses recorded) + Mar 7 02:25:29.802: INFO: Container client-container ready: true, restart count 0 + Mar 7 02:25:29.802: INFO: sonobuoy from sonobuoy started at 2023-03-07 02:24:57 +0000 UTC (1 container statuses recorded) + Mar 7 02:25:29.802: INFO: Container kube-sonobuoy ready: true, restart count 0 + Mar 7 02:25:29.802: INFO: sonobuoy-e2e-job-441ced38a9a5443b from sonobuoy started at 2023-03-07 02:25:01 +0000 UTC (2 container statuses recorded) + Mar 7 02:25:29.802: INFO: Container e2e ready: true, restart count 0 + Mar 7 02:25:29.802: INFO: Container sonobuoy-worker ready: true, restart count 0 + Mar 7 02:25:29.802: INFO: sonobuoy-systemd-logs-daemon-set-ca72222986ab487c-hbmvq from sonobuoy started at 2023-03-07 02:25:01 +0000 UTC (2 container statuses recorded) + Mar 7 02:25:29.802: INFO: Container sonobuoy-worker ready: true, restart count 0 + Mar 7 02:25:29.802: INFO: Container systemd-logs ready: true, restart count 0 + [It] validates that NodeSelector is respected if not matching [Conformance] + test/e2e/scheduling/predicates.go:438 + STEP: Trying to schedule Pod with nonempty NodeSelector. 
03/07/23 02:25:29.802 + STEP: Considering event: + Type = [Warning], Name = [restricted-pod.174a01ed4b48bc3a], Reason = [FailedScheduling], Message = [0/3 nodes are available: 3 node(s) didn't match Pod's node affinity/selector. preemption: 0/3 nodes are available: 3 Preemption is not helpful for scheduling.] 03/07/23 02:25:29.836 + [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] + test/e2e/framework/framework.go:187 + Mar 7 02:25:30.834: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + STEP: Destroying namespace "sched-pred-8025" for this suite. 03/07/23 02:25:30.836 + [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] + test/e2e/scheduling/predicates.go:83 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-storage] Subpath Atomic writer volumes + should support subpaths with downward pod [Conformance] + test/e2e/storage/subpath.go:92 +[BeforeEach] [sig-storage] Subpath + test/e2e/framework/framework.go:186 +STEP: Creating a kubernetes client 03/07/23 02:25:30.863 +Mar 7 02:25:30.864: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 +STEP: Building a namespace api object, basename subpath 03/07/23 02:25:30.864 +STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 02:25:30.878 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 02:25:30.88 +[BeforeEach] Atomic writer volumes + test/e2e/storage/subpath.go:40 +STEP: Setting up data 03/07/23 02:25:30.882 +[It] should support subpaths with downward pod [Conformance] + test/e2e/storage/subpath.go:92 +STEP: Creating pod pod-subpath-test-downwardapi-m6rs 03/07/23 02:25:30.888 +STEP: Creating a pod to test atomic-volume-subpath 03/07/23 02:25:30.888 +Mar 7 02:25:30.896: INFO: Waiting up to 5m0s for pod "pod-subpath-test-downwardapi-m6rs" in namespace "subpath-2042" to be "Succeeded or Failed" +Mar 7 02:25:30.898: INFO: Pod "pod-subpath-test-downwardapi-m6rs": Phase="Pending", Reason="", readiness=false. Elapsed: 2.637216ms +Mar 7 02:25:32.902: INFO: Pod "pod-subpath-test-downwardapi-m6rs": Phase="Running", Reason="", readiness=true. Elapsed: 2.006750462s +Mar 7 02:25:34.901: INFO: Pod "pod-subpath-test-downwardapi-m6rs": Phase="Running", Reason="", readiness=true. Elapsed: 4.005175481s +Mar 7 02:25:36.902: INFO: Pod "pod-subpath-test-downwardapi-m6rs": Phase="Running", Reason="", readiness=true. Elapsed: 6.006179487s +Mar 7 02:25:38.901: INFO: Pod "pod-subpath-test-downwardapi-m6rs": Phase="Running", Reason="", readiness=true. Elapsed: 8.005612059s +Mar 7 02:25:40.902: INFO: Pod "pod-subpath-test-downwardapi-m6rs": Phase="Running", Reason="", readiness=true. Elapsed: 10.006213739s +Mar 7 02:25:42.902: INFO: Pod "pod-subpath-test-downwardapi-m6rs": Phase="Running", Reason="", readiness=true. Elapsed: 12.006118771s +Mar 7 02:25:44.901: INFO: Pod "pod-subpath-test-downwardapi-m6rs": Phase="Running", Reason="", readiness=true. Elapsed: 14.005711766s +Mar 7 02:25:46.904: INFO: Pod "pod-subpath-test-downwardapi-m6rs": Phase="Running", Reason="", readiness=true. Elapsed: 16.008019299s +Mar 7 02:25:48.902: INFO: Pod "pod-subpath-test-downwardapi-m6rs": Phase="Running", Reason="", readiness=true. Elapsed: 18.006646085s +Mar 7 02:25:50.903: INFO: Pod "pod-subpath-test-downwardapi-m6rs": Phase="Running", Reason="", readiness=true. Elapsed: 20.007298958s +Mar 7 02:25:52.902: INFO: Pod "pod-subpath-test-downwardapi-m6rs": Phase="Running", Reason="", readiness=false. 
Elapsed: 22.006521874s +Mar 7 02:25:54.902: INFO: Pod "pod-subpath-test-downwardapi-m6rs": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.006191404s +STEP: Saw pod success 03/07/23 02:25:54.902 +Mar 7 02:25:54.902: INFO: Pod "pod-subpath-test-downwardapi-m6rs" satisfied condition "Succeeded or Failed" +Mar 7 02:25:54.904: INFO: Trying to get logs from node node-2 pod pod-subpath-test-downwardapi-m6rs container test-container-subpath-downwardapi-m6rs: +STEP: delete the pod 03/07/23 02:25:54.91 +Mar 7 02:25:54.920: INFO: Waiting for pod pod-subpath-test-downwardapi-m6rs to disappear +Mar 7 02:25:54.922: INFO: Pod pod-subpath-test-downwardapi-m6rs no longer exists +STEP: Deleting pod pod-subpath-test-downwardapi-m6rs 03/07/23 02:25:54.922 +Mar 7 02:25:54.922: INFO: Deleting pod "pod-subpath-test-downwardapi-m6rs" in namespace "subpath-2042" +[AfterEach] [sig-storage] Subpath + test/e2e/framework/framework.go:187 +Mar 7 02:25:54.925: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "subpath-2042" for this suite. 03/07/23 02:25:54.928 +{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [Conformance]","completed":3,"skipped":46,"failed":0} +------------------------------ +• [SLOW TEST] [24.070 seconds] +[sig-storage] Subpath +test/e2e/storage/utils/framework.go:23 + Atomic writer volumes + test/e2e/storage/subpath.go:36 + should support subpaths with downward pod [Conformance] + test/e2e/storage/subpath.go:92 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-storage] Subpath + test/e2e/framework/framework.go:186 + STEP: Creating a kubernetes client 03/07/23 02:25:30.863 + Mar 7 02:25:30.864: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 + STEP: Building a namespace api object, basename subpath 03/07/23 02:25:30.864 + STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 02:25:30.878 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 02:25:30.88 + [BeforeEach] Atomic writer volumes + test/e2e/storage/subpath.go:40 + STEP: Setting up data 03/07/23 02:25:30.882 + [It] should support subpaths with downward pod [Conformance] + test/e2e/storage/subpath.go:92 + STEP: Creating pod pod-subpath-test-downwardapi-m6rs 03/07/23 02:25:30.888 + STEP: Creating a pod to test atomic-volume-subpath 03/07/23 02:25:30.888 + Mar 7 02:25:30.896: INFO: Waiting up to 5m0s for pod "pod-subpath-test-downwardapi-m6rs" in namespace "subpath-2042" to be "Succeeded or Failed" + Mar 7 02:25:30.898: INFO: Pod "pod-subpath-test-downwardapi-m6rs": Phase="Pending", Reason="", readiness=false. Elapsed: 2.637216ms + Mar 7 02:25:32.902: INFO: Pod "pod-subpath-test-downwardapi-m6rs": Phase="Running", Reason="", readiness=true. Elapsed: 2.006750462s + Mar 7 02:25:34.901: INFO: Pod "pod-subpath-test-downwardapi-m6rs": Phase="Running", Reason="", readiness=true. Elapsed: 4.005175481s + Mar 7 02:25:36.902: INFO: Pod "pod-subpath-test-downwardapi-m6rs": Phase="Running", Reason="", readiness=true. Elapsed: 6.006179487s + Mar 7 02:25:38.901: INFO: Pod "pod-subpath-test-downwardapi-m6rs": Phase="Running", Reason="", readiness=true. Elapsed: 8.005612059s + Mar 7 02:25:40.902: INFO: Pod "pod-subpath-test-downwardapi-m6rs": Phase="Running", Reason="", readiness=true. Elapsed: 10.006213739s + Mar 7 02:25:42.902: INFO: Pod "pod-subpath-test-downwardapi-m6rs": Phase="Running", Reason="", readiness=true. 
Elapsed: 12.006118771s + Mar 7 02:25:44.901: INFO: Pod "pod-subpath-test-downwardapi-m6rs": Phase="Running", Reason="", readiness=true. Elapsed: 14.005711766s + Mar 7 02:25:46.904: INFO: Pod "pod-subpath-test-downwardapi-m6rs": Phase="Running", Reason="", readiness=true. Elapsed: 16.008019299s + Mar 7 02:25:48.902: INFO: Pod "pod-subpath-test-downwardapi-m6rs": Phase="Running", Reason="", readiness=true. Elapsed: 18.006646085s + Mar 7 02:25:50.903: INFO: Pod "pod-subpath-test-downwardapi-m6rs": Phase="Running", Reason="", readiness=true. Elapsed: 20.007298958s + Mar 7 02:25:52.902: INFO: Pod "pod-subpath-test-downwardapi-m6rs": Phase="Running", Reason="", readiness=false. Elapsed: 22.006521874s + Mar 7 02:25:54.902: INFO: Pod "pod-subpath-test-downwardapi-m6rs": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.006191404s + STEP: Saw pod success 03/07/23 02:25:54.902 + Mar 7 02:25:54.902: INFO: Pod "pod-subpath-test-downwardapi-m6rs" satisfied condition "Succeeded or Failed" + Mar 7 02:25:54.904: INFO: Trying to get logs from node node-2 pod pod-subpath-test-downwardapi-m6rs container test-container-subpath-downwardapi-m6rs: + STEP: delete the pod 03/07/23 02:25:54.91 + Mar 7 02:25:54.920: INFO: Waiting for pod pod-subpath-test-downwardapi-m6rs to disappear + Mar 7 02:25:54.922: INFO: Pod pod-subpath-test-downwardapi-m6rs no longer exists + STEP: Deleting pod pod-subpath-test-downwardapi-m6rs 03/07/23 02:25:54.922 + Mar 7 02:25:54.922: INFO: Deleting pod "pod-subpath-test-downwardapi-m6rs" in namespace "subpath-2042" + [AfterEach] [sig-storage] Subpath + test/e2e/framework/framework.go:187 + Mar 7 02:25:54.925: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + STEP: Destroying namespace "subpath-2042" for this suite. 03/07/23 02:25:54.928 + << End Captured GinkgoWriter Output +------------------------------ +S +------------------------------ +[sig-node] Variable Expansion + should allow substituting values in a volume subpath [Conformance] + test/e2e/common/node/expansion.go:111 +[BeforeEach] [sig-node] Variable Expansion + test/e2e/framework/framework.go:186 +STEP: Creating a kubernetes client 03/07/23 02:25:54.933 +Mar 7 02:25:54.933: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 +STEP: Building a namespace api object, basename var-expansion 03/07/23 02:25:54.934 +STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 02:25:54.948 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 02:25:54.95 +[It] should allow substituting values in a volume subpath [Conformance] + test/e2e/common/node/expansion.go:111 +STEP: Creating a pod to test substitution in volume subpath 03/07/23 02:25:54.951 +Mar 7 02:25:54.958: INFO: Waiting up to 5m0s for pod "var-expansion-84b81f26-33af-4279-ac7e-25fce782c03b" in namespace "var-expansion-1479" to be "Succeeded or Failed" +Mar 7 02:25:54.961: INFO: Pod "var-expansion-84b81f26-33af-4279-ac7e-25fce782c03b": Phase="Pending", Reason="", readiness=false. Elapsed: 3.023424ms +Mar 7 02:25:56.965: INFO: Pod "var-expansion-84b81f26-33af-4279-ac7e-25fce782c03b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006688332s +Mar 7 02:25:58.965: INFO: Pod "var-expansion-84b81f26-33af-4279-ac7e-25fce782c03b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.006141005s +Mar 7 02:26:00.964: INFO: Pod "var-expansion-84b81f26-33af-4279-ac7e-25fce782c03b": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.005769197s +STEP: Saw pod success 03/07/23 02:26:00.964 +Mar 7 02:26:00.964: INFO: Pod "var-expansion-84b81f26-33af-4279-ac7e-25fce782c03b" satisfied condition "Succeeded or Failed" +Mar 7 02:26:00.966: INFO: Trying to get logs from node node-2 pod var-expansion-84b81f26-33af-4279-ac7e-25fce782c03b container dapi-container: +STEP: delete the pod 03/07/23 02:26:00.971 +Mar 7 02:26:01.002: INFO: Waiting for pod var-expansion-84b81f26-33af-4279-ac7e-25fce782c03b to disappear +Mar 7 02:26:01.004: INFO: Pod var-expansion-84b81f26-33af-4279-ac7e-25fce782c03b no longer exists +[AfterEach] [sig-node] Variable Expansion + test/e2e/framework/framework.go:187 +Mar 7 02:26:01.004: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "var-expansion-1479" for this suite. 03/07/23 02:26:01.006 +{"msg":"PASSED [sig-node] Variable Expansion should allow substituting values in a volume subpath [Conformance]","completed":4,"skipped":47,"failed":0} +------------------------------ +• [SLOW TEST] [6.078 seconds] +[sig-node] Variable Expansion +test/e2e/common/node/framework.go:23 + should allow substituting values in a volume subpath [Conformance] + test/e2e/common/node/expansion.go:111 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-node] Variable Expansion + test/e2e/framework/framework.go:186 + STEP: Creating a kubernetes client 03/07/23 02:25:54.933 + Mar 7 02:25:54.933: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 + STEP: Building a namespace api object, basename var-expansion 03/07/23 02:25:54.934 + STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 02:25:54.948 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 02:25:54.95 + [It] should allow substituting values in a volume subpath [Conformance] + test/e2e/common/node/expansion.go:111 + STEP: Creating a pod to test substitution in volume subpath 03/07/23 02:25:54.951 + Mar 7 02:25:54.958: INFO: Waiting up to 5m0s for pod "var-expansion-84b81f26-33af-4279-ac7e-25fce782c03b" in namespace "var-expansion-1479" to be "Succeeded or Failed" + Mar 7 02:25:54.961: INFO: Pod "var-expansion-84b81f26-33af-4279-ac7e-25fce782c03b": Phase="Pending", Reason="", readiness=false. Elapsed: 3.023424ms + Mar 7 02:25:56.965: INFO: Pod "var-expansion-84b81f26-33af-4279-ac7e-25fce782c03b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006688332s + Mar 7 02:25:58.965: INFO: Pod "var-expansion-84b81f26-33af-4279-ac7e-25fce782c03b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.006141005s + Mar 7 02:26:00.964: INFO: Pod "var-expansion-84b81f26-33af-4279-ac7e-25fce782c03b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.005769197s + STEP: Saw pod success 03/07/23 02:26:00.964 + Mar 7 02:26:00.964: INFO: Pod "var-expansion-84b81f26-33af-4279-ac7e-25fce782c03b" satisfied condition "Succeeded or Failed" + Mar 7 02:26:00.966: INFO: Trying to get logs from node node-2 pod var-expansion-84b81f26-33af-4279-ac7e-25fce782c03b container dapi-container: + STEP: delete the pod 03/07/23 02:26:00.971 + Mar 7 02:26:01.002: INFO: Waiting for pod var-expansion-84b81f26-33af-4279-ac7e-25fce782c03b to disappear + Mar 7 02:26:01.004: INFO: Pod var-expansion-84b81f26-33af-4279-ac7e-25fce782c03b no longer exists + [AfterEach] [sig-node] Variable Expansion + test/e2e/framework/framework.go:187 + Mar 7 02:26:01.004: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + STEP: Destroying namespace "var-expansion-1479" for this suite. 
03/07/23 02:26:01.006 + << End Captured GinkgoWriter Output +------------------------------ +SSS +------------------------------ +[sig-network] DNS + should provide DNS for ExternalName services [Conformance] + test/e2e/network/dns.go:333 +[BeforeEach] [sig-network] DNS + test/e2e/framework/framework.go:186 +STEP: Creating a kubernetes client 03/07/23 02:26:01.011 +Mar 7 02:26:01.012: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 +STEP: Building a namespace api object, basename dns 03/07/23 02:26:01.012 +STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 02:26:01.025 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 02:26:01.03 +[It] should provide DNS for ExternalName services [Conformance] + test/e2e/network/dns.go:333 +STEP: Creating a test externalName service 03/07/23 02:26:01.032 +STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-2467.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-2467.svc.cluster.local; sleep 1; done + 03/07/23 02:26:01.036 +STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-2467.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-2467.svc.cluster.local; sleep 1; done + 03/07/23 02:26:01.036 +STEP: creating a pod to probe DNS 03/07/23 02:26:01.036 +STEP: submitting the pod to kubernetes 03/07/23 02:26:01.036 +Mar 7 02:26:01.094: INFO: Waiting up to 15m0s for pod "dns-test-d38692cd-1b54-4a4c-ba17-71c230de4392" in namespace "dns-2467" to be "running" +Mar 7 02:26:01.101: INFO: Pod "dns-test-d38692cd-1b54-4a4c-ba17-71c230de4392": Phase="Pending", Reason="", readiness=false. Elapsed: 7.215056ms +Mar 7 02:26:03.106: INFO: Pod "dns-test-d38692cd-1b54-4a4c-ba17-71c230de4392": Phase="Pending", Reason="", readiness=false. Elapsed: 2.011620703s +Mar 7 02:26:05.105: INFO: Pod "dns-test-d38692cd-1b54-4a4c-ba17-71c230de4392": Phase="Pending", Reason="", readiness=false. Elapsed: 4.010667416s +Mar 7 02:26:07.105: INFO: Pod "dns-test-d38692cd-1b54-4a4c-ba17-71c230de4392": Phase="Pending", Reason="", readiness=false. Elapsed: 6.010629581s +Mar 7 02:26:09.106: INFO: Pod "dns-test-d38692cd-1b54-4a4c-ba17-71c230de4392": Phase="Pending", Reason="", readiness=false. Elapsed: 8.011355743s +Mar 7 02:26:11.105: INFO: Pod "dns-test-d38692cd-1b54-4a4c-ba17-71c230de4392": Phase="Running", Reason="", readiness=true. 
Elapsed: 10.01078476s +Mar 7 02:26:11.105: INFO: Pod "dns-test-d38692cd-1b54-4a4c-ba17-71c230de4392" satisfied condition "running" +STEP: retrieving the pod 03/07/23 02:26:11.105 +STEP: looking for the results for each expected name from probers 03/07/23 02:26:11.107 +Mar 7 02:26:11.113: INFO: DNS probes using dns-test-d38692cd-1b54-4a4c-ba17-71c230de4392 succeeded + +STEP: deleting the pod 03/07/23 02:26:11.113 +STEP: changing the externalName to bar.example.com 03/07/23 02:26:11.156 +STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-2467.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-2467.svc.cluster.local; sleep 1; done + 03/07/23 02:26:11.163 +STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-2467.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-2467.svc.cluster.local; sleep 1; done + 03/07/23 02:26:11.163 +STEP: creating a second pod to probe DNS 03/07/23 02:26:11.163 +STEP: submitting the pod to kubernetes 03/07/23 02:26:11.163 +Mar 7 02:26:11.167: INFO: Waiting up to 15m0s for pod "dns-test-c8b46995-a340-4ee2-9787-7433ee675a20" in namespace "dns-2467" to be "running" +Mar 7 02:26:11.176: INFO: Pod "dns-test-c8b46995-a340-4ee2-9787-7433ee675a20": Phase="Pending", Reason="", readiness=false. Elapsed: 8.639415ms +Mar 7 02:26:13.180: INFO: Pod "dns-test-c8b46995-a340-4ee2-9787-7433ee675a20": Phase="Running", Reason="", readiness=true. Elapsed: 2.012761965s +Mar 7 02:26:13.180: INFO: Pod "dns-test-c8b46995-a340-4ee2-9787-7433ee675a20" satisfied condition "running" +STEP: retrieving the pod 03/07/23 02:26:13.18 +STEP: looking for the results for each expected name from probers 03/07/23 02:26:13.183 +Mar 7 02:26:13.186: INFO: File wheezy_udp@dns-test-service-3.dns-2467.svc.cluster.local from pod dns-2467/dns-test-c8b46995-a340-4ee2-9787-7433ee675a20 contains 'foo.example.com. +' instead of 'bar.example.com.' +Mar 7 02:26:13.188: INFO: File jessie_udp@dns-test-service-3.dns-2467.svc.cluster.local from pod dns-2467/dns-test-c8b46995-a340-4ee2-9787-7433ee675a20 contains 'foo.example.com. +' instead of 'bar.example.com.' +Mar 7 02:26:13.188: INFO: Lookups using dns-2467/dns-test-c8b46995-a340-4ee2-9787-7433ee675a20 failed for: [wheezy_udp@dns-test-service-3.dns-2467.svc.cluster.local jessie_udp@dns-test-service-3.dns-2467.svc.cluster.local] + +Mar 7 02:26:18.197: INFO: File wheezy_udp@dns-test-service-3.dns-2467.svc.cluster.local from pod dns-2467/dns-test-c8b46995-a340-4ee2-9787-7433ee675a20 contains 'foo.example.com. +' instead of 'bar.example.com.' +Mar 7 02:26:18.203: INFO: File jessie_udp@dns-test-service-3.dns-2467.svc.cluster.local from pod dns-2467/dns-test-c8b46995-a340-4ee2-9787-7433ee675a20 contains 'foo.example.com. +' instead of 'bar.example.com.' +Mar 7 02:26:18.203: INFO: Lookups using dns-2467/dns-test-c8b46995-a340-4ee2-9787-7433ee675a20 failed for: [wheezy_udp@dns-test-service-3.dns-2467.svc.cluster.local jessie_udp@dns-test-service-3.dns-2467.svc.cluster.local] + +Mar 7 02:26:23.192: INFO: File wheezy_udp@dns-test-service-3.dns-2467.svc.cluster.local from pod dns-2467/dns-test-c8b46995-a340-4ee2-9787-7433ee675a20 contains 'foo.example.com. +' instead of 'bar.example.com.' +Mar 7 02:26:23.196: INFO: File jessie_udp@dns-test-service-3.dns-2467.svc.cluster.local from pod dns-2467/dns-test-c8b46995-a340-4ee2-9787-7433ee675a20 contains 'foo.example.com. +' instead of 'bar.example.com.' 
+Mar 7 02:26:23.196: INFO: Lookups using dns-2467/dns-test-c8b46995-a340-4ee2-9787-7433ee675a20 failed for: [wheezy_udp@dns-test-service-3.dns-2467.svc.cluster.local jessie_udp@dns-test-service-3.dns-2467.svc.cluster.local] + +Mar 7 02:26:28.196: INFO: File wheezy_udp@dns-test-service-3.dns-2467.svc.cluster.local from pod dns-2467/dns-test-c8b46995-a340-4ee2-9787-7433ee675a20 contains 'foo.example.com. +' instead of 'bar.example.com.' +Mar 7 02:26:28.199: INFO: File jessie_udp@dns-test-service-3.dns-2467.svc.cluster.local from pod dns-2467/dns-test-c8b46995-a340-4ee2-9787-7433ee675a20 contains 'foo.example.com. +' instead of 'bar.example.com.' +Mar 7 02:26:28.199: INFO: Lookups using dns-2467/dns-test-c8b46995-a340-4ee2-9787-7433ee675a20 failed for: [wheezy_udp@dns-test-service-3.dns-2467.svc.cluster.local jessie_udp@dns-test-service-3.dns-2467.svc.cluster.local] + +Mar 7 02:26:33.199: INFO: File jessie_udp@dns-test-service-3.dns-2467.svc.cluster.local from pod dns-2467/dns-test-c8b46995-a340-4ee2-9787-7433ee675a20 contains 'foo.example.com. +' instead of 'bar.example.com.' +Mar 7 02:26:33.199: INFO: Lookups using dns-2467/dns-test-c8b46995-a340-4ee2-9787-7433ee675a20 failed for: [jessie_udp@dns-test-service-3.dns-2467.svc.cluster.local] + +Mar 7 02:26:38.232: INFO: DNS probes using dns-test-c8b46995-a340-4ee2-9787-7433ee675a20 succeeded + +STEP: deleting the pod 03/07/23 02:26:38.232 +STEP: changing the service to type=ClusterIP 03/07/23 02:26:38.25 +STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-2467.svc.cluster.local A > /results/wheezy_udp@dns-test-service-3.dns-2467.svc.cluster.local; sleep 1; done + 03/07/23 02:26:38.278 +STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-2467.svc.cluster.local A > /results/jessie_udp@dns-test-service-3.dns-2467.svc.cluster.local; sleep 1; done + 03/07/23 02:26:38.278 +STEP: creating a third pod to probe DNS 03/07/23 02:26:38.278 +STEP: submitting the pod to kubernetes 03/07/23 02:26:38.281 +Mar 7 02:26:38.291: INFO: Waiting up to 15m0s for pod "dns-test-b469dc16-4555-45b9-8c7b-4d4fb9675ba6" in namespace "dns-2467" to be "running" +Mar 7 02:26:38.299: INFO: Pod "dns-test-b469dc16-4555-45b9-8c7b-4d4fb9675ba6": Phase="Pending", Reason="", readiness=false. Elapsed: 8.175328ms +Mar 7 02:26:40.302: INFO: Pod "dns-test-b469dc16-4555-45b9-8c7b-4d4fb9675ba6": Phase="Running", Reason="", readiness=true. Elapsed: 2.011174256s +Mar 7 02:26:40.302: INFO: Pod "dns-test-b469dc16-4555-45b9-8c7b-4d4fb9675ba6" satisfied condition "running" +STEP: retrieving the pod 03/07/23 02:26:40.302 +STEP: looking for the results for each expected name from probers 03/07/23 02:26:40.304 +Mar 7 02:26:40.310: INFO: DNS probes using dns-test-b469dc16-4555-45b9-8c7b-4d4fb9675ba6 succeeded + +STEP: deleting the pod 03/07/23 02:26:40.31 +STEP: deleting the test externalName service 03/07/23 02:26:40.321 +[AfterEach] [sig-network] DNS + test/e2e/framework/framework.go:187 +Mar 7 02:26:40.338: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "dns-2467" for this suite. 
03/07/23 02:26:40.342 +{"msg":"PASSED [sig-network] DNS should provide DNS for ExternalName services [Conformance]","completed":5,"skipped":50,"failed":0} +------------------------------ +• [SLOW TEST] [39.341 seconds] +[sig-network] DNS +test/e2e/network/common/framework.go:23 + should provide DNS for ExternalName services [Conformance] + test/e2e/network/dns.go:333 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-network] DNS + test/e2e/framework/framework.go:186 + STEP: Creating a kubernetes client 03/07/23 02:26:01.011 + Mar 7 02:26:01.012: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 + STEP: Building a namespace api object, basename dns 03/07/23 02:26:01.012 + STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 02:26:01.025 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 02:26:01.03 + [It] should provide DNS for ExternalName services [Conformance] + test/e2e/network/dns.go:333 + STEP: Creating a test externalName service 03/07/23 02:26:01.032 + STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-2467.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-2467.svc.cluster.local; sleep 1; done + 03/07/23 02:26:01.036 + STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-2467.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-2467.svc.cluster.local; sleep 1; done + 03/07/23 02:26:01.036 + STEP: creating a pod to probe DNS 03/07/23 02:26:01.036 + STEP: submitting the pod to kubernetes 03/07/23 02:26:01.036 + Mar 7 02:26:01.094: INFO: Waiting up to 15m0s for pod "dns-test-d38692cd-1b54-4a4c-ba17-71c230de4392" in namespace "dns-2467" to be "running" + Mar 7 02:26:01.101: INFO: Pod "dns-test-d38692cd-1b54-4a4c-ba17-71c230de4392": Phase="Pending", Reason="", readiness=false. Elapsed: 7.215056ms + Mar 7 02:26:03.106: INFO: Pod "dns-test-d38692cd-1b54-4a4c-ba17-71c230de4392": Phase="Pending", Reason="", readiness=false. Elapsed: 2.011620703s + Mar 7 02:26:05.105: INFO: Pod "dns-test-d38692cd-1b54-4a4c-ba17-71c230de4392": Phase="Pending", Reason="", readiness=false. Elapsed: 4.010667416s + Mar 7 02:26:07.105: INFO: Pod "dns-test-d38692cd-1b54-4a4c-ba17-71c230de4392": Phase="Pending", Reason="", readiness=false. Elapsed: 6.010629581s + Mar 7 02:26:09.106: INFO: Pod "dns-test-d38692cd-1b54-4a4c-ba17-71c230de4392": Phase="Pending", Reason="", readiness=false. Elapsed: 8.011355743s + Mar 7 02:26:11.105: INFO: Pod "dns-test-d38692cd-1b54-4a4c-ba17-71c230de4392": Phase="Running", Reason="", readiness=true. 
Elapsed: 10.01078476s + Mar 7 02:26:11.105: INFO: Pod "dns-test-d38692cd-1b54-4a4c-ba17-71c230de4392" satisfied condition "running" + STEP: retrieving the pod 03/07/23 02:26:11.105 + STEP: looking for the results for each expected name from probers 03/07/23 02:26:11.107 + Mar 7 02:26:11.113: INFO: DNS probes using dns-test-d38692cd-1b54-4a4c-ba17-71c230de4392 succeeded + + STEP: deleting the pod 03/07/23 02:26:11.113 + STEP: changing the externalName to bar.example.com 03/07/23 02:26:11.156 + STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-2467.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-2467.svc.cluster.local; sleep 1; done + 03/07/23 02:26:11.163 + STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-2467.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-2467.svc.cluster.local; sleep 1; done + 03/07/23 02:26:11.163 + STEP: creating a second pod to probe DNS 03/07/23 02:26:11.163 + STEP: submitting the pod to kubernetes 03/07/23 02:26:11.163 + Mar 7 02:26:11.167: INFO: Waiting up to 15m0s for pod "dns-test-c8b46995-a340-4ee2-9787-7433ee675a20" in namespace "dns-2467" to be "running" + Mar 7 02:26:11.176: INFO: Pod "dns-test-c8b46995-a340-4ee2-9787-7433ee675a20": Phase="Pending", Reason="", readiness=false. Elapsed: 8.639415ms + Mar 7 02:26:13.180: INFO: Pod "dns-test-c8b46995-a340-4ee2-9787-7433ee675a20": Phase="Running", Reason="", readiness=true. Elapsed: 2.012761965s + Mar 7 02:26:13.180: INFO: Pod "dns-test-c8b46995-a340-4ee2-9787-7433ee675a20" satisfied condition "running" + STEP: retrieving the pod 03/07/23 02:26:13.18 + STEP: looking for the results for each expected name from probers 03/07/23 02:26:13.183 + Mar 7 02:26:13.186: INFO: File wheezy_udp@dns-test-service-3.dns-2467.svc.cluster.local from pod dns-2467/dns-test-c8b46995-a340-4ee2-9787-7433ee675a20 contains 'foo.example.com. + ' instead of 'bar.example.com.' + Mar 7 02:26:13.188: INFO: File jessie_udp@dns-test-service-3.dns-2467.svc.cluster.local from pod dns-2467/dns-test-c8b46995-a340-4ee2-9787-7433ee675a20 contains 'foo.example.com. + ' instead of 'bar.example.com.' + Mar 7 02:26:13.188: INFO: Lookups using dns-2467/dns-test-c8b46995-a340-4ee2-9787-7433ee675a20 failed for: [wheezy_udp@dns-test-service-3.dns-2467.svc.cluster.local jessie_udp@dns-test-service-3.dns-2467.svc.cluster.local] + + Mar 7 02:26:18.197: INFO: File wheezy_udp@dns-test-service-3.dns-2467.svc.cluster.local from pod dns-2467/dns-test-c8b46995-a340-4ee2-9787-7433ee675a20 contains 'foo.example.com. + ' instead of 'bar.example.com.' + Mar 7 02:26:18.203: INFO: File jessie_udp@dns-test-service-3.dns-2467.svc.cluster.local from pod dns-2467/dns-test-c8b46995-a340-4ee2-9787-7433ee675a20 contains 'foo.example.com. + ' instead of 'bar.example.com.' + Mar 7 02:26:18.203: INFO: Lookups using dns-2467/dns-test-c8b46995-a340-4ee2-9787-7433ee675a20 failed for: [wheezy_udp@dns-test-service-3.dns-2467.svc.cluster.local jessie_udp@dns-test-service-3.dns-2467.svc.cluster.local] + + Mar 7 02:26:23.192: INFO: File wheezy_udp@dns-test-service-3.dns-2467.svc.cluster.local from pod dns-2467/dns-test-c8b46995-a340-4ee2-9787-7433ee675a20 contains 'foo.example.com. + ' instead of 'bar.example.com.' + Mar 7 02:26:23.196: INFO: File jessie_udp@dns-test-service-3.dns-2467.svc.cluster.local from pod dns-2467/dns-test-c8b46995-a340-4ee2-9787-7433ee675a20 contains 'foo.example.com. + ' instead of 'bar.example.com.' 
+ Mar 7 02:26:23.196: INFO: Lookups using dns-2467/dns-test-c8b46995-a340-4ee2-9787-7433ee675a20 failed for: [wheezy_udp@dns-test-service-3.dns-2467.svc.cluster.local jessie_udp@dns-test-service-3.dns-2467.svc.cluster.local] + + Mar 7 02:26:28.196: INFO: File wheezy_udp@dns-test-service-3.dns-2467.svc.cluster.local from pod dns-2467/dns-test-c8b46995-a340-4ee2-9787-7433ee675a20 contains 'foo.example.com. + ' instead of 'bar.example.com.' + Mar 7 02:26:28.199: INFO: File jessie_udp@dns-test-service-3.dns-2467.svc.cluster.local from pod dns-2467/dns-test-c8b46995-a340-4ee2-9787-7433ee675a20 contains 'foo.example.com. + ' instead of 'bar.example.com.' + Mar 7 02:26:28.199: INFO: Lookups using dns-2467/dns-test-c8b46995-a340-4ee2-9787-7433ee675a20 failed for: [wheezy_udp@dns-test-service-3.dns-2467.svc.cluster.local jessie_udp@dns-test-service-3.dns-2467.svc.cluster.local] + + Mar 7 02:26:33.199: INFO: File jessie_udp@dns-test-service-3.dns-2467.svc.cluster.local from pod dns-2467/dns-test-c8b46995-a340-4ee2-9787-7433ee675a20 contains 'foo.example.com. + ' instead of 'bar.example.com.' + Mar 7 02:26:33.199: INFO: Lookups using dns-2467/dns-test-c8b46995-a340-4ee2-9787-7433ee675a20 failed for: [jessie_udp@dns-test-service-3.dns-2467.svc.cluster.local] + + Mar 7 02:26:38.232: INFO: DNS probes using dns-test-c8b46995-a340-4ee2-9787-7433ee675a20 succeeded + + STEP: deleting the pod 03/07/23 02:26:38.232 + STEP: changing the service to type=ClusterIP 03/07/23 02:26:38.25 + STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-2467.svc.cluster.local A > /results/wheezy_udp@dns-test-service-3.dns-2467.svc.cluster.local; sleep 1; done + 03/07/23 02:26:38.278 + STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-2467.svc.cluster.local A > /results/jessie_udp@dns-test-service-3.dns-2467.svc.cluster.local; sleep 1; done + 03/07/23 02:26:38.278 + STEP: creating a third pod to probe DNS 03/07/23 02:26:38.278 + STEP: submitting the pod to kubernetes 03/07/23 02:26:38.281 + Mar 7 02:26:38.291: INFO: Waiting up to 15m0s for pod "dns-test-b469dc16-4555-45b9-8c7b-4d4fb9675ba6" in namespace "dns-2467" to be "running" + Mar 7 02:26:38.299: INFO: Pod "dns-test-b469dc16-4555-45b9-8c7b-4d4fb9675ba6": Phase="Pending", Reason="", readiness=false. Elapsed: 8.175328ms + Mar 7 02:26:40.302: INFO: Pod "dns-test-b469dc16-4555-45b9-8c7b-4d4fb9675ba6": Phase="Running", Reason="", readiness=true. Elapsed: 2.011174256s + Mar 7 02:26:40.302: INFO: Pod "dns-test-b469dc16-4555-45b9-8c7b-4d4fb9675ba6" satisfied condition "running" + STEP: retrieving the pod 03/07/23 02:26:40.302 + STEP: looking for the results for each expected name from probers 03/07/23 02:26:40.304 + Mar 7 02:26:40.310: INFO: DNS probes using dns-test-b469dc16-4555-45b9-8c7b-4d4fb9675ba6 succeeded + + STEP: deleting the pod 03/07/23 02:26:40.31 + STEP: deleting the test externalName service 03/07/23 02:26:40.321 + [AfterEach] [sig-network] DNS + test/e2e/framework/framework.go:187 + Mar 7 02:26:40.338: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + STEP: Destroying namespace "dns-2467" for this suite. 
03/07/23 02:26:40.342 + << End Captured GinkgoWriter Output +------------------------------ +SS +------------------------------ +[sig-storage] Subpath Atomic writer volumes + should support subpaths with secret pod [Conformance] + test/e2e/storage/subpath.go:60 +[BeforeEach] [sig-storage] Subpath + test/e2e/framework/framework.go:186 +STEP: Creating a kubernetes client 03/07/23 02:26:40.353 +Mar 7 02:26:40.353: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 +STEP: Building a namespace api object, basename subpath 03/07/23 02:26:40.354 +STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 02:26:40.369 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 02:26:40.373 +[BeforeEach] Atomic writer volumes + test/e2e/storage/subpath.go:40 +STEP: Setting up data 03/07/23 02:26:40.376 +[It] should support subpaths with secret pod [Conformance] + test/e2e/storage/subpath.go:60 +STEP: Creating pod pod-subpath-test-secret-znf6 03/07/23 02:26:40.395 +STEP: Creating a pod to test atomic-volume-subpath 03/07/23 02:26:40.395 +Mar 7 02:26:40.402: INFO: Waiting up to 5m0s for pod "pod-subpath-test-secret-znf6" in namespace "subpath-6003" to be "Succeeded or Failed" +Mar 7 02:26:40.407: INFO: Pod "pod-subpath-test-secret-znf6": Phase="Pending", Reason="", readiness=false. Elapsed: 4.611927ms +Mar 7 02:26:42.409: INFO: Pod "pod-subpath-test-secret-znf6": Phase="Running", Reason="", readiness=true. Elapsed: 2.007257509s +Mar 7 02:26:44.410: INFO: Pod "pod-subpath-test-secret-znf6": Phase="Running", Reason="", readiness=true. Elapsed: 4.007688931s +Mar 7 02:26:46.409: INFO: Pod "pod-subpath-test-secret-znf6": Phase="Running", Reason="", readiness=true. Elapsed: 6.007230239s +Mar 7 02:26:48.409: INFO: Pod "pod-subpath-test-secret-znf6": Phase="Running", Reason="", readiness=true. Elapsed: 8.007437945s +Mar 7 02:26:50.410: INFO: Pod "pod-subpath-test-secret-znf6": Phase="Running", Reason="", readiness=true. Elapsed: 10.008150472s +Mar 7 02:26:52.410: INFO: Pod "pod-subpath-test-secret-znf6": Phase="Running", Reason="", readiness=true. Elapsed: 12.007747818s +Mar 7 02:26:54.411: INFO: Pod "pod-subpath-test-secret-znf6": Phase="Running", Reason="", readiness=true. Elapsed: 14.009209899s +Mar 7 02:26:56.410: INFO: Pod "pod-subpath-test-secret-znf6": Phase="Running", Reason="", readiness=true. Elapsed: 16.007805218s +Mar 7 02:26:58.410: INFO: Pod "pod-subpath-test-secret-znf6": Phase="Running", Reason="", readiness=true. Elapsed: 18.007829934s +Mar 7 02:27:00.428: INFO: Pod "pod-subpath-test-secret-znf6": Phase="Running", Reason="", readiness=true. Elapsed: 20.025747537s +Mar 7 02:27:02.412: INFO: Pod "pod-subpath-test-secret-znf6": Phase="Running", Reason="", readiness=false. Elapsed: 22.009882737s +Mar 7 02:27:04.410: INFO: Pod "pod-subpath-test-secret-znf6": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 24.008089053s +STEP: Saw pod success 03/07/23 02:27:04.41 +Mar 7 02:27:04.410: INFO: Pod "pod-subpath-test-secret-znf6" satisfied condition "Succeeded or Failed" +Mar 7 02:27:04.413: INFO: Trying to get logs from node node-2 pod pod-subpath-test-secret-znf6 container test-container-subpath-secret-znf6: +STEP: delete the pod 03/07/23 02:27:04.418 +Mar 7 02:27:04.431: INFO: Waiting for pod pod-subpath-test-secret-znf6 to disappear +Mar 7 02:27:04.434: INFO: Pod pod-subpath-test-secret-znf6 no longer exists +STEP: Deleting pod pod-subpath-test-secret-znf6 03/07/23 02:27:04.434 +Mar 7 02:27:04.434: INFO: Deleting pod "pod-subpath-test-secret-znf6" in namespace "subpath-6003" +[AfterEach] [sig-storage] Subpath + test/e2e/framework/framework.go:187 +Mar 7 02:27:04.439: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "subpath-6003" for this suite. 03/07/23 02:27:04.444 +{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with secret pod [Conformance]","completed":6,"skipped":52,"failed":0} +------------------------------ +• [SLOW TEST] [24.098 seconds] +[sig-storage] Subpath +test/e2e/storage/utils/framework.go:23 + Atomic writer volumes + test/e2e/storage/subpath.go:36 + should support subpaths with secret pod [Conformance] + test/e2e/storage/subpath.go:60 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-storage] Subpath + test/e2e/framework/framework.go:186 + STEP: Creating a kubernetes client 03/07/23 02:26:40.353 + Mar 7 02:26:40.353: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 + STEP: Building a namespace api object, basename subpath 03/07/23 02:26:40.354 + STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 02:26:40.369 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 02:26:40.373 + [BeforeEach] Atomic writer volumes + test/e2e/storage/subpath.go:40 + STEP: Setting up data 03/07/23 02:26:40.376 + [It] should support subpaths with secret pod [Conformance] + test/e2e/storage/subpath.go:60 + STEP: Creating pod pod-subpath-test-secret-znf6 03/07/23 02:26:40.395 + STEP: Creating a pod to test atomic-volume-subpath 03/07/23 02:26:40.395 + Mar 7 02:26:40.402: INFO: Waiting up to 5m0s for pod "pod-subpath-test-secret-znf6" in namespace "subpath-6003" to be "Succeeded or Failed" + Mar 7 02:26:40.407: INFO: Pod "pod-subpath-test-secret-znf6": Phase="Pending", Reason="", readiness=false. Elapsed: 4.611927ms + Mar 7 02:26:42.409: INFO: Pod "pod-subpath-test-secret-znf6": Phase="Running", Reason="", readiness=true. Elapsed: 2.007257509s + Mar 7 02:26:44.410: INFO: Pod "pod-subpath-test-secret-znf6": Phase="Running", Reason="", readiness=true. Elapsed: 4.007688931s + Mar 7 02:26:46.409: INFO: Pod "pod-subpath-test-secret-znf6": Phase="Running", Reason="", readiness=true. Elapsed: 6.007230239s + Mar 7 02:26:48.409: INFO: Pod "pod-subpath-test-secret-znf6": Phase="Running", Reason="", readiness=true. Elapsed: 8.007437945s + Mar 7 02:26:50.410: INFO: Pod "pod-subpath-test-secret-znf6": Phase="Running", Reason="", readiness=true. Elapsed: 10.008150472s + Mar 7 02:26:52.410: INFO: Pod "pod-subpath-test-secret-znf6": Phase="Running", Reason="", readiness=true. Elapsed: 12.007747818s + Mar 7 02:26:54.411: INFO: Pod "pod-subpath-test-secret-znf6": Phase="Running", Reason="", readiness=true. Elapsed: 14.009209899s + Mar 7 02:26:56.410: INFO: Pod "pod-subpath-test-secret-znf6": Phase="Running", Reason="", readiness=true. 
Elapsed: 16.007805218s + Mar 7 02:26:58.410: INFO: Pod "pod-subpath-test-secret-znf6": Phase="Running", Reason="", readiness=true. Elapsed: 18.007829934s + Mar 7 02:27:00.428: INFO: Pod "pod-subpath-test-secret-znf6": Phase="Running", Reason="", readiness=true. Elapsed: 20.025747537s + Mar 7 02:27:02.412: INFO: Pod "pod-subpath-test-secret-znf6": Phase="Running", Reason="", readiness=false. Elapsed: 22.009882737s + Mar 7 02:27:04.410: INFO: Pod "pod-subpath-test-secret-znf6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.008089053s + STEP: Saw pod success 03/07/23 02:27:04.41 + Mar 7 02:27:04.410: INFO: Pod "pod-subpath-test-secret-znf6" satisfied condition "Succeeded or Failed" + Mar 7 02:27:04.413: INFO: Trying to get logs from node node-2 pod pod-subpath-test-secret-znf6 container test-container-subpath-secret-znf6: + STEP: delete the pod 03/07/23 02:27:04.418 + Mar 7 02:27:04.431: INFO: Waiting for pod pod-subpath-test-secret-znf6 to disappear + Mar 7 02:27:04.434: INFO: Pod pod-subpath-test-secret-znf6 no longer exists + STEP: Deleting pod pod-subpath-test-secret-znf6 03/07/23 02:27:04.434 + Mar 7 02:27:04.434: INFO: Deleting pod "pod-subpath-test-secret-znf6" in namespace "subpath-6003" + [AfterEach] [sig-storage] Subpath + test/e2e/framework/framework.go:187 + Mar 7 02:27:04.439: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + STEP: Destroying namespace "subpath-6003" for this suite. 03/07/23 02:27:04.444 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-api-machinery] Namespaces [Serial] + should patch a Namespace [Conformance] + test/e2e/apimachinery/namespace.go:267 +[BeforeEach] [sig-api-machinery] Namespaces [Serial] + test/e2e/framework/framework.go:186 +STEP: Creating a kubernetes client 03/07/23 02:27:04.453 +Mar 7 02:27:04.453: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 +STEP: Building a namespace api object, basename namespaces 03/07/23 02:27:04.454 +STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 02:27:04.47 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 02:27:04.474 +[It] should patch a Namespace [Conformance] + test/e2e/apimachinery/namespace.go:267 +STEP: creating a Namespace 03/07/23 02:27:04.476 +STEP: patching the Namespace 03/07/23 02:27:04.489 +STEP: get the Namespace and ensuring it has the label 03/07/23 02:27:04.497 +[AfterEach] [sig-api-machinery] Namespaces [Serial] + test/e2e/framework/framework.go:187 +Mar 7 02:27:04.500: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "namespaces-242" for this suite. 03/07/23 02:27:04.504 +STEP: Destroying namespace "nspatchtest-46be9085-e3fa-425d-911d-25a7f76bbd4b-4462" for this suite. 
03/07/23 02:27:04.508 +{"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should patch a Namespace [Conformance]","completed":7,"skipped":74,"failed":0} +------------------------------ +• [0.063 seconds] +[sig-api-machinery] Namespaces [Serial] +test/e2e/apimachinery/framework.go:23 + should patch a Namespace [Conformance] + test/e2e/apimachinery/namespace.go:267 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-api-machinery] Namespaces [Serial] + test/e2e/framework/framework.go:186 + STEP: Creating a kubernetes client 03/07/23 02:27:04.453 + Mar 7 02:27:04.453: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 + STEP: Building a namespace api object, basename namespaces 03/07/23 02:27:04.454 + STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 02:27:04.47 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 02:27:04.474 + [It] should patch a Namespace [Conformance] + test/e2e/apimachinery/namespace.go:267 + STEP: creating a Namespace 03/07/23 02:27:04.476 + STEP: patching the Namespace 03/07/23 02:27:04.489 + STEP: get the Namespace and ensuring it has the label 03/07/23 02:27:04.497 + [AfterEach] [sig-api-machinery] Namespaces [Serial] + test/e2e/framework/framework.go:187 + Mar 7 02:27:04.500: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + STEP: Destroying namespace "namespaces-242" for this suite. 03/07/23 02:27:04.504 + STEP: Destroying namespace "nspatchtest-46be9085-e3fa-425d-911d-25a7f76bbd4b-4462" for this suite. 03/07/23 02:27:04.508 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSSSSSSSSS +------------------------------ +[sig-network] EndpointSlice + should have Endpoints and EndpointSlices pointing to API Server [Conformance] + test/e2e/network/endpointslice.go:65 +[BeforeEach] [sig-network] EndpointSlice + test/e2e/framework/framework.go:186 +STEP: Creating a kubernetes client 03/07/23 02:27:04.516 +Mar 7 02:27:04.516: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 +STEP: Building a namespace api object, basename endpointslice 03/07/23 02:27:04.517 +STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 02:27:04.528 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 02:27:04.53 +[BeforeEach] [sig-network] EndpointSlice + test/e2e/network/endpointslice.go:51 +[It] should have Endpoints and EndpointSlices pointing to API Server [Conformance] + test/e2e/network/endpointslice.go:65 +Mar 7 02:27:04.539: INFO: Endpoints addresses: [192.168.1.100 192.168.1.101 192.168.1.102] , ports: [6443] +Mar 7 02:27:04.539: INFO: EndpointSlices addresses: [192.168.1.100 192.168.1.101 192.168.1.102] , ports: [6443] +[AfterEach] [sig-network] EndpointSlice + test/e2e/framework/framework.go:187 +Mar 7 02:27:04.539: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "endpointslice-509" for this suite. 
03/07/23 02:27:04.541 +{"msg":"PASSED [sig-network] EndpointSlice should have Endpoints and EndpointSlices pointing to API Server [Conformance]","completed":8,"skipped":92,"failed":0} +------------------------------ +• [0.031 seconds] +[sig-network] EndpointSlice +test/e2e/network/common/framework.go:23 + should have Endpoints and EndpointSlices pointing to API Server [Conformance] + test/e2e/network/endpointslice.go:65 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-network] EndpointSlice + test/e2e/framework/framework.go:186 + STEP: Creating a kubernetes client 03/07/23 02:27:04.516 + Mar 7 02:27:04.516: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 + STEP: Building a namespace api object, basename endpointslice 03/07/23 02:27:04.517 + STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 02:27:04.528 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 02:27:04.53 + [BeforeEach] [sig-network] EndpointSlice + test/e2e/network/endpointslice.go:51 + [It] should have Endpoints and EndpointSlices pointing to API Server [Conformance] + test/e2e/network/endpointslice.go:65 + Mar 7 02:27:04.539: INFO: Endpoints addresses: [192.168.1.100 192.168.1.101 192.168.1.102] , ports: [6443] + Mar 7 02:27:04.539: INFO: EndpointSlices addresses: [192.168.1.100 192.168.1.101 192.168.1.102] , ports: [6443] + [AfterEach] [sig-network] EndpointSlice + test/e2e/framework/framework.go:187 + Mar 7 02:27:04.539: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + STEP: Destroying namespace "endpointslice-509" for this suite. 03/07/23 02:27:04.541 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-network] Services + should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance] + test/e2e/network/service.go:2189 +[BeforeEach] [sig-network] Services + test/e2e/framework/framework.go:186 +STEP: Creating a kubernetes client 03/07/23 02:27:04.549 +Mar 7 02:27:04.549: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 +STEP: Building a namespace api object, basename services 03/07/23 02:27:04.549 +STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 02:27:04.564 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 02:27:04.567 +[BeforeEach] [sig-network] Services + test/e2e/network/service.go:758 +[It] should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance] + test/e2e/network/service.go:2189 +STEP: creating service in namespace services-4073 03/07/23 02:27:04.569 +STEP: creating service affinity-clusterip-transition in namespace services-4073 03/07/23 02:27:04.569 +STEP: creating replication controller affinity-clusterip-transition in namespace services-4073 03/07/23 02:27:04.583 +I0307 02:27:04.591879 22 runners.go:193] Created replication controller with name: affinity-clusterip-transition, namespace: services-4073, replica count: 3 +I0307 02:27:07.644275 22 runners.go:193] affinity-clusterip-transition Pods: 3 out of 3 created, 1 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady +I0307 02:27:10.645240 22 runners.go:193] affinity-clusterip-transition Pods: 3 out of 3 created, 2 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady +I0307 02:27:13.647693 22 runners.go:193] affinity-clusterip-transition Pods: 3 out of 3 created, 3 running, 0 
pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady +Mar 7 02:27:13.684: INFO: Creating new exec pod +Mar 7 02:27:13.690: INFO: Waiting up to 5m0s for pod "execpod-affinity4lz4f" in namespace "services-4073" to be "running" +Mar 7 02:27:13.698: INFO: Pod "execpod-affinity4lz4f": Phase="Pending", Reason="", readiness=false. Elapsed: 7.945952ms +Mar 7 02:27:15.727: INFO: Pod "execpod-affinity4lz4f": Phase="Running", Reason="", readiness=true. Elapsed: 2.036989295s +Mar 7 02:27:15.727: INFO: Pod "execpod-affinity4lz4f" satisfied condition "running" +Mar 7 02:27:16.728: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1093879902 --namespace=services-4073 exec execpod-affinity4lz4f -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip-transition 80' +Mar 7 02:27:16.956: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 affinity-clusterip-transition 80\nConnection to affinity-clusterip-transition 80 port [tcp/http] succeeded!\n" +Mar 7 02:27:16.956: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request" +Mar 7 02:27:16.956: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1093879902 --namespace=services-4073 exec execpod-affinity4lz4f -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.106.78.99 80' +Mar 7 02:27:17.166: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 10.106.78.99 80\nConnection to 10.106.78.99 80 port [tcp/http] succeeded!\n" +Mar 7 02:27:17.166: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request" +Mar 7 02:27:17.172: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1093879902 --namespace=services-4073 exec execpod-affinity4lz4f -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://10.106.78.99:80/ ; done' +Mar 7 02:27:17.433: INFO: stderr: "+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.106.78.99:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.106.78.99:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.106.78.99:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.106.78.99:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.106.78.99:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.106.78.99:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.106.78.99:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.106.78.99:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.106.78.99:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.106.78.99:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.106.78.99:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.106.78.99:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.106.78.99:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.106.78.99:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.106.78.99:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.106.78.99:80/\n" +Mar 7 02:27:17.433: INFO: stdout: 
"\naffinity-clusterip-transition-qvckr\naffinity-clusterip-transition-qvckr\naffinity-clusterip-transition-qvckr\naffinity-clusterip-transition-qxmgg\naffinity-clusterip-transition-l84t4\naffinity-clusterip-transition-l84t4\naffinity-clusterip-transition-l84t4\naffinity-clusterip-transition-qxmgg\naffinity-clusterip-transition-qvckr\naffinity-clusterip-transition-qvckr\naffinity-clusterip-transition-qxmgg\naffinity-clusterip-transition-l84t4\naffinity-clusterip-transition-l84t4\naffinity-clusterip-transition-qxmgg\naffinity-clusterip-transition-qxmgg\naffinity-clusterip-transition-l84t4" +Mar 7 02:27:17.434: INFO: Received response from host: affinity-clusterip-transition-qvckr +Mar 7 02:27:17.434: INFO: Received response from host: affinity-clusterip-transition-qvckr +Mar 7 02:27:17.434: INFO: Received response from host: affinity-clusterip-transition-qvckr +Mar 7 02:27:17.434: INFO: Received response from host: affinity-clusterip-transition-qxmgg +Mar 7 02:27:17.434: INFO: Received response from host: affinity-clusterip-transition-l84t4 +Mar 7 02:27:17.434: INFO: Received response from host: affinity-clusterip-transition-l84t4 +Mar 7 02:27:17.434: INFO: Received response from host: affinity-clusterip-transition-l84t4 +Mar 7 02:27:17.434: INFO: Received response from host: affinity-clusterip-transition-qxmgg +Mar 7 02:27:17.434: INFO: Received response from host: affinity-clusterip-transition-qvckr +Mar 7 02:27:17.434: INFO: Received response from host: affinity-clusterip-transition-qvckr +Mar 7 02:27:17.434: INFO: Received response from host: affinity-clusterip-transition-qxmgg +Mar 7 02:27:17.434: INFO: Received response from host: affinity-clusterip-transition-l84t4 +Mar 7 02:27:17.434: INFO: Received response from host: affinity-clusterip-transition-l84t4 +Mar 7 02:27:17.434: INFO: Received response from host: affinity-clusterip-transition-qxmgg +Mar 7 02:27:17.434: INFO: Received response from host: affinity-clusterip-transition-qxmgg +Mar 7 02:27:17.434: INFO: Received response from host: affinity-clusterip-transition-l84t4 +Mar 7 02:27:17.441: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1093879902 --namespace=services-4073 exec execpod-affinity4lz4f -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://10.106.78.99:80/ ; done' +Mar 7 02:27:17.677: INFO: stderr: "+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.106.78.99:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.106.78.99:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.106.78.99:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.106.78.99:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.106.78.99:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.106.78.99:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.106.78.99:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.106.78.99:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.106.78.99:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.106.78.99:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.106.78.99:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.106.78.99:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.106.78.99:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.106.78.99:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.106.78.99:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.106.78.99:80/\n" +Mar 7 02:27:17.677: INFO: stdout: 
"\naffinity-clusterip-transition-qxmgg\naffinity-clusterip-transition-qxmgg\naffinity-clusterip-transition-qxmgg\naffinity-clusterip-transition-qxmgg\naffinity-clusterip-transition-qxmgg\naffinity-clusterip-transition-qxmgg\naffinity-clusterip-transition-qxmgg\naffinity-clusterip-transition-qxmgg\naffinity-clusterip-transition-qxmgg\naffinity-clusterip-transition-qxmgg\naffinity-clusterip-transition-qxmgg\naffinity-clusterip-transition-qxmgg\naffinity-clusterip-transition-qxmgg\naffinity-clusterip-transition-qxmgg\naffinity-clusterip-transition-qxmgg\naffinity-clusterip-transition-qxmgg" +Mar 7 02:27:17.677: INFO: Received response from host: affinity-clusterip-transition-qxmgg +Mar 7 02:27:17.677: INFO: Received response from host: affinity-clusterip-transition-qxmgg +Mar 7 02:27:17.677: INFO: Received response from host: affinity-clusterip-transition-qxmgg +Mar 7 02:27:17.677: INFO: Received response from host: affinity-clusterip-transition-qxmgg +Mar 7 02:27:17.677: INFO: Received response from host: affinity-clusterip-transition-qxmgg +Mar 7 02:27:17.677: INFO: Received response from host: affinity-clusterip-transition-qxmgg +Mar 7 02:27:17.677: INFO: Received response from host: affinity-clusterip-transition-qxmgg +Mar 7 02:27:17.677: INFO: Received response from host: affinity-clusterip-transition-qxmgg +Mar 7 02:27:17.677: INFO: Received response from host: affinity-clusterip-transition-qxmgg +Mar 7 02:27:17.677: INFO: Received response from host: affinity-clusterip-transition-qxmgg +Mar 7 02:27:17.677: INFO: Received response from host: affinity-clusterip-transition-qxmgg +Mar 7 02:27:17.677: INFO: Received response from host: affinity-clusterip-transition-qxmgg +Mar 7 02:27:17.677: INFO: Received response from host: affinity-clusterip-transition-qxmgg +Mar 7 02:27:17.677: INFO: Received response from host: affinity-clusterip-transition-qxmgg +Mar 7 02:27:17.677: INFO: Received response from host: affinity-clusterip-transition-qxmgg +Mar 7 02:27:17.677: INFO: Received response from host: affinity-clusterip-transition-qxmgg +Mar 7 02:27:17.677: INFO: Cleaning up the exec pod +STEP: deleting ReplicationController affinity-clusterip-transition in namespace services-4073, will wait for the garbage collector to delete the pods 03/07/23 02:27:17.689 +Mar 7 02:27:17.749: INFO: Deleting ReplicationController affinity-clusterip-transition took: 4.701208ms +Mar 7 02:27:17.850: INFO: Terminating ReplicationController affinity-clusterip-transition pods took: 100.55ms +[AfterEach] [sig-network] Services + test/e2e/framework/framework.go:187 +Mar 7 02:27:20.067: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "services-4073" for this suite. 
03/07/23 02:27:20.071 +[AfterEach] [sig-network] Services + test/e2e/network/service.go:762 +{"msg":"PASSED [sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]","completed":9,"skipped":127,"failed":0} +------------------------------ +• [SLOW TEST] [15.527 seconds] +[sig-network] Services +test/e2e/network/common/framework.go:23 + should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance] + test/e2e/network/service.go:2189 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-network] Services + test/e2e/framework/framework.go:186 + STEP: Creating a kubernetes client 03/07/23 02:27:04.549 + Mar 7 02:27:04.549: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 + STEP: Building a namespace api object, basename services 03/07/23 02:27:04.549 + STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 02:27:04.564 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 02:27:04.567 + [BeforeEach] [sig-network] Services + test/e2e/network/service.go:758 + [It] should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance] + test/e2e/network/service.go:2189 + STEP: creating service in namespace services-4073 03/07/23 02:27:04.569 + STEP: creating service affinity-clusterip-transition in namespace services-4073 03/07/23 02:27:04.569 + STEP: creating replication controller affinity-clusterip-transition in namespace services-4073 03/07/23 02:27:04.583 + I0307 02:27:04.591879 22 runners.go:193] Created replication controller with name: affinity-clusterip-transition, namespace: services-4073, replica count: 3 + I0307 02:27:07.644275 22 runners.go:193] affinity-clusterip-transition Pods: 3 out of 3 created, 1 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady + I0307 02:27:10.645240 22 runners.go:193] affinity-clusterip-transition Pods: 3 out of 3 created, 2 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady + I0307 02:27:13.647693 22 runners.go:193] affinity-clusterip-transition Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady + Mar 7 02:27:13.684: INFO: Creating new exec pod + Mar 7 02:27:13.690: INFO: Waiting up to 5m0s for pod "execpod-affinity4lz4f" in namespace "services-4073" to be "running" + Mar 7 02:27:13.698: INFO: Pod "execpod-affinity4lz4f": Phase="Pending", Reason="", readiness=false. Elapsed: 7.945952ms + Mar 7 02:27:15.727: INFO: Pod "execpod-affinity4lz4f": Phase="Running", Reason="", readiness=true. 
Elapsed: 2.036989295s + Mar 7 02:27:15.727: INFO: Pod "execpod-affinity4lz4f" satisfied condition "running" + Mar 7 02:27:16.728: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1093879902 --namespace=services-4073 exec execpod-affinity4lz4f -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip-transition 80' + Mar 7 02:27:16.956: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 affinity-clusterip-transition 80\nConnection to affinity-clusterip-transition 80 port [tcp/http] succeeded!\n" + Mar 7 02:27:16.956: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request" + Mar 7 02:27:16.956: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1093879902 --namespace=services-4073 exec execpod-affinity4lz4f -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.106.78.99 80' + Mar 7 02:27:17.166: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 10.106.78.99 80\nConnection to 10.106.78.99 80 port [tcp/http] succeeded!\n" + Mar 7 02:27:17.166: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request" + Mar 7 02:27:17.172: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1093879902 --namespace=services-4073 exec execpod-affinity4lz4f -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://10.106.78.99:80/ ; done' + Mar 7 02:27:17.433: INFO: stderr: "+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.106.78.99:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.106.78.99:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.106.78.99:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.106.78.99:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.106.78.99:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.106.78.99:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.106.78.99:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.106.78.99:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.106.78.99:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.106.78.99:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.106.78.99:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.106.78.99:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.106.78.99:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.106.78.99:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.106.78.99:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.106.78.99:80/\n" + Mar 7 02:27:17.433: INFO: stdout: "\naffinity-clusterip-transition-qvckr\naffinity-clusterip-transition-qvckr\naffinity-clusterip-transition-qvckr\naffinity-clusterip-transition-qxmgg\naffinity-clusterip-transition-l84t4\naffinity-clusterip-transition-l84t4\naffinity-clusterip-transition-l84t4\naffinity-clusterip-transition-qxmgg\naffinity-clusterip-transition-qvckr\naffinity-clusterip-transition-qvckr\naffinity-clusterip-transition-qxmgg\naffinity-clusterip-transition-l84t4\naffinity-clusterip-transition-l84t4\naffinity-clusterip-transition-qxmgg\naffinity-clusterip-transition-qxmgg\naffinity-clusterip-transition-l84t4" + Mar 7 02:27:17.434: INFO: Received response from host: affinity-clusterip-transition-qvckr + Mar 7 02:27:17.434: INFO: Received response from host: affinity-clusterip-transition-qvckr + Mar 7 02:27:17.434: INFO: Received response from host: affinity-clusterip-transition-qvckr + Mar 7 02:27:17.434: INFO: Received response from host: 
affinity-clusterip-transition-qxmgg + Mar 7 02:27:17.434: INFO: Received response from host: affinity-clusterip-transition-l84t4 + Mar 7 02:27:17.434: INFO: Received response from host: affinity-clusterip-transition-l84t4 + Mar 7 02:27:17.434: INFO: Received response from host: affinity-clusterip-transition-l84t4 + Mar 7 02:27:17.434: INFO: Received response from host: affinity-clusterip-transition-qxmgg + Mar 7 02:27:17.434: INFO: Received response from host: affinity-clusterip-transition-qvckr + Mar 7 02:27:17.434: INFO: Received response from host: affinity-clusterip-transition-qvckr + Mar 7 02:27:17.434: INFO: Received response from host: affinity-clusterip-transition-qxmgg + Mar 7 02:27:17.434: INFO: Received response from host: affinity-clusterip-transition-l84t4 + Mar 7 02:27:17.434: INFO: Received response from host: affinity-clusterip-transition-l84t4 + Mar 7 02:27:17.434: INFO: Received response from host: affinity-clusterip-transition-qxmgg + Mar 7 02:27:17.434: INFO: Received response from host: affinity-clusterip-transition-qxmgg + Mar 7 02:27:17.434: INFO: Received response from host: affinity-clusterip-transition-l84t4 + Mar 7 02:27:17.441: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1093879902 --namespace=services-4073 exec execpod-affinity4lz4f -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://10.106.78.99:80/ ; done' + Mar 7 02:27:17.677: INFO: stderr: "+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.106.78.99:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.106.78.99:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.106.78.99:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.106.78.99:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.106.78.99:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.106.78.99:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.106.78.99:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.106.78.99:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.106.78.99:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.106.78.99:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.106.78.99:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.106.78.99:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.106.78.99:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.106.78.99:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.106.78.99:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.106.78.99:80/\n" + Mar 7 02:27:17.677: INFO: stdout: "\naffinity-clusterip-transition-qxmgg\naffinity-clusterip-transition-qxmgg\naffinity-clusterip-transition-qxmgg\naffinity-clusterip-transition-qxmgg\naffinity-clusterip-transition-qxmgg\naffinity-clusterip-transition-qxmgg\naffinity-clusterip-transition-qxmgg\naffinity-clusterip-transition-qxmgg\naffinity-clusterip-transition-qxmgg\naffinity-clusterip-transition-qxmgg\naffinity-clusterip-transition-qxmgg\naffinity-clusterip-transition-qxmgg\naffinity-clusterip-transition-qxmgg\naffinity-clusterip-transition-qxmgg\naffinity-clusterip-transition-qxmgg\naffinity-clusterip-transition-qxmgg" + Mar 7 02:27:17.677: INFO: Received response from host: affinity-clusterip-transition-qxmgg + Mar 7 02:27:17.677: INFO: Received response from host: affinity-clusterip-transition-qxmgg + Mar 7 02:27:17.677: INFO: Received response from host: affinity-clusterip-transition-qxmgg + Mar 7 02:27:17.677: INFO: Received response from host: affinity-clusterip-transition-qxmgg + Mar 7 02:27:17.677: INFO: 
Received response from host: affinity-clusterip-transition-qxmgg + Mar 7 02:27:17.677: INFO: Received response from host: affinity-clusterip-transition-qxmgg + Mar 7 02:27:17.677: INFO: Received response from host: affinity-clusterip-transition-qxmgg + Mar 7 02:27:17.677: INFO: Received response from host: affinity-clusterip-transition-qxmgg + Mar 7 02:27:17.677: INFO: Received response from host: affinity-clusterip-transition-qxmgg + Mar 7 02:27:17.677: INFO: Received response from host: affinity-clusterip-transition-qxmgg + Mar 7 02:27:17.677: INFO: Received response from host: affinity-clusterip-transition-qxmgg + Mar 7 02:27:17.677: INFO: Received response from host: affinity-clusterip-transition-qxmgg + Mar 7 02:27:17.677: INFO: Received response from host: affinity-clusterip-transition-qxmgg + Mar 7 02:27:17.677: INFO: Received response from host: affinity-clusterip-transition-qxmgg + Mar 7 02:27:17.677: INFO: Received response from host: affinity-clusterip-transition-qxmgg + Mar 7 02:27:17.677: INFO: Received response from host: affinity-clusterip-transition-qxmgg + Mar 7 02:27:17.677: INFO: Cleaning up the exec pod + STEP: deleting ReplicationController affinity-clusterip-transition in namespace services-4073, will wait for the garbage collector to delete the pods 03/07/23 02:27:17.689 + Mar 7 02:27:17.749: INFO: Deleting ReplicationController affinity-clusterip-transition took: 4.701208ms + Mar 7 02:27:17.850: INFO: Terminating ReplicationController affinity-clusterip-transition pods took: 100.55ms + [AfterEach] [sig-network] Services + test/e2e/framework/framework.go:187 + Mar 7 02:27:20.067: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + STEP: Destroying namespace "services-4073" for this suite. 03/07/23 02:27:20.071 + [AfterEach] [sig-network] Services + test/e2e/network/service.go:762 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSS +------------------------------ +[sig-apps] ReplicationController + should release no longer matching pods [Conformance] + test/e2e/apps/rc.go:100 +[BeforeEach] [sig-apps] ReplicationController + test/e2e/framework/framework.go:186 +STEP: Creating a kubernetes client 03/07/23 02:27:20.076 +Mar 7 02:27:20.076: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 +STEP: Building a namespace api object, basename replication-controller 03/07/23 02:27:20.077 +STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 02:27:20.089 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 02:27:20.09 +[BeforeEach] [sig-apps] ReplicationController + test/e2e/apps/rc.go:56 +[It] should release no longer matching pods [Conformance] + test/e2e/apps/rc.go:100 +STEP: Given a ReplicationController is created 03/07/23 02:27:20.092 +STEP: When the matched label of one of its pods change 03/07/23 02:27:20.098 +Mar 7 02:27:20.100: INFO: Pod name pod-release: Found 0 pods out of 1 +Mar 7 02:27:25.114: INFO: Pod name pod-release: Found 1 pods out of 1 +STEP: Then the pod is released 03/07/23 02:27:25.13 +[AfterEach] [sig-apps] ReplicationController + test/e2e/framework/framework.go:187 +Mar 7 02:27:25.134: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "replication-controller-6000" for this suite. 
03/07/23 02:27:25.147 +{"msg":"PASSED [sig-apps] ReplicationController should release no longer matching pods [Conformance]","completed":10,"skipped":137,"failed":0} +------------------------------ +• [SLOW TEST] [5.075 seconds] +[sig-apps] ReplicationController +test/e2e/apps/framework.go:23 + should release no longer matching pods [Conformance] + test/e2e/apps/rc.go:100 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-apps] ReplicationController + test/e2e/framework/framework.go:186 + STEP: Creating a kubernetes client 03/07/23 02:27:20.076 + Mar 7 02:27:20.076: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 + STEP: Building a namespace api object, basename replication-controller 03/07/23 02:27:20.077 + STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 02:27:20.089 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 02:27:20.09 + [BeforeEach] [sig-apps] ReplicationController + test/e2e/apps/rc.go:56 + [It] should release no longer matching pods [Conformance] + test/e2e/apps/rc.go:100 + STEP: Given a ReplicationController is created 03/07/23 02:27:20.092 + STEP: When the matched label of one of its pods change 03/07/23 02:27:20.098 + Mar 7 02:27:20.100: INFO: Pod name pod-release: Found 0 pods out of 1 + Mar 7 02:27:25.114: INFO: Pod name pod-release: Found 1 pods out of 1 + STEP: Then the pod is released 03/07/23 02:27:25.13 + [AfterEach] [sig-apps] ReplicationController + test/e2e/framework/framework.go:187 + Mar 7 02:27:25.134: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + STEP: Destroying namespace "replication-controller-6000" for this suite. 03/07/23 02:27:25.147 + << End Captured GinkgoWriter Output +------------------------------ +S +------------------------------ +[sig-storage] CSIStorageCapacity + should support CSIStorageCapacities API operations [Conformance] + test/e2e/storage/csistoragecapacity.go:49 +[BeforeEach] [sig-storage] CSIStorageCapacity + test/e2e/framework/framework.go:186 +STEP: Creating a kubernetes client 03/07/23 02:27:25.152 +Mar 7 02:27:25.152: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 +STEP: Building a namespace api object, basename csistoragecapacity 03/07/23 02:27:25.152 +STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 02:27:25.171 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 02:27:25.173 +[It] should support CSIStorageCapacities API operations [Conformance] + test/e2e/storage/csistoragecapacity.go:49 +STEP: getting /apis 03/07/23 02:27:25.175 +STEP: getting /apis/storage.k8s.io 03/07/23 02:27:25.176 +STEP: getting /apis/storage.k8s.io/v1 03/07/23 02:27:25.178 +STEP: creating 03/07/23 02:27:25.179 +STEP: watching 03/07/23 02:27:25.191 +Mar 7 02:27:25.191: INFO: starting watch +STEP: getting 03/07/23 02:27:25.195 +STEP: listing in namespace 03/07/23 02:27:25.199 +STEP: listing across namespaces 03/07/23 02:27:25.201 +STEP: patching 03/07/23 02:27:25.203 +STEP: updating 03/07/23 02:27:25.207 +Mar 7 02:27:25.211: INFO: waiting for watch events with expected annotations in namespace +Mar 7 02:27:25.211: INFO: waiting for watch events with expected annotations across namespace +STEP: deleting 03/07/23 02:27:25.211 +STEP: deleting a collection 03/07/23 02:27:25.222 +[AfterEach] [sig-storage] CSIStorageCapacity + test/e2e/framework/framework.go:187 +Mar 7 02:27:25.236: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "csistoragecapacity-3737" for this suite. 
03/07/23 02:27:25.238 +{"msg":"PASSED [sig-storage] CSIStorageCapacity should support CSIStorageCapacities API operations [Conformance]","completed":11,"skipped":138,"failed":0} +------------------------------ +• [0.092 seconds] +[sig-storage] CSIStorageCapacity +test/e2e/storage/utils/framework.go:23 + should support CSIStorageCapacities API operations [Conformance] + test/e2e/storage/csistoragecapacity.go:49 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-storage] CSIStorageCapacity + test/e2e/framework/framework.go:186 + STEP: Creating a kubernetes client 03/07/23 02:27:25.152 + Mar 7 02:27:25.152: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 + STEP: Building a namespace api object, basename csistoragecapacity 03/07/23 02:27:25.152 + STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 02:27:25.171 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 02:27:25.173 + [It] should support CSIStorageCapacities API operations [Conformance] + test/e2e/storage/csistoragecapacity.go:49 + STEP: getting /apis 03/07/23 02:27:25.175 + STEP: getting /apis/storage.k8s.io 03/07/23 02:27:25.176 + STEP: getting /apis/storage.k8s.io/v1 03/07/23 02:27:25.178 + STEP: creating 03/07/23 02:27:25.179 + STEP: watching 03/07/23 02:27:25.191 + Mar 7 02:27:25.191: INFO: starting watch + STEP: getting 03/07/23 02:27:25.195 + STEP: listing in namespace 03/07/23 02:27:25.199 + STEP: listing across namespaces 03/07/23 02:27:25.201 + STEP: patching 03/07/23 02:27:25.203 + STEP: updating 03/07/23 02:27:25.207 + Mar 7 02:27:25.211: INFO: waiting for watch events with expected annotations in namespace + Mar 7 02:27:25.211: INFO: waiting for watch events with expected annotations across namespace + STEP: deleting 03/07/23 02:27:25.211 + STEP: deleting a collection 03/07/23 02:27:25.222 + [AfterEach] [sig-storage] CSIStorageCapacity + test/e2e/framework/framework.go:187 + Mar 7 02:27:25.236: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + STEP: Destroying namespace "csistoragecapacity-3737" for this suite. 03/07/23 02:27:25.238 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSSSSSSS +------------------------------ +[sig-storage] Secrets + should be consumable from pods in volume [NodeConformance] [Conformance] + test/e2e/common/storage/secrets_volume.go:46 +[BeforeEach] [sig-storage] Secrets + test/e2e/framework/framework.go:186 +STEP: Creating a kubernetes client 03/07/23 02:27:25.245 +Mar 7 02:27:25.245: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 +STEP: Building a namespace api object, basename secrets 03/07/23 02:27:25.247 +STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 02:27:25.26 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 02:27:25.262 +[It] should be consumable from pods in volume [NodeConformance] [Conformance] + test/e2e/common/storage/secrets_volume.go:46 +STEP: Creating secret with name secret-test-08d09a8b-8c6b-443c-9b55-9cc5d0afbd06 03/07/23 02:27:25.263 +STEP: Creating a pod to test consume secrets 03/07/23 02:27:25.268 +Mar 7 02:27:25.282: INFO: Waiting up to 5m0s for pod "pod-secrets-fac6ed5a-1320-4e86-af8c-f7875682ef9a" in namespace "secrets-3409" to be "Succeeded or Failed" +Mar 7 02:27:25.285: INFO: Pod "pod-secrets-fac6ed5a-1320-4e86-af8c-f7875682ef9a": Phase="Pending", Reason="", readiness=false. 
Elapsed: 3.140834ms +Mar 7 02:27:27.289: INFO: Pod "pod-secrets-fac6ed5a-1320-4e86-af8c-f7875682ef9a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00670322s +Mar 7 02:27:29.288: INFO: Pod "pod-secrets-fac6ed5a-1320-4e86-af8c-f7875682ef9a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.00649945s +STEP: Saw pod success 03/07/23 02:27:29.288 +Mar 7 02:27:29.288: INFO: Pod "pod-secrets-fac6ed5a-1320-4e86-af8c-f7875682ef9a" satisfied condition "Succeeded or Failed" +Mar 7 02:27:29.291: INFO: Trying to get logs from node node-2 pod pod-secrets-fac6ed5a-1320-4e86-af8c-f7875682ef9a container secret-volume-test: +STEP: delete the pod 03/07/23 02:27:29.295 +Mar 7 02:27:29.302: INFO: Waiting for pod pod-secrets-fac6ed5a-1320-4e86-af8c-f7875682ef9a to disappear +Mar 7 02:27:29.304: INFO: Pod pod-secrets-fac6ed5a-1320-4e86-af8c-f7875682ef9a no longer exists +[AfterEach] [sig-storage] Secrets + test/e2e/framework/framework.go:187 +Mar 7 02:27:29.304: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "secrets-3409" for this suite. 03/07/23 02:27:29.307 +{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance]","completed":12,"skipped":154,"failed":0} +------------------------------ +• [4.066 seconds] +[sig-storage] Secrets +test/e2e/common/storage/framework.go:23 + should be consumable from pods in volume [NodeConformance] [Conformance] + test/e2e/common/storage/secrets_volume.go:46 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-storage] Secrets + test/e2e/framework/framework.go:186 + STEP: Creating a kubernetes client 03/07/23 02:27:25.245 + Mar 7 02:27:25.245: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 + STEP: Building a namespace api object, basename secrets 03/07/23 02:27:25.247 + STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 02:27:25.26 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 02:27:25.262 + [It] should be consumable from pods in volume [NodeConformance] [Conformance] + test/e2e/common/storage/secrets_volume.go:46 + STEP: Creating secret with name secret-test-08d09a8b-8c6b-443c-9b55-9cc5d0afbd06 03/07/23 02:27:25.263 + STEP: Creating a pod to test consume secrets 03/07/23 02:27:25.268 + Mar 7 02:27:25.282: INFO: Waiting up to 5m0s for pod "pod-secrets-fac6ed5a-1320-4e86-af8c-f7875682ef9a" in namespace "secrets-3409" to be "Succeeded or Failed" + Mar 7 02:27:25.285: INFO: Pod "pod-secrets-fac6ed5a-1320-4e86-af8c-f7875682ef9a": Phase="Pending", Reason="", readiness=false. Elapsed: 3.140834ms + Mar 7 02:27:27.289: INFO: Pod "pod-secrets-fac6ed5a-1320-4e86-af8c-f7875682ef9a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00670322s + Mar 7 02:27:29.288: INFO: Pod "pod-secrets-fac6ed5a-1320-4e86-af8c-f7875682ef9a": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.00649945s + STEP: Saw pod success 03/07/23 02:27:29.288 + Mar 7 02:27:29.288: INFO: Pod "pod-secrets-fac6ed5a-1320-4e86-af8c-f7875682ef9a" satisfied condition "Succeeded or Failed" + Mar 7 02:27:29.291: INFO: Trying to get logs from node node-2 pod pod-secrets-fac6ed5a-1320-4e86-af8c-f7875682ef9a container secret-volume-test: + STEP: delete the pod 03/07/23 02:27:29.295 + Mar 7 02:27:29.302: INFO: Waiting for pod pod-secrets-fac6ed5a-1320-4e86-af8c-f7875682ef9a to disappear + Mar 7 02:27:29.304: INFO: Pod pod-secrets-fac6ed5a-1320-4e86-af8c-f7875682ef9a no longer exists + [AfterEach] [sig-storage] Secrets + test/e2e/framework/framework.go:187 + Mar 7 02:27:29.304: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + STEP: Destroying namespace "secrets-3409" for this suite. 03/07/23 02:27:29.307 + << End Captured GinkgoWriter Output +------------------------------ +SSS +------------------------------ +[sig-network] Services + should be able to create a functioning NodePort service [Conformance] + test/e2e/network/service.go:1268 +[BeforeEach] [sig-network] Services + test/e2e/framework/framework.go:186 +STEP: Creating a kubernetes client 03/07/23 02:27:29.311 +Mar 7 02:27:29.311: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 +STEP: Building a namespace api object, basename services 03/07/23 02:27:29.311 +STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 02:27:29.323 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 02:27:29.325 +[BeforeEach] [sig-network] Services + test/e2e/network/service.go:758 +[It] should be able to create a functioning NodePort service [Conformance] + test/e2e/network/service.go:1268 +STEP: creating service nodeport-test with type=NodePort in namespace services-3626 03/07/23 02:27:29.327 +STEP: creating replication controller nodeport-test in namespace services-3626 03/07/23 02:27:29.343 +I0307 02:27:29.349190 22 runners.go:193] Created replication controller with name: nodeport-test, namespace: services-3626, replica count: 2 +I0307 02:27:32.400159 22 runners.go:193] nodeport-test Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady +Mar 7 02:27:32.400: INFO: Creating new exec pod +Mar 7 02:27:32.427: INFO: Waiting up to 5m0s for pod "execpod9ptsm" in namespace "services-3626" to be "running" +Mar 7 02:27:32.430: INFO: Pod "execpod9ptsm": Phase="Pending", Reason="", readiness=false. Elapsed: 3.097595ms +Mar 7 02:27:34.433: INFO: Pod "execpod9ptsm": Phase="Running", Reason="", readiness=true. 
Elapsed: 2.006094312s +Mar 7 02:27:34.433: INFO: Pod "execpod9ptsm" satisfied condition "running" +Mar 7 02:27:35.436: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1093879902 --namespace=services-3626 exec execpod9ptsm -- /bin/sh -x -c echo hostName | nc -v -t -w 2 nodeport-test 80' +Mar 7 02:27:35.612: INFO: stderr: "+ nc -v+ -t -w 2 nodeport-test 80echo\n hostName\nConnection to nodeport-test 80 port [tcp/http] succeeded!\n" +Mar 7 02:27:35.612: INFO: stdout: "" +Mar 7 02:27:36.612: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1093879902 --namespace=services-3626 exec execpod9ptsm -- /bin/sh -x -c echo hostName | nc -v -t -w 2 nodeport-test 80' +Mar 7 02:27:36.789: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 nodeport-test 80\nConnection to nodeport-test 80 port [tcp/http] succeeded!\n" +Mar 7 02:27:36.789: INFO: stdout: "nodeport-test-4dcrt" +Mar 7 02:27:36.789: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1093879902 --namespace=services-3626 exec execpod9ptsm -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.96.108.164 80' +Mar 7 02:27:36.961: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 10.96.108.164 80\nConnection to 10.96.108.164 80 port [tcp/http] succeeded!\n" +Mar 7 02:27:36.962: INFO: stdout: "" +Mar 7 02:27:37.962: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1093879902 --namespace=services-3626 exec execpod9ptsm -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.96.108.164 80' +Mar 7 02:27:38.137: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 10.96.108.164 80\nConnection to 10.96.108.164 80 port [tcp/http] succeeded!\n" +Mar 7 02:27:38.137: INFO: stdout: "nodeport-test-4bl9t" +Mar 7 02:27:38.137: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1093879902 --namespace=services-3626 exec execpod9ptsm -- /bin/sh -x -c echo hostName | nc -v -t -w 2 192.168.1.102 32223' +Mar 7 02:27:38.324: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 192.168.1.102 32223\nConnection to 192.168.1.102 32223 port [tcp/*] succeeded!\n" +Mar 7 02:27:38.324: INFO: stdout: "nodeport-test-4dcrt" +Mar 7 02:27:38.324: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1093879902 --namespace=services-3626 exec execpod9ptsm -- /bin/sh -x -c echo hostName | nc -v -t -w 2 192.168.1.100 32223' +Mar 7 02:27:38.519: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 192.168.1.100 32223\nConnection to 192.168.1.100 32223 port [tcp/*] succeeded!\n" +Mar 7 02:27:38.519: INFO: stdout: "nodeport-test-4dcrt" +[AfterEach] [sig-network] Services + test/e2e/framework/framework.go:187 +Mar 7 02:27:38.519: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "services-3626" for this suite. 
03/07/23 02:27:38.524 +[AfterEach] [sig-network] Services + test/e2e/network/service.go:762 +{"msg":"PASSED [sig-network] Services should be able to create a functioning NodePort service [Conformance]","completed":13,"skipped":157,"failed":0} +------------------------------ +• [SLOW TEST] [9.218 seconds] +[sig-network] Services +test/e2e/network/common/framework.go:23 + should be able to create a functioning NodePort service [Conformance] + test/e2e/network/service.go:1268 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-network] Services + test/e2e/framework/framework.go:186 + STEP: Creating a kubernetes client 03/07/23 02:27:29.311 + Mar 7 02:27:29.311: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 + STEP: Building a namespace api object, basename services 03/07/23 02:27:29.311 + STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 02:27:29.323 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 02:27:29.325 + [BeforeEach] [sig-network] Services + test/e2e/network/service.go:758 + [It] should be able to create a functioning NodePort service [Conformance] + test/e2e/network/service.go:1268 + STEP: creating service nodeport-test with type=NodePort in namespace services-3626 03/07/23 02:27:29.327 + STEP: creating replication controller nodeport-test in namespace services-3626 03/07/23 02:27:29.343 + I0307 02:27:29.349190 22 runners.go:193] Created replication controller with name: nodeport-test, namespace: services-3626, replica count: 2 + I0307 02:27:32.400159 22 runners.go:193] nodeport-test Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady + Mar 7 02:27:32.400: INFO: Creating new exec pod + Mar 7 02:27:32.427: INFO: Waiting up to 5m0s for pod "execpod9ptsm" in namespace "services-3626" to be "running" + Mar 7 02:27:32.430: INFO: Pod "execpod9ptsm": Phase="Pending", Reason="", readiness=false. Elapsed: 3.097595ms + Mar 7 02:27:34.433: INFO: Pod "execpod9ptsm": Phase="Running", Reason="", readiness=true. 
Elapsed: 2.006094312s + Mar 7 02:27:34.433: INFO: Pod "execpod9ptsm" satisfied condition "running" + Mar 7 02:27:35.436: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1093879902 --namespace=services-3626 exec execpod9ptsm -- /bin/sh -x -c echo hostName | nc -v -t -w 2 nodeport-test 80' + Mar 7 02:27:35.612: INFO: stderr: "+ nc -v+ -t -w 2 nodeport-test 80echo\n hostName\nConnection to nodeport-test 80 port [tcp/http] succeeded!\n" + Mar 7 02:27:35.612: INFO: stdout: "" + Mar 7 02:27:36.612: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1093879902 --namespace=services-3626 exec execpod9ptsm -- /bin/sh -x -c echo hostName | nc -v -t -w 2 nodeport-test 80' + Mar 7 02:27:36.789: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 nodeport-test 80\nConnection to nodeport-test 80 port [tcp/http] succeeded!\n" + Mar 7 02:27:36.789: INFO: stdout: "nodeport-test-4dcrt" + Mar 7 02:27:36.789: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1093879902 --namespace=services-3626 exec execpod9ptsm -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.96.108.164 80' + Mar 7 02:27:36.961: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 10.96.108.164 80\nConnection to 10.96.108.164 80 port [tcp/http] succeeded!\n" + Mar 7 02:27:36.962: INFO: stdout: "" + Mar 7 02:27:37.962: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1093879902 --namespace=services-3626 exec execpod9ptsm -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.96.108.164 80' + Mar 7 02:27:38.137: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 10.96.108.164 80\nConnection to 10.96.108.164 80 port [tcp/http] succeeded!\n" + Mar 7 02:27:38.137: INFO: stdout: "nodeport-test-4bl9t" + Mar 7 02:27:38.137: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1093879902 --namespace=services-3626 exec execpod9ptsm -- /bin/sh -x -c echo hostName | nc -v -t -w 2 192.168.1.102 32223' + Mar 7 02:27:38.324: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 192.168.1.102 32223\nConnection to 192.168.1.102 32223 port [tcp/*] succeeded!\n" + Mar 7 02:27:38.324: INFO: stdout: "nodeport-test-4dcrt" + Mar 7 02:27:38.324: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1093879902 --namespace=services-3626 exec execpod9ptsm -- /bin/sh -x -c echo hostName | nc -v -t -w 2 192.168.1.100 32223' + Mar 7 02:27:38.519: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 192.168.1.100 32223\nConnection to 192.168.1.100 32223 port [tcp/*] succeeded!\n" + Mar 7 02:27:38.519: INFO: stdout: "nodeport-test-4dcrt" + [AfterEach] [sig-network] Services + test/e2e/framework/framework.go:187 + Mar 7 02:27:38.519: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + STEP: Destroying namespace "services-3626" for this suite. 
03/07/23 02:27:38.524 + [AfterEach] [sig-network] Services + test/e2e/network/service.go:762 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSSSSSSSSS +------------------------------ +[sig-api-machinery] Servers with support for Table transformation + should return a 406 for a backend which does not implement metadata [Conformance] + test/e2e/apimachinery/table_conversion.go:154 +[BeforeEach] [sig-api-machinery] Servers with support for Table transformation + test/e2e/framework/framework.go:186 +STEP: Creating a kubernetes client 03/07/23 02:27:38.529 +Mar 7 02:27:38.530: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 +STEP: Building a namespace api object, basename tables 03/07/23 02:27:38.53 +STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 02:27:38.542 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 02:27:38.545 +[BeforeEach] [sig-api-machinery] Servers with support for Table transformation + test/e2e/apimachinery/table_conversion.go:49 +[It] should return a 406 for a backend which does not implement metadata [Conformance] + test/e2e/apimachinery/table_conversion.go:154 +[AfterEach] [sig-api-machinery] Servers with support for Table transformation + test/e2e/framework/framework.go:187 +Mar 7 02:27:38.549: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "tables-1435" for this suite. 03/07/23 02:27:38.552 +{"msg":"PASSED [sig-api-machinery] Servers with support for Table transformation should return a 406 for a backend which does not implement metadata [Conformance]","completed":14,"skipped":175,"failed":0} +------------------------------ +• [0.027 seconds] +[sig-api-machinery] Servers with support for Table transformation +test/e2e/apimachinery/framework.go:23 + should return a 406 for a backend which does not implement metadata [Conformance] + test/e2e/apimachinery/table_conversion.go:154 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-api-machinery] Servers with support for Table transformation + test/e2e/framework/framework.go:186 + STEP: Creating a kubernetes client 03/07/23 02:27:38.529 + Mar 7 02:27:38.530: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 + STEP: Building a namespace api object, basename tables 03/07/23 02:27:38.53 + STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 02:27:38.542 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 02:27:38.545 + [BeforeEach] [sig-api-machinery] Servers with support for Table transformation + test/e2e/apimachinery/table_conversion.go:49 + [It] should return a 406 for a backend which does not implement metadata [Conformance] + test/e2e/apimachinery/table_conversion.go:154 + [AfterEach] [sig-api-machinery] Servers with support for Table transformation + test/e2e/framework/framework.go:187 + Mar 7 02:27:38.549: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + STEP: Destroying namespace "tables-1435" for this suite. 
03/07/23 02:27:38.552 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSS +------------------------------ +[sig-api-machinery] Namespaces [Serial] + should ensure that all pods are removed when a namespace is deleted [Conformance] + test/e2e/apimachinery/namespace.go:242 +[BeforeEach] [sig-api-machinery] Namespaces [Serial] + test/e2e/framework/framework.go:186 +STEP: Creating a kubernetes client 03/07/23 02:27:38.557 +Mar 7 02:27:38.557: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 +STEP: Building a namespace api object, basename namespaces 03/07/23 02:27:38.558 +STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 02:27:38.57 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 02:27:38.572 +[It] should ensure that all pods are removed when a namespace is deleted [Conformance] + test/e2e/apimachinery/namespace.go:242 +STEP: Creating a test namespace 03/07/23 02:27:38.574 +STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 02:27:38.585 +STEP: Creating a pod in the namespace 03/07/23 02:27:38.588 +STEP: Waiting for the pod to have running status 03/07/23 02:27:38.593 +Mar 7 02:27:38.593: INFO: Waiting up to 5m0s for pod "test-pod" in namespace "nsdeletetest-9688" to be "running" +Mar 7 02:27:38.596: INFO: Pod "test-pod": Phase="Pending", Reason="", readiness=false. Elapsed: 2.612308ms +Mar 7 02:27:40.616: INFO: Pod "test-pod": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022674511s +Mar 7 02:27:42.603: INFO: Pod "test-pod": Phase="Running", Reason="", readiness=true. Elapsed: 4.010028062s +Mar 7 02:27:42.603: INFO: Pod "test-pod" satisfied condition "running" +STEP: Deleting the namespace 03/07/23 02:27:42.603 +STEP: Waiting for the namespace to be removed. 03/07/23 02:27:42.636 +STEP: Recreating the namespace 03/07/23 02:27:53.641 +STEP: Verifying there are no pods in the namespace 03/07/23 02:27:53.658 +[AfterEach] [sig-api-machinery] Namespaces [Serial] + test/e2e/framework/framework.go:187 +Mar 7 02:27:53.663: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "namespaces-2811" for this suite. 03/07/23 02:27:53.666 +STEP: Destroying namespace "nsdeletetest-9688" for this suite. 03/07/23 02:27:53.672 +Mar 7 02:27:53.681: INFO: Namespace nsdeletetest-9688 was already deleted +STEP: Destroying namespace "nsdeletetest-5453" for this suite. 
03/07/23 02:27:53.681 +{"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted [Conformance]","completed":15,"skipped":186,"failed":0} +------------------------------ +• [SLOW TEST] [15.130 seconds] +[sig-api-machinery] Namespaces [Serial] +test/e2e/apimachinery/framework.go:23 + should ensure that all pods are removed when a namespace is deleted [Conformance] + test/e2e/apimachinery/namespace.go:242 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-api-machinery] Namespaces [Serial] + test/e2e/framework/framework.go:186 + STEP: Creating a kubernetes client 03/07/23 02:27:38.557 + Mar 7 02:27:38.557: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 + STEP: Building a namespace api object, basename namespaces 03/07/23 02:27:38.558 + STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 02:27:38.57 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 02:27:38.572 + [It] should ensure that all pods are removed when a namespace is deleted [Conformance] + test/e2e/apimachinery/namespace.go:242 + STEP: Creating a test namespace 03/07/23 02:27:38.574 + STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 02:27:38.585 + STEP: Creating a pod in the namespace 03/07/23 02:27:38.588 + STEP: Waiting for the pod to have running status 03/07/23 02:27:38.593 + Mar 7 02:27:38.593: INFO: Waiting up to 5m0s for pod "test-pod" in namespace "nsdeletetest-9688" to be "running" + Mar 7 02:27:38.596: INFO: Pod "test-pod": Phase="Pending", Reason="", readiness=false. Elapsed: 2.612308ms + Mar 7 02:27:40.616: INFO: Pod "test-pod": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022674511s + Mar 7 02:27:42.603: INFO: Pod "test-pod": Phase="Running", Reason="", readiness=true. Elapsed: 4.010028062s + Mar 7 02:27:42.603: INFO: Pod "test-pod" satisfied condition "running" + STEP: Deleting the namespace 03/07/23 02:27:42.603 + STEP: Waiting for the namespace to be removed. 03/07/23 02:27:42.636 + STEP: Recreating the namespace 03/07/23 02:27:53.641 + STEP: Verifying there are no pods in the namespace 03/07/23 02:27:53.658 + [AfterEach] [sig-api-machinery] Namespaces [Serial] + test/e2e/framework/framework.go:187 + Mar 7 02:27:53.663: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + STEP: Destroying namespace "namespaces-2811" for this suite. 03/07/23 02:27:53.666 + STEP: Destroying namespace "nsdeletetest-9688" for this suite. 03/07/23 02:27:53.672 + Mar 7 02:27:53.681: INFO: Namespace nsdeletetest-9688 was already deleted + STEP: Destroying namespace "nsdeletetest-5453" for this suite. 
03/07/23 02:27:53.681 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-api-machinery] Aggregator + Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance] + test/e2e/apimachinery/aggregator.go:100 +[BeforeEach] [sig-api-machinery] Aggregator + test/e2e/framework/framework.go:186 +STEP: Creating a kubernetes client 03/07/23 02:27:53.688 +Mar 7 02:27:53.688: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 +STEP: Building a namespace api object, basename aggregator 03/07/23 02:27:53.689 +STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 02:27:53.702 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 02:27:53.704 +[BeforeEach] [sig-api-machinery] Aggregator + test/e2e/apimachinery/aggregator.go:78 +Mar 7 02:27:53.706: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 +[It] Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance] + test/e2e/apimachinery/aggregator.go:100 +STEP: Registering the sample API server. 03/07/23 02:27:53.707 +Mar 7 02:27:54.439: INFO: deployment "sample-apiserver-deployment" doesn't have the required revision set +Mar 7 02:27:56.500: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.March, 7, 2, 27, 54, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 7, 2, 27, 54, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.March, 7, 2, 27, 54, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 7, 2, 27, 54, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-5885c99c55\" is progressing."}}, CollisionCount:(*int32)(nil)} +Mar 7 02:27:58.503: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.March, 7, 2, 27, 54, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 7, 2, 27, 54, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.March, 7, 2, 27, 54, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 7, 2, 27, 54, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-5885c99c55\" is progressing."}}, CollisionCount:(*int32)(nil)} +Mar 7 02:28:00.503: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.March, 7, 2, 27, 54, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 7, 2, 27, 54, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, 
v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.March, 7, 2, 27, 54, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 7, 2, 27, 54, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-5885c99c55\" is progressing."}}, CollisionCount:(*int32)(nil)} +Mar 7 02:28:02.504: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.March, 7, 2, 27, 54, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 7, 2, 27, 54, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.March, 7, 2, 27, 54, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 7, 2, 27, 54, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-5885c99c55\" is progressing."}}, CollisionCount:(*int32)(nil)} +Mar 7 02:28:04.507: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.March, 7, 2, 27, 54, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 7, 2, 27, 54, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.March, 7, 2, 27, 54, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 7, 2, 27, 54, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-5885c99c55\" is progressing."}}, CollisionCount:(*int32)(nil)} +Mar 7 02:28:06.504: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.March, 7, 2, 27, 54, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 7, 2, 27, 54, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.March, 7, 2, 27, 54, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 7, 2, 27, 54, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-5885c99c55\" is progressing."}}, CollisionCount:(*int32)(nil)} +Mar 7 02:28:08.505: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.March, 7, 2, 27, 54, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 7, 2, 27, 54, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", 
LastUpdateTime:time.Date(2023, time.March, 7, 2, 27, 54, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 7, 2, 27, 54, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-5885c99c55\" is progressing."}}, CollisionCount:(*int32)(nil)} +Mar 7 02:28:10.503: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.March, 7, 2, 27, 54, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 7, 2, 27, 54, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.March, 7, 2, 27, 54, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 7, 2, 27, 54, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-5885c99c55\" is progressing."}}, CollisionCount:(*int32)(nil)} +Mar 7 02:28:12.507: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.March, 7, 2, 27, 54, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 7, 2, 27, 54, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.March, 7, 2, 27, 54, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 7, 2, 27, 54, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-5885c99c55\" is progressing."}}, CollisionCount:(*int32)(nil)} +Mar 7 02:28:14.503: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.March, 7, 2, 27, 54, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 7, 2, 27, 54, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.March, 7, 2, 27, 54, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 7, 2, 27, 54, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-5885c99c55\" is progressing."}}, CollisionCount:(*int32)(nil)} +Mar 7 02:28:16.503: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.March, 7, 2, 27, 54, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 7, 2, 27, 54, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.March, 7, 2, 27, 54, 0, time.Local), 
LastTransitionTime:time.Date(2023, time.March, 7, 2, 27, 54, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-5885c99c55\" is progressing."}}, CollisionCount:(*int32)(nil)} +Mar 7 02:28:18.675: INFO: Waited 131.785503ms for the sample-apiserver to be ready to handle requests. +STEP: Read Status for v1alpha1.wardle.example.com 03/07/23 02:28:18.733 +STEP: kubectl patch apiservice v1alpha1.wardle.example.com -p '{"spec":{"versionPriority": 400}}' 03/07/23 02:28:18.736 +STEP: List APIServices 03/07/23 02:28:18.74 +Mar 7 02:28:18.746: INFO: Found v1alpha1.wardle.example.com in APIServiceList +[AfterEach] [sig-api-machinery] Aggregator + test/e2e/apimachinery/aggregator.go:68 +[AfterEach] [sig-api-machinery] Aggregator + test/e2e/framework/framework.go:187 +Mar 7 02:28:19.211: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "aggregator-5373" for this suite. 03/07/23 02:28:19.262 +{"msg":"PASSED [sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","completed":16,"skipped":216,"failed":0} +------------------------------ +• [SLOW TEST] [25.626 seconds] +[sig-api-machinery] Aggregator +test/e2e/apimachinery/framework.go:23 + Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance] + test/e2e/apimachinery/aggregator.go:100 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-api-machinery] Aggregator + test/e2e/framework/framework.go:186 + STEP: Creating a kubernetes client 03/07/23 02:27:53.688 + Mar 7 02:27:53.688: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 + STEP: Building a namespace api object, basename aggregator 03/07/23 02:27:53.689 + STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 02:27:53.702 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 02:27:53.704 + [BeforeEach] [sig-api-machinery] Aggregator + test/e2e/apimachinery/aggregator.go:78 + Mar 7 02:27:53.706: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 + [It] Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance] + test/e2e/apimachinery/aggregator.go:100 + STEP: Registering the sample API server. 
03/07/23 02:27:53.707 + Mar 7 02:27:54.439: INFO: deployment "sample-apiserver-deployment" doesn't have the required revision set + Mar 7 02:27:56.500: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.March, 7, 2, 27, 54, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 7, 2, 27, 54, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.March, 7, 2, 27, 54, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 7, 2, 27, 54, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-5885c99c55\" is progressing."}}, CollisionCount:(*int32)(nil)} + Mar 7 02:27:58.503: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.March, 7, 2, 27, 54, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 7, 2, 27, 54, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.March, 7, 2, 27, 54, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 7, 2, 27, 54, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-5885c99c55\" is progressing."}}, CollisionCount:(*int32)(nil)} + Mar 7 02:28:00.503: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.March, 7, 2, 27, 54, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 7, 2, 27, 54, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.March, 7, 2, 27, 54, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 7, 2, 27, 54, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-5885c99c55\" is progressing."}}, CollisionCount:(*int32)(nil)} + Mar 7 02:28:02.504: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.March, 7, 2, 27, 54, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 7, 2, 27, 54, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.March, 7, 2, 27, 54, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 7, 2, 27, 54, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet 
\"sample-apiserver-deployment-5885c99c55\" is progressing."}}, CollisionCount:(*int32)(nil)} + Mar 7 02:28:04.507: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.March, 7, 2, 27, 54, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 7, 2, 27, 54, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.March, 7, 2, 27, 54, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 7, 2, 27, 54, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-5885c99c55\" is progressing."}}, CollisionCount:(*int32)(nil)} + Mar 7 02:28:06.504: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.March, 7, 2, 27, 54, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 7, 2, 27, 54, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.March, 7, 2, 27, 54, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 7, 2, 27, 54, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-5885c99c55\" is progressing."}}, CollisionCount:(*int32)(nil)} + Mar 7 02:28:08.505: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.March, 7, 2, 27, 54, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 7, 2, 27, 54, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.March, 7, 2, 27, 54, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 7, 2, 27, 54, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-5885c99c55\" is progressing."}}, CollisionCount:(*int32)(nil)} + Mar 7 02:28:10.503: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.March, 7, 2, 27, 54, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 7, 2, 27, 54, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.March, 7, 2, 27, 54, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 7, 2, 27, 54, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-5885c99c55\" is progressing."}}, 
CollisionCount:(*int32)(nil)} + Mar 7 02:28:12.507: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.March, 7, 2, 27, 54, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 7, 2, 27, 54, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.March, 7, 2, 27, 54, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 7, 2, 27, 54, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-5885c99c55\" is progressing."}}, CollisionCount:(*int32)(nil)} + Mar 7 02:28:14.503: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.March, 7, 2, 27, 54, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 7, 2, 27, 54, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.March, 7, 2, 27, 54, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 7, 2, 27, 54, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-5885c99c55\" is progressing."}}, CollisionCount:(*int32)(nil)} + Mar 7 02:28:16.503: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.March, 7, 2, 27, 54, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 7, 2, 27, 54, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.March, 7, 2, 27, 54, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 7, 2, 27, 54, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-5885c99c55\" is progressing."}}, CollisionCount:(*int32)(nil)} + Mar 7 02:28:18.675: INFO: Waited 131.785503ms for the sample-apiserver to be ready to handle requests. + STEP: Read Status for v1alpha1.wardle.example.com 03/07/23 02:28:18.733 + STEP: kubectl patch apiservice v1alpha1.wardle.example.com -p '{"spec":{"versionPriority": 400}}' 03/07/23 02:28:18.736 + STEP: List APIServices 03/07/23 02:28:18.74 + Mar 7 02:28:18.746: INFO: Found v1alpha1.wardle.example.com in APIServiceList + [AfterEach] [sig-api-machinery] Aggregator + test/e2e/apimachinery/aggregator.go:68 + [AfterEach] [sig-api-machinery] Aggregator + test/e2e/framework/framework.go:187 + Mar 7 02:28:19.211: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + STEP: Destroying namespace "aggregator-5373" for this suite. 
03/07/23 02:28:19.262 + << End Captured GinkgoWriter Output +------------------------------ +SS +------------------------------ +[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + listing validating webhooks should work [Conformance] + test/e2e/apimachinery/webhook.go:581 +[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/framework/framework.go:186 +STEP: Creating a kubernetes client 03/07/23 02:28:19.314 +Mar 7 02:28:19.314: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 +STEP: Building a namespace api object, basename webhook 03/07/23 02:28:19.315 +STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 02:28:19.331 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 02:28:19.333 +[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/apimachinery/webhook.go:89 +STEP: Setting up server cert 03/07/23 02:28:19.344 +STEP: Create role binding to let webhook read extension-apiserver-authentication 03/07/23 02:28:20.162 +STEP: Deploying the webhook pod 03/07/23 02:28:20.175 +STEP: Wait for the deployment to be ready 03/07/23 02:28:20.183 +Mar 7 02:28:20.190: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set +STEP: Deploying the webhook service 03/07/23 02:28:22.199 +STEP: Verifying the service has paired with the endpoint 03/07/23 02:28:22.211 +Mar 7 02:28:23.212: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 +[It] listing validating webhooks should work [Conformance] + test/e2e/apimachinery/webhook.go:581 +STEP: Listing all of the created validation webhooks 03/07/23 02:28:23.256 +STEP: Creating a configMap that does not comply to the validation webhook rules 03/07/23 02:28:23.291 +STEP: Deleting the collection of validation webhooks 03/07/23 02:28:23.316 +STEP: Creating a configMap that does not comply to the validation webhook rules 03/07/23 02:28:23.36 +[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/framework/framework.go:187 +Mar 7 02:28:23.371: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "webhook-3680" for this suite. 03/07/23 02:28:23.374 +STEP: Destroying namespace "webhook-3680-markers" for this suite. 
03/07/23 02:28:23.38 +[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/apimachinery/webhook.go:104 +{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance]","completed":17,"skipped":218,"failed":0} +------------------------------ +• [4.128 seconds] +[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] +test/e2e/apimachinery/framework.go:23 + listing validating webhooks should work [Conformance] + test/e2e/apimachinery/webhook.go:581 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/framework/framework.go:186 + STEP: Creating a kubernetes client 03/07/23 02:28:19.314 + Mar 7 02:28:19.314: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 + STEP: Building a namespace api object, basename webhook 03/07/23 02:28:19.315 + STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 02:28:19.331 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 02:28:19.333 + [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/apimachinery/webhook.go:89 + STEP: Setting up server cert 03/07/23 02:28:19.344 + STEP: Create role binding to let webhook read extension-apiserver-authentication 03/07/23 02:28:20.162 + STEP: Deploying the webhook pod 03/07/23 02:28:20.175 + STEP: Wait for the deployment to be ready 03/07/23 02:28:20.183 + Mar 7 02:28:20.190: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set + STEP: Deploying the webhook service 03/07/23 02:28:22.199 + STEP: Verifying the service has paired with the endpoint 03/07/23 02:28:22.211 + Mar 7 02:28:23.212: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 + [It] listing validating webhooks should work [Conformance] + test/e2e/apimachinery/webhook.go:581 + STEP: Listing all of the created validation webhooks 03/07/23 02:28:23.256 + STEP: Creating a configMap that does not comply to the validation webhook rules 03/07/23 02:28:23.291 + STEP: Deleting the collection of validation webhooks 03/07/23 02:28:23.316 + STEP: Creating a configMap that does not comply to the validation webhook rules 03/07/23 02:28:23.36 + [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/framework/framework.go:187 + Mar 7 02:28:23.371: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + STEP: Destroying namespace "webhook-3680" for this suite. 03/07/23 02:28:23.374 + STEP: Destroying namespace "webhook-3680-markers" for this suite. 
03/07/23 02:28:23.38 + [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/apimachinery/webhook.go:104 + << End Captured GinkgoWriter Output +------------------------------ +SS +------------------------------ +[sig-node] Security Context when creating containers with AllowPrivilegeEscalation + should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/common/node/security_context.go:608 +[BeforeEach] [sig-node] Security Context + test/e2e/framework/framework.go:186 +STEP: Creating a kubernetes client 03/07/23 02:28:23.443 +Mar 7 02:28:23.443: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 +STEP: Building a namespace api object, basename security-context-test 03/07/23 02:28:23.444 +STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 02:28:23.5 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 02:28:23.505 +[BeforeEach] [sig-node] Security Context + test/e2e/common/node/security_context.go:49 +[It] should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/common/node/security_context.go:608 +Mar 7 02:28:23.518: INFO: Waiting up to 5m0s for pod "alpine-nnp-false-6cc05a5b-92d8-4b3f-8d13-33a53d36de10" in namespace "security-context-test-7532" to be "Succeeded or Failed" +Mar 7 02:28:23.532: INFO: Pod "alpine-nnp-false-6cc05a5b-92d8-4b3f-8d13-33a53d36de10": Phase="Pending", Reason="", readiness=false. Elapsed: 13.808707ms +Mar 7 02:28:25.535: INFO: Pod "alpine-nnp-false-6cc05a5b-92d8-4b3f-8d13-33a53d36de10": Phase="Pending", Reason="", readiness=false. Elapsed: 2.016760523s +Mar 7 02:28:27.536: INFO: Pod "alpine-nnp-false-6cc05a5b-92d8-4b3f-8d13-33a53d36de10": Phase="Pending", Reason="", readiness=false. Elapsed: 4.01835693s +Mar 7 02:28:29.535: INFO: Pod "alpine-nnp-false-6cc05a5b-92d8-4b3f-8d13-33a53d36de10": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.017361013s +Mar 7 02:28:29.535: INFO: Pod "alpine-nnp-false-6cc05a5b-92d8-4b3f-8d13-33a53d36de10" satisfied condition "Succeeded or Failed" +[AfterEach] [sig-node] Security Context + test/e2e/framework/framework.go:187 +Mar 7 02:28:29.540: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "security-context-test-7532" for this suite. 
03/07/23 02:28:29.543 +{"msg":"PASSED [sig-node] Security Context when creating containers with AllowPrivilegeEscalation should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]","completed":18,"skipped":220,"failed":0} +------------------------------ +• [SLOW TEST] [6.105 seconds] +[sig-node] Security Context +test/e2e/common/node/framework.go:23 + when creating containers with AllowPrivilegeEscalation + test/e2e/common/node/security_context.go:554 + should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/common/node/security_context.go:608 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-node] Security Context + test/e2e/framework/framework.go:186 + STEP: Creating a kubernetes client 03/07/23 02:28:23.443 + Mar 7 02:28:23.443: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 + STEP: Building a namespace api object, basename security-context-test 03/07/23 02:28:23.444 + STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 02:28:23.5 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 02:28:23.505 + [BeforeEach] [sig-node] Security Context + test/e2e/common/node/security_context.go:49 + [It] should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/common/node/security_context.go:608 + Mar 7 02:28:23.518: INFO: Waiting up to 5m0s for pod "alpine-nnp-false-6cc05a5b-92d8-4b3f-8d13-33a53d36de10" in namespace "security-context-test-7532" to be "Succeeded or Failed" + Mar 7 02:28:23.532: INFO: Pod "alpine-nnp-false-6cc05a5b-92d8-4b3f-8d13-33a53d36de10": Phase="Pending", Reason="", readiness=false. Elapsed: 13.808707ms + Mar 7 02:28:25.535: INFO: Pod "alpine-nnp-false-6cc05a5b-92d8-4b3f-8d13-33a53d36de10": Phase="Pending", Reason="", readiness=false. Elapsed: 2.016760523s + Mar 7 02:28:27.536: INFO: Pod "alpine-nnp-false-6cc05a5b-92d8-4b3f-8d13-33a53d36de10": Phase="Pending", Reason="", readiness=false. Elapsed: 4.01835693s + Mar 7 02:28:29.535: INFO: Pod "alpine-nnp-false-6cc05a5b-92d8-4b3f-8d13-33a53d36de10": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.017361013s + Mar 7 02:28:29.535: INFO: Pod "alpine-nnp-false-6cc05a5b-92d8-4b3f-8d13-33a53d36de10" satisfied condition "Succeeded or Failed" + [AfterEach] [sig-node] Security Context + test/e2e/framework/framework.go:187 + Mar 7 02:28:29.540: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + STEP: Destroying namespace "security-context-test-7532" for this suite. 
03/07/23 02:28:29.543 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-node] Variable Expansion + should fail substituting values in a volume subpath with absolute path [Slow] [Conformance] + test/e2e/common/node/expansion.go:185 +[BeforeEach] [sig-node] Variable Expansion + test/e2e/framework/framework.go:186 +STEP: Creating a kubernetes client 03/07/23 02:28:29.55 +Mar 7 02:28:29.550: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 +STEP: Building a namespace api object, basename var-expansion 03/07/23 02:28:29.551 +STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 02:28:29.561 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 02:28:29.563 +[It] should fail substituting values in a volume subpath with absolute path [Slow] [Conformance] + test/e2e/common/node/expansion.go:185 +Mar 7 02:28:29.570: INFO: Waiting up to 2m0s for pod "var-expansion-37b162cb-5a6b-404e-9364-49c4b3f7b850" in namespace "var-expansion-606" to be "container 0 failed with reason CreateContainerConfigError" +Mar 7 02:28:29.573: INFO: Pod "var-expansion-37b162cb-5a6b-404e-9364-49c4b3f7b850": Phase="Pending", Reason="", readiness=false. Elapsed: 2.904967ms +Mar 7 02:28:31.577: INFO: Pod "var-expansion-37b162cb-5a6b-404e-9364-49c4b3f7b850": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006933262s +Mar 7 02:28:31.577: INFO: Pod "var-expansion-37b162cb-5a6b-404e-9364-49c4b3f7b850" satisfied condition "container 0 failed with reason CreateContainerConfigError" +Mar 7 02:28:31.577: INFO: Deleting pod "var-expansion-37b162cb-5a6b-404e-9364-49c4b3f7b850" in namespace "var-expansion-606" +Mar 7 02:28:31.582: INFO: Wait up to 5m0s for pod "var-expansion-37b162cb-5a6b-404e-9364-49c4b3f7b850" to be fully deleted +[AfterEach] [sig-node] Variable Expansion + test/e2e/framework/framework.go:187 +Mar 7 02:28:35.589: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "var-expansion-606" for this suite. 03/07/23 02:28:35.592 +{"msg":"PASSED [sig-node] Variable Expansion should fail substituting values in a volume subpath with absolute path [Slow] [Conformance]","completed":19,"skipped":244,"failed":0} +------------------------------ +• [SLOW TEST] [6.073 seconds] +[sig-node] Variable Expansion +test/e2e/common/node/framework.go:23 + should fail substituting values in a volume subpath with absolute path [Slow] [Conformance] + test/e2e/common/node/expansion.go:185 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-node] Variable Expansion + test/e2e/framework/framework.go:186 + STEP: Creating a kubernetes client 03/07/23 02:28:29.55 + Mar 7 02:28:29.550: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 + STEP: Building a namespace api object, basename var-expansion 03/07/23 02:28:29.551 + STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 02:28:29.561 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 02:28:29.563 + [It] should fail substituting values in a volume subpath with absolute path [Slow] [Conformance] + test/e2e/common/node/expansion.go:185 + Mar 7 02:28:29.570: INFO: Waiting up to 2m0s for pod "var-expansion-37b162cb-5a6b-404e-9364-49c4b3f7b850" in namespace "var-expansion-606" to be "container 0 failed with reason CreateContainerConfigError" + Mar 7 02:28:29.573: INFO: Pod "var-expansion-37b162cb-5a6b-404e-9364-49c4b3f7b850": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.904967ms + Mar 7 02:28:31.577: INFO: Pod "var-expansion-37b162cb-5a6b-404e-9364-49c4b3f7b850": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006933262s + Mar 7 02:28:31.577: INFO: Pod "var-expansion-37b162cb-5a6b-404e-9364-49c4b3f7b850" satisfied condition "container 0 failed with reason CreateContainerConfigError" + Mar 7 02:28:31.577: INFO: Deleting pod "var-expansion-37b162cb-5a6b-404e-9364-49c4b3f7b850" in namespace "var-expansion-606" + Mar 7 02:28:31.582: INFO: Wait up to 5m0s for pod "var-expansion-37b162cb-5a6b-404e-9364-49c4b3f7b850" to be fully deleted + [AfterEach] [sig-node] Variable Expansion + test/e2e/framework/framework.go:187 + Mar 7 02:28:35.589: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + STEP: Destroying namespace "var-expansion-606" for this suite. 03/07/23 02:28:35.592 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSSSSSSSS +------------------------------ +[sig-network] Services + should delete a collection of services [Conformance] + test/e2e/network/service.go:3641 +[BeforeEach] [sig-network] Services + test/e2e/framework/framework.go:186 +STEP: Creating a kubernetes client 03/07/23 02:28:35.624 +Mar 7 02:28:35.624: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 +STEP: Building a namespace api object, basename services 03/07/23 02:28:35.625 +STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 02:28:35.636 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 02:28:35.638 +[BeforeEach] [sig-network] Services + test/e2e/network/service.go:758 +[It] should delete a collection of services [Conformance] + test/e2e/network/service.go:3641 +STEP: creating a collection of services 03/07/23 02:28:35.64 +Mar 7 02:28:35.640: INFO: Creating e2e-svc-a-gkww8 +Mar 7 02:28:35.653: INFO: Creating e2e-svc-b-44qzx +Mar 7 02:28:35.669: INFO: Creating e2e-svc-c-bkzq4 +STEP: deleting service collection 03/07/23 02:28:35.692 +Mar 7 02:28:35.744: INFO: Collection of services has been deleted +[AfterEach] [sig-network] Services + test/e2e/framework/framework.go:187 +Mar 7 02:28:35.744: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "services-7775" for this suite. 
03/07/23 02:28:35.748 +[AfterEach] [sig-network] Services + test/e2e/network/service.go:762 +{"msg":"PASSED [sig-network] Services should delete a collection of services [Conformance]","completed":20,"skipped":261,"failed":0} +------------------------------ +• [0.130 seconds] +[sig-network] Services +test/e2e/network/common/framework.go:23 + should delete a collection of services [Conformance] + test/e2e/network/service.go:3641 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-network] Services + test/e2e/framework/framework.go:186 + STEP: Creating a kubernetes client 03/07/23 02:28:35.624 + Mar 7 02:28:35.624: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 + STEP: Building a namespace api object, basename services 03/07/23 02:28:35.625 + STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 02:28:35.636 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 02:28:35.638 + [BeforeEach] [sig-network] Services + test/e2e/network/service.go:758 + [It] should delete a collection of services [Conformance] + test/e2e/network/service.go:3641 + STEP: creating a collection of services 03/07/23 02:28:35.64 + Mar 7 02:28:35.640: INFO: Creating e2e-svc-a-gkww8 + Mar 7 02:28:35.653: INFO: Creating e2e-svc-b-44qzx + Mar 7 02:28:35.669: INFO: Creating e2e-svc-c-bkzq4 + STEP: deleting service collection 03/07/23 02:28:35.692 + Mar 7 02:28:35.744: INFO: Collection of services has been deleted + [AfterEach] [sig-network] Services + test/e2e/framework/framework.go:187 + Mar 7 02:28:35.744: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + STEP: Destroying namespace "services-7775" for this suite. 03/07/23 02:28:35.748 + [AfterEach] [sig-network] Services + test/e2e/network/service.go:762 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-node] Variable Expansion + should allow substituting values in a container's command [NodeConformance] [Conformance] + test/e2e/common/node/expansion.go:72 +[BeforeEach] [sig-node] Variable Expansion + test/e2e/framework/framework.go:186 +STEP: Creating a kubernetes client 03/07/23 02:28:35.755 +Mar 7 02:28:35.755: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 +STEP: Building a namespace api object, basename var-expansion 03/07/23 02:28:35.755 +STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 02:28:35.776 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 02:28:35.778 +[It] should allow substituting values in a container's command [NodeConformance] [Conformance] + test/e2e/common/node/expansion.go:72 +STEP: Creating a pod to test substitution in container's command 03/07/23 02:28:35.78 +Mar 7 02:28:35.788: INFO: Waiting up to 5m0s for pod "var-expansion-ecc79d03-9c6e-457f-87af-c514bf46d7a5" in namespace "var-expansion-9549" to be "Succeeded or Failed" +Mar 7 02:28:35.790: INFO: Pod "var-expansion-ecc79d03-9c6e-457f-87af-c514bf46d7a5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.773122ms +Mar 7 02:28:37.793: INFO: Pod "var-expansion-ecc79d03-9c6e-457f-87af-c514bf46d7a5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005686624s +Mar 7 02:28:39.797: INFO: Pod "var-expansion-ecc79d03-9c6e-457f-87af-c514bf46d7a5": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.009149591s +STEP: Saw pod success 03/07/23 02:28:39.797 +Mar 7 02:28:39.797: INFO: Pod "var-expansion-ecc79d03-9c6e-457f-87af-c514bf46d7a5" satisfied condition "Succeeded or Failed" +Mar 7 02:28:39.802: INFO: Trying to get logs from node node-2 pod var-expansion-ecc79d03-9c6e-457f-87af-c514bf46d7a5 container dapi-container: +STEP: delete the pod 03/07/23 02:28:39.809 +Mar 7 02:28:39.817: INFO: Waiting for pod var-expansion-ecc79d03-9c6e-457f-87af-c514bf46d7a5 to disappear +Mar 7 02:28:39.819: INFO: Pod var-expansion-ecc79d03-9c6e-457f-87af-c514bf46d7a5 no longer exists +[AfterEach] [sig-node] Variable Expansion + test/e2e/framework/framework.go:187 +Mar 7 02:28:39.819: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "var-expansion-9549" for this suite. 03/07/23 02:28:39.823 +{"msg":"PASSED [sig-node] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance]","completed":21,"skipped":282,"failed":0} +------------------------------ +• [4.074 seconds] +[sig-node] Variable Expansion +test/e2e/common/node/framework.go:23 + should allow substituting values in a container's command [NodeConformance] [Conformance] + test/e2e/common/node/expansion.go:72 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-node] Variable Expansion + test/e2e/framework/framework.go:186 + STEP: Creating a kubernetes client 03/07/23 02:28:35.755 + Mar 7 02:28:35.755: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 + STEP: Building a namespace api object, basename var-expansion 03/07/23 02:28:35.755 + STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 02:28:35.776 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 02:28:35.778 + [It] should allow substituting values in a container's command [NodeConformance] [Conformance] + test/e2e/common/node/expansion.go:72 + STEP: Creating a pod to test substitution in container's command 03/07/23 02:28:35.78 + Mar 7 02:28:35.788: INFO: Waiting up to 5m0s for pod "var-expansion-ecc79d03-9c6e-457f-87af-c514bf46d7a5" in namespace "var-expansion-9549" to be "Succeeded or Failed" + Mar 7 02:28:35.790: INFO: Pod "var-expansion-ecc79d03-9c6e-457f-87af-c514bf46d7a5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.773122ms + Mar 7 02:28:37.793: INFO: Pod "var-expansion-ecc79d03-9c6e-457f-87af-c514bf46d7a5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005686624s + Mar 7 02:28:39.797: INFO: Pod "var-expansion-ecc79d03-9c6e-457f-87af-c514bf46d7a5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.009149591s + STEP: Saw pod success 03/07/23 02:28:39.797 + Mar 7 02:28:39.797: INFO: Pod "var-expansion-ecc79d03-9c6e-457f-87af-c514bf46d7a5" satisfied condition "Succeeded or Failed" + Mar 7 02:28:39.802: INFO: Trying to get logs from node node-2 pod var-expansion-ecc79d03-9c6e-457f-87af-c514bf46d7a5 container dapi-container: + STEP: delete the pod 03/07/23 02:28:39.809 + Mar 7 02:28:39.817: INFO: Waiting for pod var-expansion-ecc79d03-9c6e-457f-87af-c514bf46d7a5 to disappear + Mar 7 02:28:39.819: INFO: Pod var-expansion-ecc79d03-9c6e-457f-87af-c514bf46d7a5 no longer exists + [AfterEach] [sig-node] Variable Expansion + test/e2e/framework/framework.go:187 + Mar 7 02:28:39.819: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + STEP: Destroying namespace "var-expansion-9549" for this suite. 
03/07/23 02:28:39.823 + << End Captured GinkgoWriter Output +------------------------------ +[sig-apps] Daemon set [Serial] + should retry creating failed daemon pods [Conformance] + test/e2e/apps/daemon_set.go:293 +[BeforeEach] [sig-apps] Daemon set [Serial] + test/e2e/framework/framework.go:186 +STEP: Creating a kubernetes client 03/07/23 02:28:39.829 +Mar 7 02:28:39.829: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 +STEP: Building a namespace api object, basename daemonsets 03/07/23 02:28:39.829 +STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 02:28:39.843 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 02:28:39.845 +[BeforeEach] [sig-apps] Daemon set [Serial] + test/e2e/apps/daemon_set.go:145 +[It] should retry creating failed daemon pods [Conformance] + test/e2e/apps/daemon_set.go:293 +STEP: Creating a simple DaemonSet "daemon-set" 03/07/23 02:28:39.86 +STEP: Check that daemon pods launch on every node of the cluster. 03/07/23 02:28:39.864 +Mar 7 02:28:39.870: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 +Mar 7 02:28:39.870: INFO: Node bootstrap is running 0 daemon pod, expected 1 +Mar 7 02:28:40.878: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 +Mar 7 02:28:40.878: INFO: Node bootstrap is running 0 daemon pod, expected 1 +Mar 7 02:28:41.876: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 1 +Mar 7 02:28:41.876: INFO: Node bootstrap is running 0 daemon pod, expected 1 +Mar 7 02:28:42.880: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 1 +Mar 7 02:28:42.880: INFO: Node bootstrap is running 0 daemon pod, expected 1 +Mar 7 02:28:43.877: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 1 +Mar 7 02:28:43.877: INFO: Node bootstrap is running 0 daemon pod, expected 1 +Mar 7 02:28:44.878: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 1 +Mar 7 02:28:44.878: INFO: Node bootstrap is running 0 daemon pod, expected 1 +Mar 7 02:28:45.877: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 2 +Mar 7 02:28:45.877: INFO: Node bootstrap is running 0 daemon pod, expected 1 +Mar 7 02:28:46.876: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 3 +Mar 7 02:28:46.876: INFO: Number of running nodes: 3, number of available pods: 3 in daemonset daemon-set +STEP: Set a daemon pod's phase to 'Failed', check that the daemon pod is revived. 03/07/23 02:28:46.878 +Mar 7 02:28:46.897: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 3 +Mar 7 02:28:46.897: INFO: Number of running nodes: 3, number of available pods: 3 in daemonset daemon-set +STEP: Wait for the failed daemon pod to be completely deleted. 
03/07/23 02:28:46.897 +[AfterEach] [sig-apps] Daemon set [Serial] + test/e2e/apps/daemon_set.go:110 +STEP: Deleting DaemonSet "daemon-set" 03/07/23 02:28:46.905 +STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-6064, will wait for the garbage collector to delete the pods 03/07/23 02:28:46.905 +Mar 7 02:28:46.963: INFO: Deleting DaemonSet.extensions daemon-set took: 4.94649ms +Mar 7 02:28:47.064: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.808993ms +Mar 7 02:28:49.668: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 +Mar 7 02:28:49.668: INFO: Number of running nodes: 0, number of available pods: 0 in daemonset daemon-set +Mar 7 02:28:49.671: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"resourceVersion":"35023"},"items":null} + +Mar 7 02:28:49.674: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"35023"},"items":null} + +[AfterEach] [sig-apps] Daemon set [Serial] + test/e2e/framework/framework.go:187 +Mar 7 02:28:49.688: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "daemonsets-6064" for this suite. 03/07/23 02:28:49.691 +{"msg":"PASSED [sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]","completed":22,"skipped":282,"failed":0} +------------------------------ +• [SLOW TEST] [9.866 seconds] +[sig-apps] Daemon set [Serial] +test/e2e/apps/framework.go:23 + should retry creating failed daemon pods [Conformance] + test/e2e/apps/daemon_set.go:293 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-apps] Daemon set [Serial] + test/e2e/framework/framework.go:186 + STEP: Creating a kubernetes client 03/07/23 02:28:39.829 + Mar 7 02:28:39.829: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 + STEP: Building a namespace api object, basename daemonsets 03/07/23 02:28:39.829 + STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 02:28:39.843 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 02:28:39.845 + [BeforeEach] [sig-apps] Daemon set [Serial] + test/e2e/apps/daemon_set.go:145 + [It] should retry creating failed daemon pods [Conformance] + test/e2e/apps/daemon_set.go:293 + STEP: Creating a simple DaemonSet "daemon-set" 03/07/23 02:28:39.86 + STEP: Check that daemon pods launch on every node of the cluster. 
03/07/23 02:28:39.864 + Mar 7 02:28:39.870: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 + Mar 7 02:28:39.870: INFO: Node bootstrap is running 0 daemon pod, expected 1 + Mar 7 02:28:40.878: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 + Mar 7 02:28:40.878: INFO: Node bootstrap is running 0 daemon pod, expected 1 + Mar 7 02:28:41.876: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 1 + Mar 7 02:28:41.876: INFO: Node bootstrap is running 0 daemon pod, expected 1 + Mar 7 02:28:42.880: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 1 + Mar 7 02:28:42.880: INFO: Node bootstrap is running 0 daemon pod, expected 1 + Mar 7 02:28:43.877: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 1 + Mar 7 02:28:43.877: INFO: Node bootstrap is running 0 daemon pod, expected 1 + Mar 7 02:28:44.878: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 1 + Mar 7 02:28:44.878: INFO: Node bootstrap is running 0 daemon pod, expected 1 + Mar 7 02:28:45.877: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 2 + Mar 7 02:28:45.877: INFO: Node bootstrap is running 0 daemon pod, expected 1 + Mar 7 02:28:46.876: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 3 + Mar 7 02:28:46.876: INFO: Number of running nodes: 3, number of available pods: 3 in daemonset daemon-set + STEP: Set a daemon pod's phase to 'Failed', check that the daemon pod is revived. 03/07/23 02:28:46.878 + Mar 7 02:28:46.897: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 3 + Mar 7 02:28:46.897: INFO: Number of running nodes: 3, number of available pods: 3 in daemonset daemon-set + STEP: Wait for the failed daemon pod to be completely deleted. 03/07/23 02:28:46.897 + [AfterEach] [sig-apps] Daemon set [Serial] + test/e2e/apps/daemon_set.go:110 + STEP: Deleting DaemonSet "daemon-set" 03/07/23 02:28:46.905 + STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-6064, will wait for the garbage collector to delete the pods 03/07/23 02:28:46.905 + Mar 7 02:28:46.963: INFO: Deleting DaemonSet.extensions daemon-set took: 4.94649ms + Mar 7 02:28:47.064: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.808993ms + Mar 7 02:28:49.668: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 + Mar 7 02:28:49.668: INFO: Number of running nodes: 0, number of available pods: 0 in daemonset daemon-set + Mar 7 02:28:49.671: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"resourceVersion":"35023"},"items":null} + + Mar 7 02:28:49.674: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"35023"},"items":null} + + [AfterEach] [sig-apps] Daemon set [Serial] + test/e2e/framework/framework.go:187 + Mar 7 02:28:49.688: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + STEP: Destroying namespace "daemonsets-6064" for this suite. 
03/07/23 02:28:49.691 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSSSSSSS +------------------------------ +[sig-auth] ServiceAccounts + ServiceAccountIssuerDiscovery should support OIDC discovery of service account issuer [Conformance] + test/e2e/auth/service_accounts.go:528 +[BeforeEach] [sig-auth] ServiceAccounts + test/e2e/framework/framework.go:186 +STEP: Creating a kubernetes client 03/07/23 02:28:49.696 +Mar 7 02:28:49.696: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 +STEP: Building a namespace api object, basename svcaccounts 03/07/23 02:28:49.697 +STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 02:28:49.707 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 02:28:49.709 +[It] ServiceAccountIssuerDiscovery should support OIDC discovery of service account issuer [Conformance] + test/e2e/auth/service_accounts.go:528 +Mar 7 02:28:49.720: INFO: created pod +Mar 7 02:28:49.720: INFO: Waiting up to 5m0s for pod "oidc-discovery-validator" in namespace "svcaccounts-6974" to be "Succeeded or Failed" +Mar 7 02:28:49.723: INFO: Pod "oidc-discovery-validator": Phase="Pending", Reason="", readiness=false. Elapsed: 2.240242ms +Mar 7 02:28:51.727: INFO: Pod "oidc-discovery-validator": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006653226s +Mar 7 02:28:53.727: INFO: Pod "oidc-discovery-validator": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.006943235s +STEP: Saw pod success 03/07/23 02:28:53.727 +Mar 7 02:28:53.727: INFO: Pod "oidc-discovery-validator" satisfied condition "Succeeded or Failed" +Mar 7 02:29:23.731: INFO: polling logs +Mar 7 02:29:23.737: INFO: Pod logs: +I0307 02:28:50.493973 1 log.go:195] OK: Got token +I0307 02:28:50.493999 1 log.go:195] validating with in-cluster discovery +I0307 02:28:50.494239 1 log.go:195] OK: got issuer https://kubernetes.default.svc.cluster.local +I0307 02:28:50.494265 1 log.go:195] Full, not-validated claims: +openidmetadata.claims{Claims:jwt.Claims{Issuer:"https://kubernetes.default.svc.cluster.local", Subject:"system:serviceaccount:svcaccounts-6974:default", Audience:jwt.Audience{"oidc-discovery-test"}, Expiry:1678156730, NotBefore:1678156130, IssuedAt:1678156130, ID:""}, Kubernetes:openidmetadata.kubeClaims{Namespace:"svcaccounts-6974", ServiceAccount:openidmetadata.kubeName{Name:"default", UID:"4c385c70-8dfb-4a3c-9054-20912d513ab9"}}} +I0307 02:28:50.501456 1 log.go:195] OK: Constructed OIDC provider for issuer https://kubernetes.default.svc.cluster.local +I0307 02:28:50.505951 1 log.go:195] OK: Validated signature on JWT +I0307 02:28:50.506013 1 log.go:195] OK: Got valid claims from token! +I0307 02:28:50.506037 1 log.go:195] Full, validated claims: +&openidmetadata.claims{Claims:jwt.Claims{Issuer:"https://kubernetes.default.svc.cluster.local", Subject:"system:serviceaccount:svcaccounts-6974:default", Audience:jwt.Audience{"oidc-discovery-test"}, Expiry:1678156730, NotBefore:1678156130, IssuedAt:1678156130, ID:""}, Kubernetes:openidmetadata.kubeClaims{Namespace:"svcaccounts-6974", ServiceAccount:openidmetadata.kubeName{Name:"default", UID:"4c385c70-8dfb-4a3c-9054-20912d513ab9"}}} + +Mar 7 02:29:23.737: INFO: completed pod +[AfterEach] [sig-auth] ServiceAccounts + test/e2e/framework/framework.go:187 +Mar 7 02:29:23.742: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "svcaccounts-6974" for this suite. 
03/07/23 02:29:23.745 +{"msg":"PASSED [sig-auth] ServiceAccounts ServiceAccountIssuerDiscovery should support OIDC discovery of service account issuer [Conformance]","completed":23,"skipped":298,"failed":0} +------------------------------ +• [SLOW TEST] [34.053 seconds] +[sig-auth] ServiceAccounts +test/e2e/auth/framework.go:23 + ServiceAccountIssuerDiscovery should support OIDC discovery of service account issuer [Conformance] + test/e2e/auth/service_accounts.go:528 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-auth] ServiceAccounts + test/e2e/framework/framework.go:186 + STEP: Creating a kubernetes client 03/07/23 02:28:49.696 + Mar 7 02:28:49.696: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 + STEP: Building a namespace api object, basename svcaccounts 03/07/23 02:28:49.697 + STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 02:28:49.707 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 02:28:49.709 + [It] ServiceAccountIssuerDiscovery should support OIDC discovery of service account issuer [Conformance] + test/e2e/auth/service_accounts.go:528 + Mar 7 02:28:49.720: INFO: created pod + Mar 7 02:28:49.720: INFO: Waiting up to 5m0s for pod "oidc-discovery-validator" in namespace "svcaccounts-6974" to be "Succeeded or Failed" + Mar 7 02:28:49.723: INFO: Pod "oidc-discovery-validator": Phase="Pending", Reason="", readiness=false. Elapsed: 2.240242ms + Mar 7 02:28:51.727: INFO: Pod "oidc-discovery-validator": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006653226s + Mar 7 02:28:53.727: INFO: Pod "oidc-discovery-validator": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.006943235s + STEP: Saw pod success 03/07/23 02:28:53.727 + Mar 7 02:28:53.727: INFO: Pod "oidc-discovery-validator" satisfied condition "Succeeded or Failed" + Mar 7 02:29:23.731: INFO: polling logs + Mar 7 02:29:23.737: INFO: Pod logs: + I0307 02:28:50.493973 1 log.go:195] OK: Got token + I0307 02:28:50.493999 1 log.go:195] validating with in-cluster discovery + I0307 02:28:50.494239 1 log.go:195] OK: got issuer https://kubernetes.default.svc.cluster.local + I0307 02:28:50.494265 1 log.go:195] Full, not-validated claims: + openidmetadata.claims{Claims:jwt.Claims{Issuer:"https://kubernetes.default.svc.cluster.local", Subject:"system:serviceaccount:svcaccounts-6974:default", Audience:jwt.Audience{"oidc-discovery-test"}, Expiry:1678156730, NotBefore:1678156130, IssuedAt:1678156130, ID:""}, Kubernetes:openidmetadata.kubeClaims{Namespace:"svcaccounts-6974", ServiceAccount:openidmetadata.kubeName{Name:"default", UID:"4c385c70-8dfb-4a3c-9054-20912d513ab9"}}} + I0307 02:28:50.501456 1 log.go:195] OK: Constructed OIDC provider for issuer https://kubernetes.default.svc.cluster.local + I0307 02:28:50.505951 1 log.go:195] OK: Validated signature on JWT + I0307 02:28:50.506013 1 log.go:195] OK: Got valid claims from token! 
+ I0307 02:28:50.506037 1 log.go:195] Full, validated claims: + &openidmetadata.claims{Claims:jwt.Claims{Issuer:"https://kubernetes.default.svc.cluster.local", Subject:"system:serviceaccount:svcaccounts-6974:default", Audience:jwt.Audience{"oidc-discovery-test"}, Expiry:1678156730, NotBefore:1678156130, IssuedAt:1678156130, ID:""}, Kubernetes:openidmetadata.kubeClaims{Namespace:"svcaccounts-6974", ServiceAccount:openidmetadata.kubeName{Name:"default", UID:"4c385c70-8dfb-4a3c-9054-20912d513ab9"}}} + + Mar 7 02:29:23.737: INFO: completed pod + [AfterEach] [sig-auth] ServiceAccounts + test/e2e/framework/framework.go:187 + Mar 7 02:29:23.742: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + STEP: Destroying namespace "svcaccounts-6974" for this suite. 03/07/23 02:29:23.745 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-node] PodTemplates + should run the lifecycle of PodTemplates [Conformance] + test/e2e/common/node/podtemplates.go:53 +[BeforeEach] [sig-node] PodTemplates + test/e2e/framework/framework.go:186 +STEP: Creating a kubernetes client 03/07/23 02:29:23.753 +Mar 7 02:29:23.753: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 +STEP: Building a namespace api object, basename podtemplate 03/07/23 02:29:23.754 +STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 02:29:23.768 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 02:29:23.77 +[It] should run the lifecycle of PodTemplates [Conformance] + test/e2e/common/node/podtemplates.go:53 +[AfterEach] [sig-node] PodTemplates + test/e2e/framework/framework.go:187 +Mar 7 02:29:23.791: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "podtemplate-7429" for this suite. 03/07/23 02:29:23.794 +{"msg":"PASSED [sig-node] PodTemplates should run the lifecycle of PodTemplates [Conformance]","completed":24,"skipped":343,"failed":0} +------------------------------ +• [0.045 seconds] +[sig-node] PodTemplates +test/e2e/common/node/framework.go:23 + should run the lifecycle of PodTemplates [Conformance] + test/e2e/common/node/podtemplates.go:53 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-node] PodTemplates + test/e2e/framework/framework.go:186 + STEP: Creating a kubernetes client 03/07/23 02:29:23.753 + Mar 7 02:29:23.753: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 + STEP: Building a namespace api object, basename podtemplate 03/07/23 02:29:23.754 + STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 02:29:23.768 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 02:29:23.77 + [It] should run the lifecycle of PodTemplates [Conformance] + test/e2e/common/node/podtemplates.go:53 + [AfterEach] [sig-node] PodTemplates + test/e2e/framework/framework.go:187 + Mar 7 02:29:23.791: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + STEP: Destroying namespace "podtemplate-7429" for this suite. 
03/07/23 02:29:23.794 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-network] Proxy version v1 + should proxy through a service and a pod [Conformance] + test/e2e/network/proxy.go:101 +[BeforeEach] version v1 + test/e2e/framework/framework.go:186 +STEP: Creating a kubernetes client 03/07/23 02:29:23.799 +Mar 7 02:29:23.799: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 +STEP: Building a namespace api object, basename proxy 03/07/23 02:29:23.8 +STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 02:29:23.816 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 02:29:23.819 +[It] should proxy through a service and a pod [Conformance] + test/e2e/network/proxy.go:101 +STEP: starting an echo server on multiple ports 03/07/23 02:29:23.83 +STEP: creating replication controller proxy-service-gdzs8 in namespace proxy-4438 03/07/23 02:29:23.831 +I0307 02:29:23.838844 22 runners.go:193] Created replication controller with name: proxy-service-gdzs8, namespace: proxy-4438, replica count: 1 +I0307 02:29:24.890155 22 runners.go:193] proxy-service-gdzs8 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady +I0307 02:29:25.891002 22 runners.go:193] proxy-service-gdzs8 Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady +Mar 7 02:29:25.893: INFO: setup took 2.072886258s, starting test cases +STEP: running 16 cases, 20 attempts per case, 320 total attempts 03/07/23 02:29:25.893 +Mar 7 02:29:25.901: INFO: (0) /api/v1/namespaces/proxy-4438/pods/proxy-service-gdzs8-fklcc:162/proxy/: bar (200; 7.998039ms) +Mar 7 02:29:25.902: INFO: (0) /api/v1/namespaces/proxy-4438/pods/http:proxy-service-gdzs8-fklcc:162/proxy/: bar (200; 8.484583ms) +Mar 7 02:29:25.902: INFO: (0) /api/v1/namespaces/proxy-4438/pods/proxy-service-gdzs8-fklcc/proxy/: test (200; 8.525524ms) +Mar 7 02:29:25.902: INFO: (0) /api/v1/namespaces/proxy-4438/pods/proxy-service-gdzs8-fklcc:160/proxy/: foo (200; 8.385479ms) +Mar 7 02:29:25.906: INFO: (0) /api/v1/namespaces/proxy-4438/services/proxy-service-gdzs8:portname1/proxy/: foo (200; 12.935126ms) +Mar 7 02:29:25.906: INFO: (0) /api/v1/namespaces/proxy-4438/pods/http:proxy-service-gdzs8-fklcc:1080/proxy/: ... (200; 12.949903ms) +Mar 7 02:29:25.906: INFO: (0) /api/v1/namespaces/proxy-4438/services/http:proxy-service-gdzs8:portname2/proxy/: bar (200; 12.921702ms) +Mar 7 02:29:25.906: INFO: (0) /api/v1/namespaces/proxy-4438/pods/http:proxy-service-gdzs8-fklcc:160/proxy/: foo (200; 12.70324ms) +Mar 7 02:29:25.906: INFO: (0) /api/v1/namespaces/proxy-4438/services/proxy-service-gdzs8:portname2/proxy/: bar (200; 12.984327ms) +Mar 7 02:29:25.906: INFO: (0) /api/v1/namespaces/proxy-4438/pods/https:proxy-service-gdzs8-fklcc:462/proxy/: tls qux (200; 12.90708ms) +Mar 7 02:29:25.906: INFO: (0) /api/v1/namespaces/proxy-4438/pods/https:proxy-service-gdzs8-fklcc:443/proxy/: test<... 
(200; 13.107449ms) +Mar 7 02:29:25.906: INFO: (0) /api/v1/namespaces/proxy-4438/services/http:proxy-service-gdzs8:portname1/proxy/: foo (200; 12.924746ms) +Mar 7 02:29:25.907: INFO: (0) /api/v1/namespaces/proxy-4438/services/https:proxy-service-gdzs8:tlsportname2/proxy/: tls qux (200; 13.067094ms) +Mar 7 02:29:25.907: INFO: (0) /api/v1/namespaces/proxy-4438/services/https:proxy-service-gdzs8:tlsportname1/proxy/: tls baz (200; 13.277073ms) +Mar 7 02:29:25.911: INFO: (1) /api/v1/namespaces/proxy-4438/pods/proxy-service-gdzs8-fklcc/proxy/: test (200; 3.99465ms) +Mar 7 02:29:25.911: INFO: (1) /api/v1/namespaces/proxy-4438/pods/http:proxy-service-gdzs8-fklcc:160/proxy/: foo (200; 4.051253ms) +Mar 7 02:29:25.911: INFO: (1) /api/v1/namespaces/proxy-4438/pods/https:proxy-service-gdzs8-fklcc:460/proxy/: tls baz (200; 4.358229ms) +Mar 7 02:29:25.912: INFO: (1) /api/v1/namespaces/proxy-4438/pods/proxy-service-gdzs8-fklcc:1080/proxy/: test<... (200; 5.631064ms) +Mar 7 02:29:25.912: INFO: (1) /api/v1/namespaces/proxy-4438/services/proxy-service-gdzs8:portname2/proxy/: bar (200; 5.745548ms) +Mar 7 02:29:25.913: INFO: (1) /api/v1/namespaces/proxy-4438/pods/proxy-service-gdzs8-fklcc:160/proxy/: foo (200; 5.733239ms) +Mar 7 02:29:25.913: INFO: (1) /api/v1/namespaces/proxy-4438/pods/http:proxy-service-gdzs8-fklcc:1080/proxy/: ... (200; 5.902644ms) +Mar 7 02:29:25.913: INFO: (1) /api/v1/namespaces/proxy-4438/pods/http:proxy-service-gdzs8-fklcc:162/proxy/: bar (200; 6.413546ms) +Mar 7 02:29:25.913: INFO: (1) /api/v1/namespaces/proxy-4438/pods/proxy-service-gdzs8-fklcc:162/proxy/: bar (200; 6.511017ms) +Mar 7 02:29:25.914: INFO: (1) /api/v1/namespaces/proxy-4438/services/https:proxy-service-gdzs8:tlsportname2/proxy/: tls qux (200; 6.962417ms) +Mar 7 02:29:25.914: INFO: (1) /api/v1/namespaces/proxy-4438/pods/https:proxy-service-gdzs8-fklcc:462/proxy/: tls qux (200; 6.822937ms) +Mar 7 02:29:25.914: INFO: (1) /api/v1/namespaces/proxy-4438/pods/https:proxy-service-gdzs8-fklcc:443/proxy/: test (200; 2.202316ms) +Mar 7 02:29:25.917: INFO: (2) /api/v1/namespaces/proxy-4438/pods/http:proxy-service-gdzs8-fklcc:162/proxy/: bar (200; 2.631428ms) +Mar 7 02:29:25.918: INFO: (2) /api/v1/namespaces/proxy-4438/pods/https:proxy-service-gdzs8-fklcc:462/proxy/: tls qux (200; 2.958763ms) +Mar 7 02:29:25.918: INFO: (2) /api/v1/namespaces/proxy-4438/pods/http:proxy-service-gdzs8-fklcc:160/proxy/: foo (200; 3.731829ms) +Mar 7 02:29:25.919: INFO: (2) /api/v1/namespaces/proxy-4438/pods/proxy-service-gdzs8-fklcc:160/proxy/: foo (200; 4.482962ms) +Mar 7 02:29:25.919: INFO: (2) /api/v1/namespaces/proxy-4438/pods/proxy-service-gdzs8-fklcc:1080/proxy/: test<... (200; 4.733444ms) +Mar 7 02:29:25.920: INFO: (2) /api/v1/namespaces/proxy-4438/pods/proxy-service-gdzs8-fklcc:162/proxy/: bar (200; 4.560124ms) +Mar 7 02:29:25.920: INFO: (2) /api/v1/namespaces/proxy-4438/pods/https:proxy-service-gdzs8-fklcc:443/proxy/: ... 
(200; 5.128502ms) +Mar 7 02:29:25.920: INFO: (2) /api/v1/namespaces/proxy-4438/services/https:proxy-service-gdzs8:tlsportname1/proxy/: tls baz (200; 4.862281ms) +Mar 7 02:29:25.920: INFO: (2) /api/v1/namespaces/proxy-4438/pods/https:proxy-service-gdzs8-fklcc:460/proxy/: tls baz (200; 4.958616ms) +Mar 7 02:29:25.921: INFO: (2) /api/v1/namespaces/proxy-4438/services/http:proxy-service-gdzs8:portname2/proxy/: bar (200; 6.498805ms) +Mar 7 02:29:25.921: INFO: (2) /api/v1/namespaces/proxy-4438/services/proxy-service-gdzs8:portname1/proxy/: foo (200; 6.240476ms) +Mar 7 02:29:25.922: INFO: (2) /api/v1/namespaces/proxy-4438/services/http:proxy-service-gdzs8:portname1/proxy/: foo (200; 6.760615ms) +Mar 7 02:29:25.922: INFO: (2) /api/v1/namespaces/proxy-4438/services/proxy-service-gdzs8:portname2/proxy/: bar (200; 6.926158ms) +Mar 7 02:29:25.922: INFO: (2) /api/v1/namespaces/proxy-4438/services/https:proxy-service-gdzs8:tlsportname2/proxy/: tls qux (200; 6.863935ms) +Mar 7 02:29:25.933: INFO: (3) /api/v1/namespaces/proxy-4438/pods/http:proxy-service-gdzs8-fklcc:162/proxy/: bar (200; 10.739905ms) +Mar 7 02:29:25.933: INFO: (3) /api/v1/namespaces/proxy-4438/pods/proxy-service-gdzs8-fklcc:160/proxy/: foo (200; 11.039716ms) +Mar 7 02:29:25.933: INFO: (3) /api/v1/namespaces/proxy-4438/pods/proxy-service-gdzs8-fklcc:1080/proxy/: test<... (200; 11.064822ms) +Mar 7 02:29:25.933: INFO: (3) /api/v1/namespaces/proxy-4438/pods/proxy-service-gdzs8-fklcc/proxy/: test (200; 11.195206ms) +Mar 7 02:29:25.933: INFO: (3) /api/v1/namespaces/proxy-4438/pods/proxy-service-gdzs8-fklcc:162/proxy/: bar (200; 11.032607ms) +Mar 7 02:29:25.933: INFO: (3) /api/v1/namespaces/proxy-4438/pods/http:proxy-service-gdzs8-fklcc:1080/proxy/: ... (200; 11.113099ms) +Mar 7 02:29:25.933: INFO: (3) /api/v1/namespaces/proxy-4438/pods/http:proxy-service-gdzs8-fklcc:160/proxy/: foo (200; 11.18441ms) +Mar 7 02:29:25.933: INFO: (3) /api/v1/namespaces/proxy-4438/pods/https:proxy-service-gdzs8-fklcc:443/proxy/: ... (200; 5.859522ms) +Mar 7 02:29:25.944: INFO: (4) /api/v1/namespaces/proxy-4438/services/proxy-service-gdzs8:portname1/proxy/: foo (200; 5.994354ms) +Mar 7 02:29:25.944: INFO: (4) /api/v1/namespaces/proxy-4438/pods/proxy-service-gdzs8-fklcc:1080/proxy/: test<... 
(200; 5.836747ms) +Mar 7 02:29:25.944: INFO: (4) /api/v1/namespaces/proxy-4438/pods/proxy-service-gdzs8-fklcc:160/proxy/: foo (200; 5.889892ms) +Mar 7 02:29:25.944: INFO: (4) /api/v1/namespaces/proxy-4438/pods/https:proxy-service-gdzs8-fklcc:460/proxy/: tls baz (200; 5.85434ms) +Mar 7 02:29:25.944: INFO: (4) /api/v1/namespaces/proxy-4438/pods/proxy-service-gdzs8-fklcc/proxy/: test (200; 5.929734ms) +Mar 7 02:29:25.944: INFO: (4) /api/v1/namespaces/proxy-4438/pods/http:proxy-service-gdzs8-fklcc:162/proxy/: bar (200; 5.881151ms) +Mar 7 02:29:25.944: INFO: (4) /api/v1/namespaces/proxy-4438/pods/proxy-service-gdzs8-fklcc:162/proxy/: bar (200; 6.14705ms) +Mar 7 02:29:25.944: INFO: (4) /api/v1/namespaces/proxy-4438/pods/http:proxy-service-gdzs8-fklcc:160/proxy/: foo (200; 6.791471ms) +Mar 7 02:29:25.945: INFO: (4) /api/v1/namespaces/proxy-4438/services/https:proxy-service-gdzs8:tlsportname1/proxy/: tls baz (200; 7.758978ms) +Mar 7 02:29:25.946: INFO: (4) /api/v1/namespaces/proxy-4438/services/http:proxy-service-gdzs8:portname1/proxy/: foo (200; 7.85881ms) +Mar 7 02:29:25.946: INFO: (4) /api/v1/namespaces/proxy-4438/services/http:proxy-service-gdzs8:portname2/proxy/: bar (200; 8.001281ms) +Mar 7 02:29:25.946: INFO: (4) /api/v1/namespaces/proxy-4438/services/https:proxy-service-gdzs8:tlsportname2/proxy/: tls qux (200; 8.389695ms) +Mar 7 02:29:25.946: INFO: (4) /api/v1/namespaces/proxy-4438/services/proxy-service-gdzs8:portname2/proxy/: bar (200; 8.705256ms) +Mar 7 02:29:25.949: INFO: (5) /api/v1/namespaces/proxy-4438/pods/https:proxy-service-gdzs8-fklcc:462/proxy/: tls qux (200; 2.592093ms) +Mar 7 02:29:25.949: INFO: (5) /api/v1/namespaces/proxy-4438/pods/proxy-service-gdzs8-fklcc:162/proxy/: bar (200; 2.983039ms) +Mar 7 02:29:25.951: INFO: (5) /api/v1/namespaces/proxy-4438/pods/https:proxy-service-gdzs8-fklcc:460/proxy/: tls baz (200; 4.296198ms) +Mar 7 02:29:25.951: INFO: (5) /api/v1/namespaces/proxy-4438/pods/https:proxy-service-gdzs8-fklcc:443/proxy/: ... (200; 5.524246ms) +Mar 7 02:29:25.953: INFO: (5) /api/v1/namespaces/proxy-4438/services/proxy-service-gdzs8:portname2/proxy/: bar (200; 6.083369ms) +Mar 7 02:29:25.953: INFO: (5) /api/v1/namespaces/proxy-4438/pods/http:proxy-service-gdzs8-fklcc:160/proxy/: foo (200; 6.306184ms) +Mar 7 02:29:25.953: INFO: (5) /api/v1/namespaces/proxy-4438/pods/http:proxy-service-gdzs8-fklcc:162/proxy/: bar (200; 6.022785ms) +Mar 7 02:29:25.953: INFO: (5) /api/v1/namespaces/proxy-4438/pods/proxy-service-gdzs8-fklcc:160/proxy/: foo (200; 6.142211ms) +Mar 7 02:29:25.953: INFO: (5) /api/v1/namespaces/proxy-4438/pods/proxy-service-gdzs8-fklcc:1080/proxy/: test<... (200; 6.273426ms) +Mar 7 02:29:25.954: INFO: (5) /api/v1/namespaces/proxy-4438/services/http:proxy-service-gdzs8:portname1/proxy/: foo (200; 6.680499ms) +Mar 7 02:29:25.954: INFO: (5) /api/v1/namespaces/proxy-4438/pods/proxy-service-gdzs8-fklcc/proxy/: test (200; 6.56849ms) +Mar 7 02:29:25.954: INFO: (5) /api/v1/namespaces/proxy-4438/services/http:proxy-service-gdzs8:portname2/proxy/: bar (200; 7.138731ms) +Mar 7 02:29:25.954: INFO: (5) /api/v1/namespaces/proxy-4438/services/proxy-service-gdzs8:portname1/proxy/: foo (200; 7.483364ms) +Mar 7 02:29:25.955: INFO: (5) /api/v1/namespaces/proxy-4438/services/https:proxy-service-gdzs8:tlsportname2/proxy/: tls qux (200; 7.927144ms) +Mar 7 02:29:25.957: INFO: (6) /api/v1/namespaces/proxy-4438/pods/https:proxy-service-gdzs8-fklcc:443/proxy/: test<... 
(200; 4.571208ms) +Mar 7 02:29:25.960: INFO: (6) /api/v1/namespaces/proxy-4438/pods/http:proxy-service-gdzs8-fklcc:160/proxy/: foo (200; 4.70885ms) +Mar 7 02:29:25.960: INFO: (6) /api/v1/namespaces/proxy-4438/pods/https:proxy-service-gdzs8-fklcc:462/proxy/: tls qux (200; 4.380602ms) +Mar 7 02:29:25.960: INFO: (6) /api/v1/namespaces/proxy-4438/pods/proxy-service-gdzs8-fklcc/proxy/: test (200; 4.708291ms) +Mar 7 02:29:25.960: INFO: (6) /api/v1/namespaces/proxy-4438/services/http:proxy-service-gdzs8:portname1/proxy/: foo (200; 4.893719ms) +Mar 7 02:29:25.960: INFO: (6) /api/v1/namespaces/proxy-4438/pods/http:proxy-service-gdzs8-fklcc:1080/proxy/: ... (200; 4.675659ms) +Mar 7 02:29:25.961: INFO: (6) /api/v1/namespaces/proxy-4438/services/proxy-service-gdzs8:portname2/proxy/: bar (200; 5.649335ms) +Mar 7 02:29:25.961: INFO: (6) /api/v1/namespaces/proxy-4438/services/proxy-service-gdzs8:portname1/proxy/: foo (200; 5.742089ms) +Mar 7 02:29:25.961: INFO: (6) /api/v1/namespaces/proxy-4438/services/http:proxy-service-gdzs8:portname2/proxy/: bar (200; 6.25064ms) +Mar 7 02:29:25.961: INFO: (6) /api/v1/namespaces/proxy-4438/services/https:proxy-service-gdzs8:tlsportname2/proxy/: tls qux (200; 5.865093ms) +Mar 7 02:29:25.962: INFO: (6) /api/v1/namespaces/proxy-4438/services/https:proxy-service-gdzs8:tlsportname1/proxy/: tls baz (200; 6.798625ms) +Mar 7 02:29:25.967: INFO: (7) /api/v1/namespaces/proxy-4438/pods/http:proxy-service-gdzs8-fklcc:160/proxy/: foo (200; 4.264303ms) +Mar 7 02:29:25.968: INFO: (7) /api/v1/namespaces/proxy-4438/pods/proxy-service-gdzs8-fklcc:1080/proxy/: test<... (200; 4.901159ms) +Mar 7 02:29:25.969: INFO: (7) /api/v1/namespaces/proxy-4438/pods/http:proxy-service-gdzs8-fklcc:1080/proxy/: ... (200; 5.708769ms) +Mar 7 02:29:25.969: INFO: (7) /api/v1/namespaces/proxy-4438/pods/proxy-service-gdzs8-fklcc:162/proxy/: bar (200; 5.662319ms) +Mar 7 02:29:25.969: INFO: (7) /api/v1/namespaces/proxy-4438/pods/http:proxy-service-gdzs8-fklcc:162/proxy/: bar (200; 5.901782ms) +Mar 7 02:29:25.969: INFO: (7) /api/v1/namespaces/proxy-4438/pods/proxy-service-gdzs8-fklcc:160/proxy/: foo (200; 5.997922ms) +Mar 7 02:29:25.969: INFO: (7) /api/v1/namespaces/proxy-4438/pods/proxy-service-gdzs8-fklcc/proxy/: test (200; 5.805222ms) +Mar 7 02:29:25.969: INFO: (7) /api/v1/namespaces/proxy-4438/pods/https:proxy-service-gdzs8-fklcc:462/proxy/: tls qux (200; 6.187157ms) +Mar 7 02:29:25.969: INFO: (7) /api/v1/namespaces/proxy-4438/pods/https:proxy-service-gdzs8-fklcc:460/proxy/: tls baz (200; 6.020673ms) +Mar 7 02:29:25.970: INFO: (7) /api/v1/namespaces/proxy-4438/services/https:proxy-service-gdzs8:tlsportname1/proxy/: tls baz (200; 7.678404ms) +Mar 7 02:29:25.970: INFO: (7) /api/v1/namespaces/proxy-4438/services/http:proxy-service-gdzs8:portname2/proxy/: bar (200; 7.83856ms) +Mar 7 02:29:25.970: INFO: (7) /api/v1/namespaces/proxy-4438/pods/https:proxy-service-gdzs8-fklcc:443/proxy/: test (200; 5.251649ms) +Mar 7 02:29:25.976: INFO: (8) /api/v1/namespaces/proxy-4438/pods/proxy-service-gdzs8-fklcc:1080/proxy/: test<... (200; 5.214231ms) +Mar 7 02:29:25.976: INFO: (8) /api/v1/namespaces/proxy-4438/pods/http:proxy-service-gdzs8-fklcc:162/proxy/: bar (200; 5.095571ms) +Mar 7 02:29:25.977: INFO: (8) /api/v1/namespaces/proxy-4438/pods/http:proxy-service-gdzs8-fklcc:1080/proxy/: ... 
(200; 5.68ms) +Mar 7 02:29:25.977: INFO: (8) /api/v1/namespaces/proxy-4438/pods/https:proxy-service-gdzs8-fklcc:460/proxy/: tls baz (200; 5.381576ms) +Mar 7 02:29:25.977: INFO: (8) /api/v1/namespaces/proxy-4438/pods/http:proxy-service-gdzs8-fklcc:160/proxy/: foo (200; 5.909207ms) +Mar 7 02:29:25.977: INFO: (8) /api/v1/namespaces/proxy-4438/pods/https:proxy-service-gdzs8-fklcc:443/proxy/: test (200; 4.270397ms) +Mar 7 02:29:25.984: INFO: (9) /api/v1/namespaces/proxy-4438/pods/proxy-service-gdzs8-fklcc:160/proxy/: foo (200; 4.617575ms) +Mar 7 02:29:25.984: INFO: (9) /api/v1/namespaces/proxy-4438/pods/proxy-service-gdzs8-fklcc:1080/proxy/: test<... (200; 4.722754ms) +Mar 7 02:29:25.985: INFO: (9) /api/v1/namespaces/proxy-4438/pods/http:proxy-service-gdzs8-fklcc:1080/proxy/: ... (200; 5.333565ms) +Mar 7 02:29:25.985: INFO: (9) /api/v1/namespaces/proxy-4438/pods/https:proxy-service-gdzs8-fklcc:443/proxy/: test<... (200; 3.633587ms) +Mar 7 02:29:25.994: INFO: (10) /api/v1/namespaces/proxy-4438/pods/proxy-service-gdzs8-fklcc:162/proxy/: bar (200; 4.375734ms) +Mar 7 02:29:25.994: INFO: (10) /api/v1/namespaces/proxy-4438/pods/proxy-service-gdzs8-fklcc/proxy/: test (200; 4.469642ms) +Mar 7 02:29:25.994: INFO: (10) /api/v1/namespaces/proxy-4438/pods/http:proxy-service-gdzs8-fklcc:162/proxy/: bar (200; 4.734113ms) +Mar 7 02:29:25.995: INFO: (10) /api/v1/namespaces/proxy-4438/pods/http:proxy-service-gdzs8-fklcc:1080/proxy/: ... (200; 5.282509ms) +Mar 7 02:29:25.996: INFO: (10) /api/v1/namespaces/proxy-4438/pods/proxy-service-gdzs8-fklcc:160/proxy/: foo (200; 6.304552ms) +Mar 7 02:29:25.996: INFO: (10) /api/v1/namespaces/proxy-4438/pods/https:proxy-service-gdzs8-fklcc:443/proxy/: ... (200; 6.868261ms) +Mar 7 02:29:26.006: INFO: (11) /api/v1/namespaces/proxy-4438/pods/proxy-service-gdzs8-fklcc:1080/proxy/: test<... (200; 6.971366ms) +Mar 7 02:29:26.006: INFO: (11) /api/v1/namespaces/proxy-4438/pods/proxy-service-gdzs8-fklcc/proxy/: test (200; 7.046594ms) +Mar 7 02:29:26.006: INFO: (11) /api/v1/namespaces/proxy-4438/services/https:proxy-service-gdzs8:tlsportname1/proxy/: tls baz (200; 7.464785ms) +Mar 7 02:29:26.006: INFO: (11) /api/v1/namespaces/proxy-4438/pods/http:proxy-service-gdzs8-fklcc:160/proxy/: foo (200; 7.253844ms) +Mar 7 02:29:26.006: INFO: (11) /api/v1/namespaces/proxy-4438/pods/http:proxy-service-gdzs8-fklcc:162/proxy/: bar (200; 7.070568ms) +Mar 7 02:29:26.006: INFO: (11) /api/v1/namespaces/proxy-4438/services/proxy-service-gdzs8:portname1/proxy/: foo (200; 7.601581ms) +Mar 7 02:29:26.007: INFO: (11) /api/v1/namespaces/proxy-4438/services/proxy-service-gdzs8:portname2/proxy/: bar (200; 7.954016ms) +Mar 7 02:29:26.007: INFO: (11) /api/v1/namespaces/proxy-4438/services/http:proxy-service-gdzs8:portname2/proxy/: bar (200; 7.63438ms) +Mar 7 02:29:26.007: INFO: (11) /api/v1/namespaces/proxy-4438/services/http:proxy-service-gdzs8:portname1/proxy/: foo (200; 7.954493ms) +Mar 7 02:29:26.007: INFO: (11) /api/v1/namespaces/proxy-4438/services/https:proxy-service-gdzs8:tlsportname2/proxy/: tls qux (200; 8.272045ms) +Mar 7 02:29:26.011: INFO: (12) /api/v1/namespaces/proxy-4438/pods/proxy-service-gdzs8-fklcc:160/proxy/: foo (200; 4.372915ms) +Mar 7 02:29:26.013: INFO: (12) /api/v1/namespaces/proxy-4438/pods/http:proxy-service-gdzs8-fklcc:162/proxy/: bar (200; 5.999402ms) +Mar 7 02:29:26.013: INFO: (12) /api/v1/namespaces/proxy-4438/pods/https:proxy-service-gdzs8-fklcc:443/proxy/: test<... 
(200; 5.981458ms) +Mar 7 02:29:26.013: INFO: (12) /api/v1/namespaces/proxy-4438/pods/https:proxy-service-gdzs8-fklcc:462/proxy/: tls qux (200; 6.154607ms) +Mar 7 02:29:26.013: INFO: (12) /api/v1/namespaces/proxy-4438/pods/proxy-service-gdzs8-fklcc/proxy/: test (200; 6.159429ms) +Mar 7 02:29:26.014: INFO: (12) /api/v1/namespaces/proxy-4438/pods/http:proxy-service-gdzs8-fklcc:1080/proxy/: ... (200; 6.166376ms) +Mar 7 02:29:26.014: INFO: (12) /api/v1/namespaces/proxy-4438/pods/https:proxy-service-gdzs8-fklcc:460/proxy/: tls baz (200; 6.309153ms) +Mar 7 02:29:26.014: INFO: (12) /api/v1/namespaces/proxy-4438/pods/proxy-service-gdzs8-fklcc:162/proxy/: bar (200; 6.763726ms) +Mar 7 02:29:26.015: INFO: (12) /api/v1/namespaces/proxy-4438/services/http:proxy-service-gdzs8:portname2/proxy/: bar (200; 8.273433ms) +Mar 7 02:29:26.017: INFO: (12) /api/v1/namespaces/proxy-4438/services/proxy-service-gdzs8:portname1/proxy/: foo (200; 9.236181ms) +Mar 7 02:29:26.017: INFO: (12) /api/v1/namespaces/proxy-4438/services/https:proxy-service-gdzs8:tlsportname2/proxy/: tls qux (200; 9.626587ms) +Mar 7 02:29:26.017: INFO: (12) /api/v1/namespaces/proxy-4438/services/http:proxy-service-gdzs8:portname1/proxy/: foo (200; 10.214472ms) +Mar 7 02:29:26.018: INFO: (12) /api/v1/namespaces/proxy-4438/services/https:proxy-service-gdzs8:tlsportname1/proxy/: tls baz (200; 10.240047ms) +Mar 7 02:29:26.018: INFO: (12) /api/v1/namespaces/proxy-4438/services/proxy-service-gdzs8:portname2/proxy/: bar (200; 10.664726ms) +Mar 7 02:29:26.024: INFO: (13) /api/v1/namespaces/proxy-4438/pods/proxy-service-gdzs8-fklcc/proxy/: test (200; 5.949998ms) +Mar 7 02:29:26.025: INFO: (13) /api/v1/namespaces/proxy-4438/pods/http:proxy-service-gdzs8-fklcc:1080/proxy/: ... (200; 6.937511ms) +Mar 7 02:29:26.026: INFO: (13) /api/v1/namespaces/proxy-4438/pods/proxy-service-gdzs8-fklcc:160/proxy/: foo (200; 7.301987ms) +Mar 7 02:29:26.026: INFO: (13) /api/v1/namespaces/proxy-4438/pods/https:proxy-service-gdzs8-fklcc:443/proxy/: test<... (200; 15.039271ms) +Mar 7 02:29:26.033: INFO: (13) /api/v1/namespaces/proxy-4438/services/proxy-service-gdzs8:portname2/proxy/: bar (200; 15.148969ms) +Mar 7 02:29:26.034: INFO: (13) /api/v1/namespaces/proxy-4438/pods/https:proxy-service-gdzs8-fklcc:460/proxy/: tls baz (200; 15.235921ms) +Mar 7 02:29:26.034: INFO: (13) /api/v1/namespaces/proxy-4438/services/http:proxy-service-gdzs8:portname1/proxy/: foo (200; 15.183756ms) +Mar 7 02:29:26.034: INFO: (13) /api/v1/namespaces/proxy-4438/services/proxy-service-gdzs8:portname1/proxy/: foo (200; 15.303803ms) +Mar 7 02:29:26.034: INFO: (13) /api/v1/namespaces/proxy-4438/pods/http:proxy-service-gdzs8-fklcc:162/proxy/: bar (200; 15.423839ms) +Mar 7 02:29:26.034: INFO: (13) /api/v1/namespaces/proxy-4438/services/http:proxy-service-gdzs8:portname2/proxy/: bar (200; 15.946971ms) +Mar 7 02:29:26.034: INFO: (13) /api/v1/namespaces/proxy-4438/services/https:proxy-service-gdzs8:tlsportname2/proxy/: tls qux (200; 16.062925ms) +Mar 7 02:29:26.044: INFO: (13) /api/v1/namespaces/proxy-4438/services/https:proxy-service-gdzs8:tlsportname1/proxy/: tls baz (200; 25.336473ms) +Mar 7 02:29:26.049: INFO: (14) /api/v1/namespaces/proxy-4438/pods/http:proxy-service-gdzs8-fklcc:1080/proxy/: ... 
(200; 5.185474ms) +Mar 7 02:29:26.050: INFO: (14) /api/v1/namespaces/proxy-4438/pods/https:proxy-service-gdzs8-fklcc:462/proxy/: tls qux (200; 6.15971ms) +Mar 7 02:29:26.050: INFO: (14) /api/v1/namespaces/proxy-4438/pods/proxy-service-gdzs8-fklcc/proxy/: test (200; 6.452399ms) +Mar 7 02:29:26.050: INFO: (14) /api/v1/namespaces/proxy-4438/pods/proxy-service-gdzs8-fklcc:1080/proxy/: test<... (200; 6.472917ms) +Mar 7 02:29:26.050: INFO: (14) /api/v1/namespaces/proxy-4438/pods/http:proxy-service-gdzs8-fklcc:160/proxy/: foo (200; 6.581206ms) +Mar 7 02:29:26.051: INFO: (14) /api/v1/namespaces/proxy-4438/pods/proxy-service-gdzs8-fklcc:160/proxy/: foo (200; 7.036448ms) +Mar 7 02:29:26.051: INFO: (14) /api/v1/namespaces/proxy-4438/pods/https:proxy-service-gdzs8-fklcc:443/proxy/: test (200; 13.024021ms) +Mar 7 02:29:26.077: INFO: (15) /api/v1/namespaces/proxy-4438/pods/https:proxy-service-gdzs8-fklcc:460/proxy/: tls baz (200; 14.10737ms) +Mar 7 02:29:26.078: INFO: (15) /api/v1/namespaces/proxy-4438/pods/https:proxy-service-gdzs8-fklcc:443/proxy/: ... (200; 15.235998ms) +Mar 7 02:29:26.079: INFO: (15) /api/v1/namespaces/proxy-4438/pods/proxy-service-gdzs8-fklcc:162/proxy/: bar (200; 15.408081ms) +Mar 7 02:29:26.079: INFO: (15) /api/v1/namespaces/proxy-4438/pods/proxy-service-gdzs8-fklcc:1080/proxy/: test<... (200; 15.277555ms) +Mar 7 02:29:26.079: INFO: (15) /api/v1/namespaces/proxy-4438/services/https:proxy-service-gdzs8:tlsportname2/proxy/: tls qux (200; 15.674121ms) +Mar 7 02:29:26.079: INFO: (15) /api/v1/namespaces/proxy-4438/services/http:proxy-service-gdzs8:portname2/proxy/: bar (200; 15.748096ms) +Mar 7 02:29:26.079: INFO: (15) /api/v1/namespaces/proxy-4438/services/http:proxy-service-gdzs8:portname1/proxy/: foo (200; 16.139069ms) +Mar 7 02:29:26.080: INFO: (15) /api/v1/namespaces/proxy-4438/services/proxy-service-gdzs8:portname1/proxy/: foo (200; 17.152016ms) +Mar 7 02:29:26.080: INFO: (15) /api/v1/namespaces/proxy-4438/services/https:proxy-service-gdzs8:tlsportname1/proxy/: tls baz (200; 17.182127ms) +Mar 7 02:29:26.091: INFO: (15) /api/v1/namespaces/proxy-4438/services/proxy-service-gdzs8:portname2/proxy/: bar (200; 28.2138ms) +Mar 7 02:29:26.095: INFO: (16) /api/v1/namespaces/proxy-4438/pods/http:proxy-service-gdzs8-fklcc:162/proxy/: bar (200; 3.796073ms) +Mar 7 02:29:26.095: INFO: (16) /api/v1/namespaces/proxy-4438/pods/http:proxy-service-gdzs8-fklcc:160/proxy/: foo (200; 3.914508ms) +Mar 7 02:29:26.096: INFO: (16) /api/v1/namespaces/proxy-4438/pods/https:proxy-service-gdzs8-fklcc:462/proxy/: tls qux (200; 3.990114ms) +Mar 7 02:29:26.100: INFO: (16) /api/v1/namespaces/proxy-4438/pods/proxy-service-gdzs8-fklcc:160/proxy/: foo (200; 8.412449ms) +Mar 7 02:29:26.100: INFO: (16) /api/v1/namespaces/proxy-4438/pods/http:proxy-service-gdzs8-fklcc:1080/proxy/: ... (200; 8.494097ms) +Mar 7 02:29:26.100: INFO: (16) /api/v1/namespaces/proxy-4438/pods/proxy-service-gdzs8-fklcc:1080/proxy/: test<... 
(200; 8.572614ms) +Mar 7 02:29:26.101: INFO: (16) /api/v1/namespaces/proxy-4438/pods/https:proxy-service-gdzs8-fklcc:443/proxy/: test (200; 8.937725ms) +Mar 7 02:29:26.101: INFO: (16) /api/v1/namespaces/proxy-4438/pods/proxy-service-gdzs8-fklcc:162/proxy/: bar (200; 8.562816ms) +Mar 7 02:29:26.101: INFO: (16) /api/v1/namespaces/proxy-4438/services/proxy-service-gdzs8:portname2/proxy/: bar (200; 9.23149ms) +Mar 7 02:29:26.102: INFO: (16) /api/v1/namespaces/proxy-4438/services/proxy-service-gdzs8:portname1/proxy/: foo (200; 9.735418ms) +Mar 7 02:29:26.102: INFO: (16) /api/v1/namespaces/proxy-4438/services/https:proxy-service-gdzs8:tlsportname1/proxy/: tls baz (200; 9.882158ms) +Mar 7 02:29:26.102: INFO: (16) /api/v1/namespaces/proxy-4438/services/http:proxy-service-gdzs8:portname2/proxy/: bar (200; 10.30345ms) +Mar 7 02:29:26.102: INFO: (16) /api/v1/namespaces/proxy-4438/services/https:proxy-service-gdzs8:tlsportname2/proxy/: tls qux (200; 10.148559ms) +Mar 7 02:29:26.103: INFO: (16) /api/v1/namespaces/proxy-4438/services/http:proxy-service-gdzs8:portname1/proxy/: foo (200; 11.072794ms) +Mar 7 02:29:26.110: INFO: (17) /api/v1/namespaces/proxy-4438/pods/proxy-service-gdzs8-fklcc:160/proxy/: foo (200; 6.822086ms) +Mar 7 02:29:26.110: INFO: (17) /api/v1/namespaces/proxy-4438/pods/http:proxy-service-gdzs8-fklcc:1080/proxy/: ... (200; 6.571182ms) +Mar 7 02:29:26.110: INFO: (17) /api/v1/namespaces/proxy-4438/pods/proxy-service-gdzs8-fklcc:1080/proxy/: test<... (200; 6.662597ms) +Mar 7 02:29:26.110: INFO: (17) /api/v1/namespaces/proxy-4438/pods/proxy-service-gdzs8-fklcc/proxy/: test (200; 6.860038ms) +Mar 7 02:29:26.110: INFO: (17) /api/v1/namespaces/proxy-4438/pods/http:proxy-service-gdzs8-fklcc:162/proxy/: bar (200; 6.897222ms) +Mar 7 02:29:26.111: INFO: (17) /api/v1/namespaces/proxy-4438/pods/https:proxy-service-gdzs8-fklcc:443/proxy/: ... (200; 4.685792ms) +Mar 7 02:29:26.120: INFO: (18) /api/v1/namespaces/proxy-4438/pods/proxy-service-gdzs8-fklcc:1080/proxy/: test<... 
(200; 5.028865ms) +Mar 7 02:29:26.120: INFO: (18) /api/v1/namespaces/proxy-4438/services/proxy-service-gdzs8:portname1/proxy/: foo (200; 5.552022ms) +Mar 7 02:29:26.120: INFO: (18) /api/v1/namespaces/proxy-4438/pods/proxy-service-gdzs8-fklcc:160/proxy/: foo (200; 5.259554ms) +Mar 7 02:29:26.122: INFO: (18) /api/v1/namespaces/proxy-4438/pods/https:proxy-service-gdzs8-fklcc:460/proxy/: tls baz (200; 6.112451ms) +Mar 7 02:29:26.122: INFO: (18) /api/v1/namespaces/proxy-4438/services/http:proxy-service-gdzs8:portname2/proxy/: bar (200; 6.977853ms) +Mar 7 02:29:26.122: INFO: (18) /api/v1/namespaces/proxy-4438/services/http:proxy-service-gdzs8:portname1/proxy/: foo (200; 6.540264ms) +Mar 7 02:29:26.122: INFO: (18) /api/v1/namespaces/proxy-4438/services/https:proxy-service-gdzs8:tlsportname2/proxy/: tls qux (200; 6.949365ms) +Mar 7 02:29:26.122: INFO: (18) /api/v1/namespaces/proxy-4438/pods/proxy-service-gdzs8-fklcc/proxy/: test (200; 6.405713ms) +Mar 7 02:29:26.122: INFO: (18) /api/v1/namespaces/proxy-4438/pods/http:proxy-service-gdzs8-fklcc:162/proxy/: bar (200; 6.513256ms) +Mar 7 02:29:26.123: INFO: (18) /api/v1/namespaces/proxy-4438/services/proxy-service-gdzs8:portname2/proxy/: bar (200; 8.532676ms) +Mar 7 02:29:26.123: INFO: (18) /api/v1/namespaces/proxy-4438/pods/https:proxy-service-gdzs8-fklcc:462/proxy/: tls qux (200; 7.915724ms) +Mar 7 02:29:26.123: INFO: (18) /api/v1/namespaces/proxy-4438/services/https:proxy-service-gdzs8:tlsportname1/proxy/: tls baz (200; 8.47938ms) +Mar 7 02:29:26.124: INFO: (18) /api/v1/namespaces/proxy-4438/pods/proxy-service-gdzs8-fklcc:162/proxy/: bar (200; 8.227191ms) +Mar 7 02:29:26.128: INFO: (19) /api/v1/namespaces/proxy-4438/pods/http:proxy-service-gdzs8-fklcc:162/proxy/: bar (200; 4.460578ms) +Mar 7 02:29:26.128: INFO: (19) /api/v1/namespaces/proxy-4438/pods/proxy-service-gdzs8-fklcc:1080/proxy/: test<... (200; 4.535919ms) +Mar 7 02:29:26.128: INFO: (19) /api/v1/namespaces/proxy-4438/pods/http:proxy-service-gdzs8-fklcc:160/proxy/: foo (200; 4.71383ms) +Mar 7 02:29:26.128: INFO: (19) /api/v1/namespaces/proxy-4438/pods/proxy-service-gdzs8-fklcc/proxy/: test (200; 4.652765ms) +Mar 7 02:29:26.128: INFO: (19) /api/v1/namespaces/proxy-4438/pods/proxy-service-gdzs8-fklcc:162/proxy/: bar (200; 4.632076ms) +Mar 7 02:29:26.129: INFO: (19) /api/v1/namespaces/proxy-4438/pods/http:proxy-service-gdzs8-fklcc:1080/proxy/: ... 
(200; 5.335183ms) +Mar 7 02:29:26.129: INFO: (19) /api/v1/namespaces/proxy-4438/pods/https:proxy-service-gdzs8-fklcc:460/proxy/: tls baz (200; 5.394959ms) +Mar 7 02:29:26.129: INFO: (19) /api/v1/namespaces/proxy-4438/pods/https:proxy-service-gdzs8-fklcc:443/proxy/: > + [BeforeEach] version v1 + test/e2e/framework/framework.go:186 + STEP: Creating a kubernetes client 03/07/23 02:29:23.799 + Mar 7 02:29:23.799: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 + STEP: Building a namespace api object, basename proxy 03/07/23 02:29:23.8 + STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 02:29:23.816 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 02:29:23.819 + [It] should proxy through a service and a pod [Conformance] + test/e2e/network/proxy.go:101 + STEP: starting an echo server on multiple ports 03/07/23 02:29:23.83 + STEP: creating replication controller proxy-service-gdzs8 in namespace proxy-4438 03/07/23 02:29:23.831 + I0307 02:29:23.838844 22 runners.go:193] Created replication controller with name: proxy-service-gdzs8, namespace: proxy-4438, replica count: 1 + I0307 02:29:24.890155 22 runners.go:193] proxy-service-gdzs8 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady + I0307 02:29:25.891002 22 runners.go:193] proxy-service-gdzs8 Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady + Mar 7 02:29:25.893: INFO: setup took 2.072886258s, starting test cases + STEP: running 16 cases, 20 attempts per case, 320 total attempts 03/07/23 02:29:25.893 + Mar 7 02:29:25.901: INFO: (0) /api/v1/namespaces/proxy-4438/pods/proxy-service-gdzs8-fklcc:162/proxy/: bar (200; 7.998039ms) + Mar 7 02:29:25.902: INFO: (0) /api/v1/namespaces/proxy-4438/pods/http:proxy-service-gdzs8-fklcc:162/proxy/: bar (200; 8.484583ms) + Mar 7 02:29:25.902: INFO: (0) /api/v1/namespaces/proxy-4438/pods/proxy-service-gdzs8-fklcc/proxy/: test (200; 8.525524ms) + Mar 7 02:29:25.902: INFO: (0) /api/v1/namespaces/proxy-4438/pods/proxy-service-gdzs8-fklcc:160/proxy/: foo (200; 8.385479ms) + Mar 7 02:29:25.906: INFO: (0) /api/v1/namespaces/proxy-4438/services/proxy-service-gdzs8:portname1/proxy/: foo (200; 12.935126ms) + Mar 7 02:29:25.906: INFO: (0) /api/v1/namespaces/proxy-4438/pods/http:proxy-service-gdzs8-fklcc:1080/proxy/: ... (200; 12.949903ms) + Mar 7 02:29:25.906: INFO: (0) /api/v1/namespaces/proxy-4438/services/http:proxy-service-gdzs8:portname2/proxy/: bar (200; 12.921702ms) + Mar 7 02:29:25.906: INFO: (0) /api/v1/namespaces/proxy-4438/pods/http:proxy-service-gdzs8-fklcc:160/proxy/: foo (200; 12.70324ms) + Mar 7 02:29:25.906: INFO: (0) /api/v1/namespaces/proxy-4438/services/proxy-service-gdzs8:portname2/proxy/: bar (200; 12.984327ms) + Mar 7 02:29:25.906: INFO: (0) /api/v1/namespaces/proxy-4438/pods/https:proxy-service-gdzs8-fklcc:462/proxy/: tls qux (200; 12.90708ms) + Mar 7 02:29:25.906: INFO: (0) /api/v1/namespaces/proxy-4438/pods/https:proxy-service-gdzs8-fklcc:443/proxy/: test<... 
(200; 13.107449ms) + Mar 7 02:29:25.906: INFO: (0) /api/v1/namespaces/proxy-4438/services/http:proxy-service-gdzs8:portname1/proxy/: foo (200; 12.924746ms) + Mar 7 02:29:25.907: INFO: (0) /api/v1/namespaces/proxy-4438/services/https:proxy-service-gdzs8:tlsportname2/proxy/: tls qux (200; 13.067094ms) + Mar 7 02:29:25.907: INFO: (0) /api/v1/namespaces/proxy-4438/services/https:proxy-service-gdzs8:tlsportname1/proxy/: tls baz (200; 13.277073ms) + Mar 7 02:29:25.911: INFO: (1) /api/v1/namespaces/proxy-4438/pods/proxy-service-gdzs8-fklcc/proxy/: test (200; 3.99465ms) + Mar 7 02:29:25.911: INFO: (1) /api/v1/namespaces/proxy-4438/pods/http:proxy-service-gdzs8-fklcc:160/proxy/: foo (200; 4.051253ms) + Mar 7 02:29:25.911: INFO: (1) /api/v1/namespaces/proxy-4438/pods/https:proxy-service-gdzs8-fklcc:460/proxy/: tls baz (200; 4.358229ms) + Mar 7 02:29:25.912: INFO: (1) /api/v1/namespaces/proxy-4438/pods/proxy-service-gdzs8-fklcc:1080/proxy/: test<... (200; 5.631064ms) + Mar 7 02:29:25.912: INFO: (1) /api/v1/namespaces/proxy-4438/services/proxy-service-gdzs8:portname2/proxy/: bar (200; 5.745548ms) + Mar 7 02:29:25.913: INFO: (1) /api/v1/namespaces/proxy-4438/pods/proxy-service-gdzs8-fklcc:160/proxy/: foo (200; 5.733239ms) + Mar 7 02:29:25.913: INFO: (1) /api/v1/namespaces/proxy-4438/pods/http:proxy-service-gdzs8-fklcc:1080/proxy/: ... (200; 5.902644ms) + Mar 7 02:29:25.913: INFO: (1) /api/v1/namespaces/proxy-4438/pods/http:proxy-service-gdzs8-fklcc:162/proxy/: bar (200; 6.413546ms) + Mar 7 02:29:25.913: INFO: (1) /api/v1/namespaces/proxy-4438/pods/proxy-service-gdzs8-fklcc:162/proxy/: bar (200; 6.511017ms) + Mar 7 02:29:25.914: INFO: (1) /api/v1/namespaces/proxy-4438/services/https:proxy-service-gdzs8:tlsportname2/proxy/: tls qux (200; 6.962417ms) + Mar 7 02:29:25.914: INFO: (1) /api/v1/namespaces/proxy-4438/pods/https:proxy-service-gdzs8-fklcc:462/proxy/: tls qux (200; 6.822937ms) + Mar 7 02:29:25.914: INFO: (1) /api/v1/namespaces/proxy-4438/pods/https:proxy-service-gdzs8-fklcc:443/proxy/: test (200; 2.202316ms) + Mar 7 02:29:25.917: INFO: (2) /api/v1/namespaces/proxy-4438/pods/http:proxy-service-gdzs8-fklcc:162/proxy/: bar (200; 2.631428ms) + Mar 7 02:29:25.918: INFO: (2) /api/v1/namespaces/proxy-4438/pods/https:proxy-service-gdzs8-fklcc:462/proxy/: tls qux (200; 2.958763ms) + Mar 7 02:29:25.918: INFO: (2) /api/v1/namespaces/proxy-4438/pods/http:proxy-service-gdzs8-fklcc:160/proxy/: foo (200; 3.731829ms) + Mar 7 02:29:25.919: INFO: (2) /api/v1/namespaces/proxy-4438/pods/proxy-service-gdzs8-fklcc:160/proxy/: foo (200; 4.482962ms) + Mar 7 02:29:25.919: INFO: (2) /api/v1/namespaces/proxy-4438/pods/proxy-service-gdzs8-fklcc:1080/proxy/: test<... (200; 4.733444ms) + Mar 7 02:29:25.920: INFO: (2) /api/v1/namespaces/proxy-4438/pods/proxy-service-gdzs8-fklcc:162/proxy/: bar (200; 4.560124ms) + Mar 7 02:29:25.920: INFO: (2) /api/v1/namespaces/proxy-4438/pods/https:proxy-service-gdzs8-fklcc:443/proxy/: ... 
(200; 5.128502ms) + Mar 7 02:29:25.920: INFO: (2) /api/v1/namespaces/proxy-4438/services/https:proxy-service-gdzs8:tlsportname1/proxy/: tls baz (200; 4.862281ms) + Mar 7 02:29:25.920: INFO: (2) /api/v1/namespaces/proxy-4438/pods/https:proxy-service-gdzs8-fklcc:460/proxy/: tls baz (200; 4.958616ms) + Mar 7 02:29:25.921: INFO: (2) /api/v1/namespaces/proxy-4438/services/http:proxy-service-gdzs8:portname2/proxy/: bar (200; 6.498805ms) + Mar 7 02:29:25.921: INFO: (2) /api/v1/namespaces/proxy-4438/services/proxy-service-gdzs8:portname1/proxy/: foo (200; 6.240476ms) + Mar 7 02:29:25.922: INFO: (2) /api/v1/namespaces/proxy-4438/services/http:proxy-service-gdzs8:portname1/proxy/: foo (200; 6.760615ms) + Mar 7 02:29:25.922: INFO: (2) /api/v1/namespaces/proxy-4438/services/proxy-service-gdzs8:portname2/proxy/: bar (200; 6.926158ms) + Mar 7 02:29:25.922: INFO: (2) /api/v1/namespaces/proxy-4438/services/https:proxy-service-gdzs8:tlsportname2/proxy/: tls qux (200; 6.863935ms) + Mar 7 02:29:25.933: INFO: (3) /api/v1/namespaces/proxy-4438/pods/http:proxy-service-gdzs8-fklcc:162/proxy/: bar (200; 10.739905ms) + Mar 7 02:29:25.933: INFO: (3) /api/v1/namespaces/proxy-4438/pods/proxy-service-gdzs8-fklcc:160/proxy/: foo (200; 11.039716ms) + Mar 7 02:29:25.933: INFO: (3) /api/v1/namespaces/proxy-4438/pods/proxy-service-gdzs8-fklcc:1080/proxy/: test<... (200; 11.064822ms) + Mar 7 02:29:25.933: INFO: (3) /api/v1/namespaces/proxy-4438/pods/proxy-service-gdzs8-fklcc/proxy/: test (200; 11.195206ms) + Mar 7 02:29:25.933: INFO: (3) /api/v1/namespaces/proxy-4438/pods/proxy-service-gdzs8-fklcc:162/proxy/: bar (200; 11.032607ms) + Mar 7 02:29:25.933: INFO: (3) /api/v1/namespaces/proxy-4438/pods/http:proxy-service-gdzs8-fklcc:1080/proxy/: ... (200; 11.113099ms) + Mar 7 02:29:25.933: INFO: (3) /api/v1/namespaces/proxy-4438/pods/http:proxy-service-gdzs8-fklcc:160/proxy/: foo (200; 11.18441ms) + Mar 7 02:29:25.933: INFO: (3) /api/v1/namespaces/proxy-4438/pods/https:proxy-service-gdzs8-fklcc:443/proxy/: ... (200; 5.859522ms) + Mar 7 02:29:25.944: INFO: (4) /api/v1/namespaces/proxy-4438/services/proxy-service-gdzs8:portname1/proxy/: foo (200; 5.994354ms) + Mar 7 02:29:25.944: INFO: (4) /api/v1/namespaces/proxy-4438/pods/proxy-service-gdzs8-fklcc:1080/proxy/: test<... 
(200; 5.836747ms) + Mar 7 02:29:25.944: INFO: (4) /api/v1/namespaces/proxy-4438/pods/proxy-service-gdzs8-fklcc:160/proxy/: foo (200; 5.889892ms) + Mar 7 02:29:25.944: INFO: (4) /api/v1/namespaces/proxy-4438/pods/https:proxy-service-gdzs8-fklcc:460/proxy/: tls baz (200; 5.85434ms) + Mar 7 02:29:25.944: INFO: (4) /api/v1/namespaces/proxy-4438/pods/proxy-service-gdzs8-fklcc/proxy/: test (200; 5.929734ms) + Mar 7 02:29:25.944: INFO: (4) /api/v1/namespaces/proxy-4438/pods/http:proxy-service-gdzs8-fklcc:162/proxy/: bar (200; 5.881151ms) + Mar 7 02:29:25.944: INFO: (4) /api/v1/namespaces/proxy-4438/pods/proxy-service-gdzs8-fklcc:162/proxy/: bar (200; 6.14705ms) + Mar 7 02:29:25.944: INFO: (4) /api/v1/namespaces/proxy-4438/pods/http:proxy-service-gdzs8-fklcc:160/proxy/: foo (200; 6.791471ms) + Mar 7 02:29:25.945: INFO: (4) /api/v1/namespaces/proxy-4438/services/https:proxy-service-gdzs8:tlsportname1/proxy/: tls baz (200; 7.758978ms) + Mar 7 02:29:25.946: INFO: (4) /api/v1/namespaces/proxy-4438/services/http:proxy-service-gdzs8:portname1/proxy/: foo (200; 7.85881ms) + Mar 7 02:29:25.946: INFO: (4) /api/v1/namespaces/proxy-4438/services/http:proxy-service-gdzs8:portname2/proxy/: bar (200; 8.001281ms) + Mar 7 02:29:25.946: INFO: (4) /api/v1/namespaces/proxy-4438/services/https:proxy-service-gdzs8:tlsportname2/proxy/: tls qux (200; 8.389695ms) + Mar 7 02:29:25.946: INFO: (4) /api/v1/namespaces/proxy-4438/services/proxy-service-gdzs8:portname2/proxy/: bar (200; 8.705256ms) + Mar 7 02:29:25.949: INFO: (5) /api/v1/namespaces/proxy-4438/pods/https:proxy-service-gdzs8-fklcc:462/proxy/: tls qux (200; 2.592093ms) + Mar 7 02:29:25.949: INFO: (5) /api/v1/namespaces/proxy-4438/pods/proxy-service-gdzs8-fklcc:162/proxy/: bar (200; 2.983039ms) + Mar 7 02:29:25.951: INFO: (5) /api/v1/namespaces/proxy-4438/pods/https:proxy-service-gdzs8-fklcc:460/proxy/: tls baz (200; 4.296198ms) + Mar 7 02:29:25.951: INFO: (5) /api/v1/namespaces/proxy-4438/pods/https:proxy-service-gdzs8-fklcc:443/proxy/: ... (200; 5.524246ms) + Mar 7 02:29:25.953: INFO: (5) /api/v1/namespaces/proxy-4438/services/proxy-service-gdzs8:portname2/proxy/: bar (200; 6.083369ms) + Mar 7 02:29:25.953: INFO: (5) /api/v1/namespaces/proxy-4438/pods/http:proxy-service-gdzs8-fklcc:160/proxy/: foo (200; 6.306184ms) + Mar 7 02:29:25.953: INFO: (5) /api/v1/namespaces/proxy-4438/pods/http:proxy-service-gdzs8-fklcc:162/proxy/: bar (200; 6.022785ms) + Mar 7 02:29:25.953: INFO: (5) /api/v1/namespaces/proxy-4438/pods/proxy-service-gdzs8-fklcc:160/proxy/: foo (200; 6.142211ms) + Mar 7 02:29:25.953: INFO: (5) /api/v1/namespaces/proxy-4438/pods/proxy-service-gdzs8-fklcc:1080/proxy/: test<... (200; 6.273426ms) + Mar 7 02:29:25.954: INFO: (5) /api/v1/namespaces/proxy-4438/services/http:proxy-service-gdzs8:portname1/proxy/: foo (200; 6.680499ms) + Mar 7 02:29:25.954: INFO: (5) /api/v1/namespaces/proxy-4438/pods/proxy-service-gdzs8-fklcc/proxy/: test (200; 6.56849ms) + Mar 7 02:29:25.954: INFO: (5) /api/v1/namespaces/proxy-4438/services/http:proxy-service-gdzs8:portname2/proxy/: bar (200; 7.138731ms) + Mar 7 02:29:25.954: INFO: (5) /api/v1/namespaces/proxy-4438/services/proxy-service-gdzs8:portname1/proxy/: foo (200; 7.483364ms) + Mar 7 02:29:25.955: INFO: (5) /api/v1/namespaces/proxy-4438/services/https:proxy-service-gdzs8:tlsportname2/proxy/: tls qux (200; 7.927144ms) + Mar 7 02:29:25.957: INFO: (6) /api/v1/namespaces/proxy-4438/pods/https:proxy-service-gdzs8-fklcc:443/proxy/: test<... 
(200; 4.571208ms) + Mar 7 02:29:25.960: INFO: (6) /api/v1/namespaces/proxy-4438/pods/http:proxy-service-gdzs8-fklcc:160/proxy/: foo (200; 4.70885ms) + Mar 7 02:29:25.960: INFO: (6) /api/v1/namespaces/proxy-4438/pods/https:proxy-service-gdzs8-fklcc:462/proxy/: tls qux (200; 4.380602ms) + Mar 7 02:29:25.960: INFO: (6) /api/v1/namespaces/proxy-4438/pods/proxy-service-gdzs8-fklcc/proxy/: test (200; 4.708291ms) + Mar 7 02:29:25.960: INFO: (6) /api/v1/namespaces/proxy-4438/services/http:proxy-service-gdzs8:portname1/proxy/: foo (200; 4.893719ms) + Mar 7 02:29:25.960: INFO: (6) /api/v1/namespaces/proxy-4438/pods/http:proxy-service-gdzs8-fklcc:1080/proxy/: ... (200; 4.675659ms) + Mar 7 02:29:25.961: INFO: (6) /api/v1/namespaces/proxy-4438/services/proxy-service-gdzs8:portname2/proxy/: bar (200; 5.649335ms) + Mar 7 02:29:25.961: INFO: (6) /api/v1/namespaces/proxy-4438/services/proxy-service-gdzs8:portname1/proxy/: foo (200; 5.742089ms) + Mar 7 02:29:25.961: INFO: (6) /api/v1/namespaces/proxy-4438/services/http:proxy-service-gdzs8:portname2/proxy/: bar (200; 6.25064ms) + Mar 7 02:29:25.961: INFO: (6) /api/v1/namespaces/proxy-4438/services/https:proxy-service-gdzs8:tlsportname2/proxy/: tls qux (200; 5.865093ms) + Mar 7 02:29:25.962: INFO: (6) /api/v1/namespaces/proxy-4438/services/https:proxy-service-gdzs8:tlsportname1/proxy/: tls baz (200; 6.798625ms) + Mar 7 02:29:25.967: INFO: (7) /api/v1/namespaces/proxy-4438/pods/http:proxy-service-gdzs8-fklcc:160/proxy/: foo (200; 4.264303ms) + Mar 7 02:29:25.968: INFO: (7) /api/v1/namespaces/proxy-4438/pods/proxy-service-gdzs8-fklcc:1080/proxy/: test<... (200; 4.901159ms) + Mar 7 02:29:25.969: INFO: (7) /api/v1/namespaces/proxy-4438/pods/http:proxy-service-gdzs8-fklcc:1080/proxy/: ... (200; 5.708769ms) + Mar 7 02:29:25.969: INFO: (7) /api/v1/namespaces/proxy-4438/pods/proxy-service-gdzs8-fklcc:162/proxy/: bar (200; 5.662319ms) + Mar 7 02:29:25.969: INFO: (7) /api/v1/namespaces/proxy-4438/pods/http:proxy-service-gdzs8-fklcc:162/proxy/: bar (200; 5.901782ms) + Mar 7 02:29:25.969: INFO: (7) /api/v1/namespaces/proxy-4438/pods/proxy-service-gdzs8-fklcc:160/proxy/: foo (200; 5.997922ms) + Mar 7 02:29:25.969: INFO: (7) /api/v1/namespaces/proxy-4438/pods/proxy-service-gdzs8-fklcc/proxy/: test (200; 5.805222ms) + Mar 7 02:29:25.969: INFO: (7) /api/v1/namespaces/proxy-4438/pods/https:proxy-service-gdzs8-fklcc:462/proxy/: tls qux (200; 6.187157ms) + Mar 7 02:29:25.969: INFO: (7) /api/v1/namespaces/proxy-4438/pods/https:proxy-service-gdzs8-fklcc:460/proxy/: tls baz (200; 6.020673ms) + Mar 7 02:29:25.970: INFO: (7) /api/v1/namespaces/proxy-4438/services/https:proxy-service-gdzs8:tlsportname1/proxy/: tls baz (200; 7.678404ms) + Mar 7 02:29:25.970: INFO: (7) /api/v1/namespaces/proxy-4438/services/http:proxy-service-gdzs8:portname2/proxy/: bar (200; 7.83856ms) + Mar 7 02:29:25.970: INFO: (7) /api/v1/namespaces/proxy-4438/pods/https:proxy-service-gdzs8-fklcc:443/proxy/: test (200; 5.251649ms) + Mar 7 02:29:25.976: INFO: (8) /api/v1/namespaces/proxy-4438/pods/proxy-service-gdzs8-fklcc:1080/proxy/: test<... (200; 5.214231ms) + Mar 7 02:29:25.976: INFO: (8) /api/v1/namespaces/proxy-4438/pods/http:proxy-service-gdzs8-fklcc:162/proxy/: bar (200; 5.095571ms) + Mar 7 02:29:25.977: INFO: (8) /api/v1/namespaces/proxy-4438/pods/http:proxy-service-gdzs8-fklcc:1080/proxy/: ... 
(200; 5.68ms) + Mar 7 02:29:25.977: INFO: (8) /api/v1/namespaces/proxy-4438/pods/https:proxy-service-gdzs8-fklcc:460/proxy/: tls baz (200; 5.381576ms) + Mar 7 02:29:25.977: INFO: (8) /api/v1/namespaces/proxy-4438/pods/http:proxy-service-gdzs8-fklcc:160/proxy/: foo (200; 5.909207ms) + Mar 7 02:29:25.977: INFO: (8) /api/v1/namespaces/proxy-4438/pods/https:proxy-service-gdzs8-fklcc:443/proxy/: test (200; 4.270397ms) + Mar 7 02:29:25.984: INFO: (9) /api/v1/namespaces/proxy-4438/pods/proxy-service-gdzs8-fklcc:160/proxy/: foo (200; 4.617575ms) + Mar 7 02:29:25.984: INFO: (9) /api/v1/namespaces/proxy-4438/pods/proxy-service-gdzs8-fklcc:1080/proxy/: test<... (200; 4.722754ms) + Mar 7 02:29:25.985: INFO: (9) /api/v1/namespaces/proxy-4438/pods/http:proxy-service-gdzs8-fklcc:1080/proxy/: ... (200; 5.333565ms) + Mar 7 02:29:25.985: INFO: (9) /api/v1/namespaces/proxy-4438/pods/https:proxy-service-gdzs8-fklcc:443/proxy/: test<... (200; 3.633587ms) + Mar 7 02:29:25.994: INFO: (10) /api/v1/namespaces/proxy-4438/pods/proxy-service-gdzs8-fklcc:162/proxy/: bar (200; 4.375734ms) + Mar 7 02:29:25.994: INFO: (10) /api/v1/namespaces/proxy-4438/pods/proxy-service-gdzs8-fklcc/proxy/: test (200; 4.469642ms) + Mar 7 02:29:25.994: INFO: (10) /api/v1/namespaces/proxy-4438/pods/http:proxy-service-gdzs8-fklcc:162/proxy/: bar (200; 4.734113ms) + Mar 7 02:29:25.995: INFO: (10) /api/v1/namespaces/proxy-4438/pods/http:proxy-service-gdzs8-fklcc:1080/proxy/: ... (200; 5.282509ms) + Mar 7 02:29:25.996: INFO: (10) /api/v1/namespaces/proxy-4438/pods/proxy-service-gdzs8-fklcc:160/proxy/: foo (200; 6.304552ms) + Mar 7 02:29:25.996: INFO: (10) /api/v1/namespaces/proxy-4438/pods/https:proxy-service-gdzs8-fklcc:443/proxy/: ... (200; 6.868261ms) + Mar 7 02:29:26.006: INFO: (11) /api/v1/namespaces/proxy-4438/pods/proxy-service-gdzs8-fklcc:1080/proxy/: test<... (200; 6.971366ms) + Mar 7 02:29:26.006: INFO: (11) /api/v1/namespaces/proxy-4438/pods/proxy-service-gdzs8-fklcc/proxy/: test (200; 7.046594ms) + Mar 7 02:29:26.006: INFO: (11) /api/v1/namespaces/proxy-4438/services/https:proxy-service-gdzs8:tlsportname1/proxy/: tls baz (200; 7.464785ms) + Mar 7 02:29:26.006: INFO: (11) /api/v1/namespaces/proxy-4438/pods/http:proxy-service-gdzs8-fklcc:160/proxy/: foo (200; 7.253844ms) + Mar 7 02:29:26.006: INFO: (11) /api/v1/namespaces/proxy-4438/pods/http:proxy-service-gdzs8-fklcc:162/proxy/: bar (200; 7.070568ms) + Mar 7 02:29:26.006: INFO: (11) /api/v1/namespaces/proxy-4438/services/proxy-service-gdzs8:portname1/proxy/: foo (200; 7.601581ms) + Mar 7 02:29:26.007: INFO: (11) /api/v1/namespaces/proxy-4438/services/proxy-service-gdzs8:portname2/proxy/: bar (200; 7.954016ms) + Mar 7 02:29:26.007: INFO: (11) /api/v1/namespaces/proxy-4438/services/http:proxy-service-gdzs8:portname2/proxy/: bar (200; 7.63438ms) + Mar 7 02:29:26.007: INFO: (11) /api/v1/namespaces/proxy-4438/services/http:proxy-service-gdzs8:portname1/proxy/: foo (200; 7.954493ms) + Mar 7 02:29:26.007: INFO: (11) /api/v1/namespaces/proxy-4438/services/https:proxy-service-gdzs8:tlsportname2/proxy/: tls qux (200; 8.272045ms) + Mar 7 02:29:26.011: INFO: (12) /api/v1/namespaces/proxy-4438/pods/proxy-service-gdzs8-fklcc:160/proxy/: foo (200; 4.372915ms) + Mar 7 02:29:26.013: INFO: (12) /api/v1/namespaces/proxy-4438/pods/http:proxy-service-gdzs8-fklcc:162/proxy/: bar (200; 5.999402ms) + Mar 7 02:29:26.013: INFO: (12) /api/v1/namespaces/proxy-4438/pods/https:proxy-service-gdzs8-fklcc:443/proxy/: test<... 
(200; 5.981458ms) + Mar 7 02:29:26.013: INFO: (12) /api/v1/namespaces/proxy-4438/pods/https:proxy-service-gdzs8-fklcc:462/proxy/: tls qux (200; 6.154607ms) + Mar 7 02:29:26.013: INFO: (12) /api/v1/namespaces/proxy-4438/pods/proxy-service-gdzs8-fklcc/proxy/: test (200; 6.159429ms) + Mar 7 02:29:26.014: INFO: (12) /api/v1/namespaces/proxy-4438/pods/http:proxy-service-gdzs8-fklcc:1080/proxy/: ... (200; 6.166376ms) + Mar 7 02:29:26.014: INFO: (12) /api/v1/namespaces/proxy-4438/pods/https:proxy-service-gdzs8-fklcc:460/proxy/: tls baz (200; 6.309153ms) + Mar 7 02:29:26.014: INFO: (12) /api/v1/namespaces/proxy-4438/pods/proxy-service-gdzs8-fklcc:162/proxy/: bar (200; 6.763726ms) + Mar 7 02:29:26.015: INFO: (12) /api/v1/namespaces/proxy-4438/services/http:proxy-service-gdzs8:portname2/proxy/: bar (200; 8.273433ms) + Mar 7 02:29:26.017: INFO: (12) /api/v1/namespaces/proxy-4438/services/proxy-service-gdzs8:portname1/proxy/: foo (200; 9.236181ms) + Mar 7 02:29:26.017: INFO: (12) /api/v1/namespaces/proxy-4438/services/https:proxy-service-gdzs8:tlsportname2/proxy/: tls qux (200; 9.626587ms) + Mar 7 02:29:26.017: INFO: (12) /api/v1/namespaces/proxy-4438/services/http:proxy-service-gdzs8:portname1/proxy/: foo (200; 10.214472ms) + Mar 7 02:29:26.018: INFO: (12) /api/v1/namespaces/proxy-4438/services/https:proxy-service-gdzs8:tlsportname1/proxy/: tls baz (200; 10.240047ms) + Mar 7 02:29:26.018: INFO: (12) /api/v1/namespaces/proxy-4438/services/proxy-service-gdzs8:portname2/proxy/: bar (200; 10.664726ms) + Mar 7 02:29:26.024: INFO: (13) /api/v1/namespaces/proxy-4438/pods/proxy-service-gdzs8-fklcc/proxy/: test (200; 5.949998ms) + Mar 7 02:29:26.025: INFO: (13) /api/v1/namespaces/proxy-4438/pods/http:proxy-service-gdzs8-fklcc:1080/proxy/: ... (200; 6.937511ms) + Mar 7 02:29:26.026: INFO: (13) /api/v1/namespaces/proxy-4438/pods/proxy-service-gdzs8-fklcc:160/proxy/: foo (200; 7.301987ms) + Mar 7 02:29:26.026: INFO: (13) /api/v1/namespaces/proxy-4438/pods/https:proxy-service-gdzs8-fklcc:443/proxy/: test<... (200; 15.039271ms) + Mar 7 02:29:26.033: INFO: (13) /api/v1/namespaces/proxy-4438/services/proxy-service-gdzs8:portname2/proxy/: bar (200; 15.148969ms) + Mar 7 02:29:26.034: INFO: (13) /api/v1/namespaces/proxy-4438/pods/https:proxy-service-gdzs8-fklcc:460/proxy/: tls baz (200; 15.235921ms) + Mar 7 02:29:26.034: INFO: (13) /api/v1/namespaces/proxy-4438/services/http:proxy-service-gdzs8:portname1/proxy/: foo (200; 15.183756ms) + Mar 7 02:29:26.034: INFO: (13) /api/v1/namespaces/proxy-4438/services/proxy-service-gdzs8:portname1/proxy/: foo (200; 15.303803ms) + Mar 7 02:29:26.034: INFO: (13) /api/v1/namespaces/proxy-4438/pods/http:proxy-service-gdzs8-fklcc:162/proxy/: bar (200; 15.423839ms) + Mar 7 02:29:26.034: INFO: (13) /api/v1/namespaces/proxy-4438/services/http:proxy-service-gdzs8:portname2/proxy/: bar (200; 15.946971ms) + Mar 7 02:29:26.034: INFO: (13) /api/v1/namespaces/proxy-4438/services/https:proxy-service-gdzs8:tlsportname2/proxy/: tls qux (200; 16.062925ms) + Mar 7 02:29:26.044: INFO: (13) /api/v1/namespaces/proxy-4438/services/https:proxy-service-gdzs8:tlsportname1/proxy/: tls baz (200; 25.336473ms) + Mar 7 02:29:26.049: INFO: (14) /api/v1/namespaces/proxy-4438/pods/http:proxy-service-gdzs8-fklcc:1080/proxy/: ... 
(200; 5.185474ms) + Mar 7 02:29:26.050: INFO: (14) /api/v1/namespaces/proxy-4438/pods/https:proxy-service-gdzs8-fklcc:462/proxy/: tls qux (200; 6.15971ms) + Mar 7 02:29:26.050: INFO: (14) /api/v1/namespaces/proxy-4438/pods/proxy-service-gdzs8-fklcc/proxy/: test (200; 6.452399ms) + Mar 7 02:29:26.050: INFO: (14) /api/v1/namespaces/proxy-4438/pods/proxy-service-gdzs8-fklcc:1080/proxy/: test<... (200; 6.472917ms) + Mar 7 02:29:26.050: INFO: (14) /api/v1/namespaces/proxy-4438/pods/http:proxy-service-gdzs8-fklcc:160/proxy/: foo (200; 6.581206ms) + Mar 7 02:29:26.051: INFO: (14) /api/v1/namespaces/proxy-4438/pods/proxy-service-gdzs8-fklcc:160/proxy/: foo (200; 7.036448ms) + Mar 7 02:29:26.051: INFO: (14) /api/v1/namespaces/proxy-4438/pods/https:proxy-service-gdzs8-fklcc:443/proxy/: test (200; 13.024021ms) + Mar 7 02:29:26.077: INFO: (15) /api/v1/namespaces/proxy-4438/pods/https:proxy-service-gdzs8-fklcc:460/proxy/: tls baz (200; 14.10737ms) + Mar 7 02:29:26.078: INFO: (15) /api/v1/namespaces/proxy-4438/pods/https:proxy-service-gdzs8-fklcc:443/proxy/: ... (200; 15.235998ms) + Mar 7 02:29:26.079: INFO: (15) /api/v1/namespaces/proxy-4438/pods/proxy-service-gdzs8-fklcc:162/proxy/: bar (200; 15.408081ms) + Mar 7 02:29:26.079: INFO: (15) /api/v1/namespaces/proxy-4438/pods/proxy-service-gdzs8-fklcc:1080/proxy/: test<... (200; 15.277555ms) + Mar 7 02:29:26.079: INFO: (15) /api/v1/namespaces/proxy-4438/services/https:proxy-service-gdzs8:tlsportname2/proxy/: tls qux (200; 15.674121ms) + Mar 7 02:29:26.079: INFO: (15) /api/v1/namespaces/proxy-4438/services/http:proxy-service-gdzs8:portname2/proxy/: bar (200; 15.748096ms) + Mar 7 02:29:26.079: INFO: (15) /api/v1/namespaces/proxy-4438/services/http:proxy-service-gdzs8:portname1/proxy/: foo (200; 16.139069ms) + Mar 7 02:29:26.080: INFO: (15) /api/v1/namespaces/proxy-4438/services/proxy-service-gdzs8:portname1/proxy/: foo (200; 17.152016ms) + Mar 7 02:29:26.080: INFO: (15) /api/v1/namespaces/proxy-4438/services/https:proxy-service-gdzs8:tlsportname1/proxy/: tls baz (200; 17.182127ms) + Mar 7 02:29:26.091: INFO: (15) /api/v1/namespaces/proxy-4438/services/proxy-service-gdzs8:portname2/proxy/: bar (200; 28.2138ms) + Mar 7 02:29:26.095: INFO: (16) /api/v1/namespaces/proxy-4438/pods/http:proxy-service-gdzs8-fklcc:162/proxy/: bar (200; 3.796073ms) + Mar 7 02:29:26.095: INFO: (16) /api/v1/namespaces/proxy-4438/pods/http:proxy-service-gdzs8-fklcc:160/proxy/: foo (200; 3.914508ms) + Mar 7 02:29:26.096: INFO: (16) /api/v1/namespaces/proxy-4438/pods/https:proxy-service-gdzs8-fklcc:462/proxy/: tls qux (200; 3.990114ms) + Mar 7 02:29:26.100: INFO: (16) /api/v1/namespaces/proxy-4438/pods/proxy-service-gdzs8-fklcc:160/proxy/: foo (200; 8.412449ms) + Mar 7 02:29:26.100: INFO: (16) /api/v1/namespaces/proxy-4438/pods/http:proxy-service-gdzs8-fklcc:1080/proxy/: ... (200; 8.494097ms) + Mar 7 02:29:26.100: INFO: (16) /api/v1/namespaces/proxy-4438/pods/proxy-service-gdzs8-fklcc:1080/proxy/: test<... 
(200; 8.572614ms) + Mar 7 02:29:26.101: INFO: (16) /api/v1/namespaces/proxy-4438/pods/https:proxy-service-gdzs8-fklcc:443/proxy/: test (200; 8.937725ms) + Mar 7 02:29:26.101: INFO: (16) /api/v1/namespaces/proxy-4438/pods/proxy-service-gdzs8-fklcc:162/proxy/: bar (200; 8.562816ms) + Mar 7 02:29:26.101: INFO: (16) /api/v1/namespaces/proxy-4438/services/proxy-service-gdzs8:portname2/proxy/: bar (200; 9.23149ms) + Mar 7 02:29:26.102: INFO: (16) /api/v1/namespaces/proxy-4438/services/proxy-service-gdzs8:portname1/proxy/: foo (200; 9.735418ms) + Mar 7 02:29:26.102: INFO: (16) /api/v1/namespaces/proxy-4438/services/https:proxy-service-gdzs8:tlsportname1/proxy/: tls baz (200; 9.882158ms) + Mar 7 02:29:26.102: INFO: (16) /api/v1/namespaces/proxy-4438/services/http:proxy-service-gdzs8:portname2/proxy/: bar (200; 10.30345ms) + Mar 7 02:29:26.102: INFO: (16) /api/v1/namespaces/proxy-4438/services/https:proxy-service-gdzs8:tlsportname2/proxy/: tls qux (200; 10.148559ms) + Mar 7 02:29:26.103: INFO: (16) /api/v1/namespaces/proxy-4438/services/http:proxy-service-gdzs8:portname1/proxy/: foo (200; 11.072794ms) + Mar 7 02:29:26.110: INFO: (17) /api/v1/namespaces/proxy-4438/pods/proxy-service-gdzs8-fklcc:160/proxy/: foo (200; 6.822086ms) + Mar 7 02:29:26.110: INFO: (17) /api/v1/namespaces/proxy-4438/pods/http:proxy-service-gdzs8-fklcc:1080/proxy/: ... (200; 6.571182ms) + Mar 7 02:29:26.110: INFO: (17) /api/v1/namespaces/proxy-4438/pods/proxy-service-gdzs8-fklcc:1080/proxy/: test<... (200; 6.662597ms) + Mar 7 02:29:26.110: INFO: (17) /api/v1/namespaces/proxy-4438/pods/proxy-service-gdzs8-fklcc/proxy/: test (200; 6.860038ms) + Mar 7 02:29:26.110: INFO: (17) /api/v1/namespaces/proxy-4438/pods/http:proxy-service-gdzs8-fklcc:162/proxy/: bar (200; 6.897222ms) + Mar 7 02:29:26.111: INFO: (17) /api/v1/namespaces/proxy-4438/pods/https:proxy-service-gdzs8-fklcc:443/proxy/: ... (200; 4.685792ms) + Mar 7 02:29:26.120: INFO: (18) /api/v1/namespaces/proxy-4438/pods/proxy-service-gdzs8-fklcc:1080/proxy/: test<... 
(200; 5.028865ms) + Mar 7 02:29:26.120: INFO: (18) /api/v1/namespaces/proxy-4438/services/proxy-service-gdzs8:portname1/proxy/: foo (200; 5.552022ms) + Mar 7 02:29:26.120: INFO: (18) /api/v1/namespaces/proxy-4438/pods/proxy-service-gdzs8-fklcc:160/proxy/: foo (200; 5.259554ms) + Mar 7 02:29:26.122: INFO: (18) /api/v1/namespaces/proxy-4438/pods/https:proxy-service-gdzs8-fklcc:460/proxy/: tls baz (200; 6.112451ms) + Mar 7 02:29:26.122: INFO: (18) /api/v1/namespaces/proxy-4438/services/http:proxy-service-gdzs8:portname2/proxy/: bar (200; 6.977853ms) + Mar 7 02:29:26.122: INFO: (18) /api/v1/namespaces/proxy-4438/services/http:proxy-service-gdzs8:portname1/proxy/: foo (200; 6.540264ms) + Mar 7 02:29:26.122: INFO: (18) /api/v1/namespaces/proxy-4438/services/https:proxy-service-gdzs8:tlsportname2/proxy/: tls qux (200; 6.949365ms) + Mar 7 02:29:26.122: INFO: (18) /api/v1/namespaces/proxy-4438/pods/proxy-service-gdzs8-fklcc/proxy/: test (200; 6.405713ms) + Mar 7 02:29:26.122: INFO: (18) /api/v1/namespaces/proxy-4438/pods/http:proxy-service-gdzs8-fklcc:162/proxy/: bar (200; 6.513256ms) + Mar 7 02:29:26.123: INFO: (18) /api/v1/namespaces/proxy-4438/services/proxy-service-gdzs8:portname2/proxy/: bar (200; 8.532676ms) + Mar 7 02:29:26.123: INFO: (18) /api/v1/namespaces/proxy-4438/pods/https:proxy-service-gdzs8-fklcc:462/proxy/: tls qux (200; 7.915724ms) + Mar 7 02:29:26.123: INFO: (18) /api/v1/namespaces/proxy-4438/services/https:proxy-service-gdzs8:tlsportname1/proxy/: tls baz (200; 8.47938ms) + Mar 7 02:29:26.124: INFO: (18) /api/v1/namespaces/proxy-4438/pods/proxy-service-gdzs8-fklcc:162/proxy/: bar (200; 8.227191ms) + Mar 7 02:29:26.128: INFO: (19) /api/v1/namespaces/proxy-4438/pods/http:proxy-service-gdzs8-fklcc:162/proxy/: bar (200; 4.460578ms) + Mar 7 02:29:26.128: INFO: (19) /api/v1/namespaces/proxy-4438/pods/proxy-service-gdzs8-fklcc:1080/proxy/: test<... (200; 4.535919ms) + Mar 7 02:29:26.128: INFO: (19) /api/v1/namespaces/proxy-4438/pods/http:proxy-service-gdzs8-fklcc:160/proxy/: foo (200; 4.71383ms) + Mar 7 02:29:26.128: INFO: (19) /api/v1/namespaces/proxy-4438/pods/proxy-service-gdzs8-fklcc/proxy/: test (200; 4.652765ms) + Mar 7 02:29:26.128: INFO: (19) /api/v1/namespaces/proxy-4438/pods/proxy-service-gdzs8-fklcc:162/proxy/: bar (200; 4.632076ms) + Mar 7 02:29:26.129: INFO: (19) /api/v1/namespaces/proxy-4438/pods/http:proxy-service-gdzs8-fklcc:1080/proxy/: ... 
(200; 5.335183ms) + Mar 7 02:29:26.129: INFO: (19) /api/v1/namespaces/proxy-4438/pods/https:proxy-service-gdzs8-fklcc:460/proxy/: tls baz (200; 5.394959ms) + Mar 7 02:29:26.129: INFO: (19) /api/v1/namespaces/proxy-4438/pods/https:proxy-service-gdzs8-fklcc:443/proxy/: >> kubeConfig: /tmp/kubeconfig-1093879902 +STEP: Building a namespace api object, basename cronjob 03/07/23 02:29:29.325 +STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 02:29:29.339 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 02:29:29.342 +[It] should support CronJob API operations [Conformance] + test/e2e/apps/cronjob.go:319 +STEP: Creating a cronjob 03/07/23 02:29:29.343 +STEP: creating 03/07/23 02:29:29.344 +STEP: getting 03/07/23 02:29:29.35 +STEP: listing 03/07/23 02:29:29.352 +STEP: watching 03/07/23 02:29:29.354 +Mar 7 02:29:29.354: INFO: starting watch +STEP: cluster-wide listing 03/07/23 02:29:29.355 +STEP: cluster-wide watching 03/07/23 02:29:29.357 +Mar 7 02:29:29.357: INFO: starting watch +STEP: patching 03/07/23 02:29:29.358 +STEP: updating 03/07/23 02:29:29.363 +Mar 7 02:29:29.383: INFO: waiting for watch events with expected annotations +Mar 7 02:29:29.383: INFO: saw patched and updated annotations +STEP: patching /status 03/07/23 02:29:29.383 +STEP: updating /status 03/07/23 02:29:29.39 +STEP: get /status 03/07/23 02:29:29.396 +STEP: deleting 03/07/23 02:29:29.408 +STEP: deleting a collection 03/07/23 02:29:29.423 +[AfterEach] [sig-apps] CronJob + test/e2e/framework/framework.go:187 +Mar 7 02:29:29.432: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "cronjob-3789" for this suite. 03/07/23 02:29:29.435 +{"msg":"PASSED [sig-apps] CronJob should support CronJob API operations [Conformance]","completed":26,"skipped":388,"failed":0} +------------------------------ +• [0.117 seconds] +[sig-apps] CronJob +test/e2e/apps/framework.go:23 + should support CronJob API operations [Conformance] + test/e2e/apps/cronjob.go:319 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-apps] CronJob + test/e2e/framework/framework.go:186 + STEP: Creating a kubernetes client 03/07/23 02:29:29.324 + Mar 7 02:29:29.324: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 + STEP: Building a namespace api object, basename cronjob 03/07/23 02:29:29.325 + STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 02:29:29.339 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 02:29:29.342 + [It] should support CronJob API operations [Conformance] + test/e2e/apps/cronjob.go:319 + STEP: Creating a cronjob 03/07/23 02:29:29.343 + STEP: creating 03/07/23 02:29:29.344 + STEP: getting 03/07/23 02:29:29.35 + STEP: listing 03/07/23 02:29:29.352 + STEP: watching 03/07/23 02:29:29.354 + Mar 7 02:29:29.354: INFO: starting watch + STEP: cluster-wide listing 03/07/23 02:29:29.355 + STEP: cluster-wide watching 03/07/23 02:29:29.357 + Mar 7 02:29:29.357: INFO: starting watch + STEP: patching 03/07/23 02:29:29.358 + STEP: updating 03/07/23 02:29:29.363 + Mar 7 02:29:29.383: INFO: waiting for watch events with expected annotations + Mar 7 02:29:29.383: INFO: saw patched and updated annotations + STEP: patching /status 03/07/23 02:29:29.383 + STEP: updating /status 03/07/23 02:29:29.39 + STEP: get /status 03/07/23 02:29:29.396 + STEP: deleting 03/07/23 02:29:29.408 + STEP: deleting a collection 03/07/23 02:29:29.423 + [AfterEach] [sig-apps] CronJob + test/e2e/framework/framework.go:187 + Mar 7 
02:29:29.432: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + STEP: Destroying namespace "cronjob-3789" for this suite. 03/07/23 02:29:29.435 + << End Captured GinkgoWriter Output +------------------------------ +S +------------------------------ +[sig-node] Kubelet when scheduling a busybox command that always fails in a pod + should be possible to delete [NodeConformance] [Conformance] + test/e2e/common/node/kubelet.go:135 +[BeforeEach] [sig-node] Kubelet + test/e2e/framework/framework.go:186 +STEP: Creating a kubernetes client 03/07/23 02:29:29.442 +Mar 7 02:29:29.443: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 +STEP: Building a namespace api object, basename kubelet-test 03/07/23 02:29:29.443 +STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 02:29:29.464 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 02:29:29.466 +[BeforeEach] [sig-node] Kubelet + test/e2e/common/node/kubelet.go:41 +[BeforeEach] when scheduling a busybox command that always fails in a pod + test/e2e/common/node/kubelet.go:85 +[It] should be possible to delete [NodeConformance] [Conformance] + test/e2e/common/node/kubelet.go:135 +[AfterEach] [sig-node] Kubelet + test/e2e/framework/framework.go:187 +Mar 7 02:29:29.486: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "kubelet-test-68" for this suite. 03/07/23 02:29:29.49 +{"msg":"PASSED [sig-node] Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete [NodeConformance] [Conformance]","completed":27,"skipped":389,"failed":0} +------------------------------ +• [0.056 seconds] +[sig-node] Kubelet +test/e2e/common/node/framework.go:23 + when scheduling a busybox command that always fails in a pod + test/e2e/common/node/kubelet.go:82 + should be possible to delete [NodeConformance] [Conformance] + test/e2e/common/node/kubelet.go:135 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-node] Kubelet + test/e2e/framework/framework.go:186 + STEP: Creating a kubernetes client 03/07/23 02:29:29.442 + Mar 7 02:29:29.443: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 + STEP: Building a namespace api object, basename kubelet-test 03/07/23 02:29:29.443 + STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 02:29:29.464 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 02:29:29.466 + [BeforeEach] [sig-node] Kubelet + test/e2e/common/node/kubelet.go:41 + [BeforeEach] when scheduling a busybox command that always fails in a pod + test/e2e/common/node/kubelet.go:85 + [It] should be possible to delete [NodeConformance] [Conformance] + test/e2e/common/node/kubelet.go:135 + [AfterEach] [sig-node] Kubelet + test/e2e/framework/framework.go:187 + Mar 7 02:29:29.486: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + STEP: Destroying namespace "kubelet-test-68" for this suite. 
03/07/23 02:29:29.49 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSS +------------------------------ +[sig-node] Probing container + should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] + test/e2e/common/node/container_probe.go:165 +[BeforeEach] [sig-node] Probing container + test/e2e/framework/framework.go:186 +STEP: Creating a kubernetes client 03/07/23 02:29:29.497 +Mar 7 02:29:29.498: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 +STEP: Building a namespace api object, basename container-probe 03/07/23 02:29:29.498 +STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 02:29:29.519 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 02:29:29.521 +[BeforeEach] [sig-node] Probing container + test/e2e/common/node/container_probe.go:59 +[It] should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] + test/e2e/common/node/container_probe.go:165 +STEP: Creating pod liveness-08bbf7d2-dd6e-483c-aae4-77b01e62655d in namespace container-probe-4426 03/07/23 02:29:29.522 +Mar 7 02:29:29.531: INFO: Waiting up to 5m0s for pod "liveness-08bbf7d2-dd6e-483c-aae4-77b01e62655d" in namespace "container-probe-4426" to be "not pending" +Mar 7 02:29:29.550: INFO: Pod "liveness-08bbf7d2-dd6e-483c-aae4-77b01e62655d": Phase="Pending", Reason="", readiness=false. Elapsed: 18.616499ms +Mar 7 02:29:31.590: INFO: Pod "liveness-08bbf7d2-dd6e-483c-aae4-77b01e62655d": Phase="Running", Reason="", readiness=true. Elapsed: 2.058632199s +Mar 7 02:29:31.590: INFO: Pod "liveness-08bbf7d2-dd6e-483c-aae4-77b01e62655d" satisfied condition "not pending" +Mar 7 02:29:31.590: INFO: Started pod liveness-08bbf7d2-dd6e-483c-aae4-77b01e62655d in namespace container-probe-4426 +STEP: checking the pod's current state and verifying that restartCount is present 03/07/23 02:29:31.59 +Mar 7 02:29:31.593: INFO: Initial restart count of pod liveness-08bbf7d2-dd6e-483c-aae4-77b01e62655d is 0 +Mar 7 02:29:51.659: INFO: Restart count of pod container-probe-4426/liveness-08bbf7d2-dd6e-483c-aae4-77b01e62655d is now 1 (20.065943567s elapsed) +STEP: deleting the pod 03/07/23 02:29:51.659 +[AfterEach] [sig-node] Probing container + test/e2e/framework/framework.go:187 +Mar 7 02:29:51.670: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "container-probe-4426" for this suite. 
03/07/23 02:29:51.679 +{"msg":"PASSED [sig-node] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","completed":28,"skipped":398,"failed":0} +------------------------------ +• [SLOW TEST] [22.186 seconds] +[sig-node] Probing container +test/e2e/common/node/framework.go:23 + should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] + test/e2e/common/node/container_probe.go:165 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-node] Probing container + test/e2e/framework/framework.go:186 + STEP: Creating a kubernetes client 03/07/23 02:29:29.497 + Mar 7 02:29:29.498: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 + STEP: Building a namespace api object, basename container-probe 03/07/23 02:29:29.498 + STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 02:29:29.519 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 02:29:29.521 + [BeforeEach] [sig-node] Probing container + test/e2e/common/node/container_probe.go:59 + [It] should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] + test/e2e/common/node/container_probe.go:165 + STEP: Creating pod liveness-08bbf7d2-dd6e-483c-aae4-77b01e62655d in namespace container-probe-4426 03/07/23 02:29:29.522 + Mar 7 02:29:29.531: INFO: Waiting up to 5m0s for pod "liveness-08bbf7d2-dd6e-483c-aae4-77b01e62655d" in namespace "container-probe-4426" to be "not pending" + Mar 7 02:29:29.550: INFO: Pod "liveness-08bbf7d2-dd6e-483c-aae4-77b01e62655d": Phase="Pending", Reason="", readiness=false. Elapsed: 18.616499ms + Mar 7 02:29:31.590: INFO: Pod "liveness-08bbf7d2-dd6e-483c-aae4-77b01e62655d": Phase="Running", Reason="", readiness=true. Elapsed: 2.058632199s + Mar 7 02:29:31.590: INFO: Pod "liveness-08bbf7d2-dd6e-483c-aae4-77b01e62655d" satisfied condition "not pending" + Mar 7 02:29:31.590: INFO: Started pod liveness-08bbf7d2-dd6e-483c-aae4-77b01e62655d in namespace container-probe-4426 + STEP: checking the pod's current state and verifying that restartCount is present 03/07/23 02:29:31.59 + Mar 7 02:29:31.593: INFO: Initial restart count of pod liveness-08bbf7d2-dd6e-483c-aae4-77b01e62655d is 0 + Mar 7 02:29:51.659: INFO: Restart count of pod container-probe-4426/liveness-08bbf7d2-dd6e-483c-aae4-77b01e62655d is now 1 (20.065943567s elapsed) + STEP: deleting the pod 03/07/23 02:29:51.659 + [AfterEach] [sig-node] Probing container + test/e2e/framework/framework.go:187 + Mar 7 02:29:51.670: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + STEP: Destroying namespace "container-probe-4426" for this suite. 
03/07/23 02:29:51.679 + << End Captured GinkgoWriter Output +------------------------------ +SSS +------------------------------ +[sig-network] Services + should be able to change the type from ExternalName to NodePort [Conformance] + test/e2e/network/service.go:1443 +[BeforeEach] [sig-network] Services + test/e2e/framework/framework.go:186 +STEP: Creating a kubernetes client 03/07/23 02:29:51.684 +Mar 7 02:29:51.684: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 +STEP: Building a namespace api object, basename services 03/07/23 02:29:51.685 +STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 02:29:51.702 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 02:29:51.703 +[BeforeEach] [sig-network] Services + test/e2e/network/service.go:758 +[It] should be able to change the type from ExternalName to NodePort [Conformance] + test/e2e/network/service.go:1443 +STEP: creating a service externalname-service with the type=ExternalName in namespace services-6903 03/07/23 02:29:51.705 +STEP: changing the ExternalName service to type=NodePort 03/07/23 02:29:51.711 +STEP: creating replication controller externalname-service in namespace services-6903 03/07/23 02:29:51.731 +I0307 02:29:51.746922 22 runners.go:193] Created replication controller with name: externalname-service, namespace: services-6903, replica count: 2 +I0307 02:29:54.798048 22 runners.go:193] externalname-service Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady +Mar 7 02:29:54.798: INFO: Creating new exec pod +Mar 7 02:29:54.801: INFO: Waiting up to 5m0s for pod "execpodzpblw" in namespace "services-6903" to be "running" +Mar 7 02:29:54.805: INFO: Pod "execpodzpblw": Phase="Pending", Reason="", readiness=false. Elapsed: 3.228154ms +Mar 7 02:29:56.808: INFO: Pod "execpodzpblw": Phase="Running", Reason="", readiness=true. 
Elapsed: 2.006770321s +Mar 7 02:29:56.808: INFO: Pod "execpodzpblw" satisfied condition "running" +Mar 7 02:29:57.812: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1093879902 --namespace=services-6903 exec execpodzpblw -- /bin/sh -x -c echo hostName | nc -v -t -w 2 externalname-service 80' +Mar 7 02:29:58.004: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 externalname-service 80\nConnection to externalname-service 80 port [tcp/http] succeeded!\n" +Mar 7 02:29:58.004: INFO: stdout: "" +Mar 7 02:29:59.005: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1093879902 --namespace=services-6903 exec execpodzpblw -- /bin/sh -x -c echo hostName | nc -v -t -w 2 externalname-service 80' +Mar 7 02:29:59.187: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 externalname-service 80\nConnection to externalname-service 80 port [tcp/http] succeeded!\n" +Mar 7 02:29:59.188: INFO: stdout: "" +Mar 7 02:30:00.005: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1093879902 --namespace=services-6903 exec execpodzpblw -- /bin/sh -x -c echo hostName | nc -v -t -w 2 externalname-service 80' +Mar 7 02:30:00.186: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 externalname-service 80\nConnection to externalname-service 80 port [tcp/http] succeeded!\n" +Mar 7 02:30:00.186: INFO: stdout: "externalname-service-9sggp" +Mar 7 02:30:00.186: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1093879902 --namespace=services-6903 exec execpodzpblw -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.96.181.213 80' +Mar 7 02:30:00.360: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 10.96.181.213 80\nConnection to 10.96.181.213 80 port [tcp/http] succeeded!\n" +Mar 7 02:30:00.360: INFO: stdout: "externalname-service-wp2tl" +Mar 7 02:30:00.360: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1093879902 --namespace=services-6903 exec execpodzpblw -- /bin/sh -x -c echo hostName | nc -v -t -w 2 192.168.1.102 31079' +Mar 7 02:30:00.546: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 192.168.1.102 31079\nConnection to 192.168.1.102 31079 port [tcp/*] succeeded!\n" +Mar 7 02:30:00.546: INFO: stdout: "externalname-service-9sggp" +Mar 7 02:30:00.546: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1093879902 --namespace=services-6903 exec execpodzpblw -- /bin/sh -x -c echo hostName | nc -v -t -w 2 192.168.1.101 31079' +Mar 7 02:30:00.746: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 192.168.1.101 31079\nConnection to 192.168.1.101 31079 port [tcp/*] succeeded!\n" +Mar 7 02:30:00.746: INFO: stdout: "externalname-service-9sggp" +Mar 7 02:30:00.746: INFO: Cleaning up the ExternalName to NodePort test service +[AfterEach] [sig-network] Services + test/e2e/framework/framework.go:187 +Mar 7 02:30:00.773: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "services-6903" for this suite. 
03/07/23 02:30:00.777 +[AfterEach] [sig-network] Services + test/e2e/network/service.go:762 +{"msg":"PASSED [sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]","completed":29,"skipped":401,"failed":0} +------------------------------ +• [SLOW TEST] [9.111 seconds] +[sig-network] Services +test/e2e/network/common/framework.go:23 + should be able to change the type from ExternalName to NodePort [Conformance] + test/e2e/network/service.go:1443 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-network] Services + test/e2e/framework/framework.go:186 + STEP: Creating a kubernetes client 03/07/23 02:29:51.684 + Mar 7 02:29:51.684: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 + STEP: Building a namespace api object, basename services 03/07/23 02:29:51.685 + STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 02:29:51.702 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 02:29:51.703 + [BeforeEach] [sig-network] Services + test/e2e/network/service.go:758 + [It] should be able to change the type from ExternalName to NodePort [Conformance] + test/e2e/network/service.go:1443 + STEP: creating a service externalname-service with the type=ExternalName in namespace services-6903 03/07/23 02:29:51.705 + STEP: changing the ExternalName service to type=NodePort 03/07/23 02:29:51.711 + STEP: creating replication controller externalname-service in namespace services-6903 03/07/23 02:29:51.731 + I0307 02:29:51.746922 22 runners.go:193] Created replication controller with name: externalname-service, namespace: services-6903, replica count: 2 + I0307 02:29:54.798048 22 runners.go:193] externalname-service Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady + Mar 7 02:29:54.798: INFO: Creating new exec pod + Mar 7 02:29:54.801: INFO: Waiting up to 5m0s for pod "execpodzpblw" in namespace "services-6903" to be "running" + Mar 7 02:29:54.805: INFO: Pod "execpodzpblw": Phase="Pending", Reason="", readiness=false. Elapsed: 3.228154ms + Mar 7 02:29:56.808: INFO: Pod "execpodzpblw": Phase="Running", Reason="", readiness=true. 
Elapsed: 2.006770321s + Mar 7 02:29:56.808: INFO: Pod "execpodzpblw" satisfied condition "running" + Mar 7 02:29:57.812: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1093879902 --namespace=services-6903 exec execpodzpblw -- /bin/sh -x -c echo hostName | nc -v -t -w 2 externalname-service 80' + Mar 7 02:29:58.004: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 externalname-service 80\nConnection to externalname-service 80 port [tcp/http] succeeded!\n" + Mar 7 02:29:58.004: INFO: stdout: "" + Mar 7 02:29:59.005: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1093879902 --namespace=services-6903 exec execpodzpblw -- /bin/sh -x -c echo hostName | nc -v -t -w 2 externalname-service 80' + Mar 7 02:29:59.187: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 externalname-service 80\nConnection to externalname-service 80 port [tcp/http] succeeded!\n" + Mar 7 02:29:59.188: INFO: stdout: "" + Mar 7 02:30:00.005: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1093879902 --namespace=services-6903 exec execpodzpblw -- /bin/sh -x -c echo hostName | nc -v -t -w 2 externalname-service 80' + Mar 7 02:30:00.186: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 externalname-service 80\nConnection to externalname-service 80 port [tcp/http] succeeded!\n" + Mar 7 02:30:00.186: INFO: stdout: "externalname-service-9sggp" + Mar 7 02:30:00.186: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1093879902 --namespace=services-6903 exec execpodzpblw -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.96.181.213 80' + Mar 7 02:30:00.360: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 10.96.181.213 80\nConnection to 10.96.181.213 80 port [tcp/http] succeeded!\n" + Mar 7 02:30:00.360: INFO: stdout: "externalname-service-wp2tl" + Mar 7 02:30:00.360: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1093879902 --namespace=services-6903 exec execpodzpblw -- /bin/sh -x -c echo hostName | nc -v -t -w 2 192.168.1.102 31079' + Mar 7 02:30:00.546: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 192.168.1.102 31079\nConnection to 192.168.1.102 31079 port [tcp/*] succeeded!\n" + Mar 7 02:30:00.546: INFO: stdout: "externalname-service-9sggp" + Mar 7 02:30:00.546: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1093879902 --namespace=services-6903 exec execpodzpblw -- /bin/sh -x -c echo hostName | nc -v -t -w 2 192.168.1.101 31079' + Mar 7 02:30:00.746: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 192.168.1.101 31079\nConnection to 192.168.1.101 31079 port [tcp/*] succeeded!\n" + Mar 7 02:30:00.746: INFO: stdout: "externalname-service-9sggp" + Mar 7 02:30:00.746: INFO: Cleaning up the ExternalName to NodePort test service + [AfterEach] [sig-network] Services + test/e2e/framework/framework.go:187 + Mar 7 02:30:00.773: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + STEP: Destroying namespace "services-6903" for this suite. 
03/07/23 02:30:00.777 + [AfterEach] [sig-network] Services + test/e2e/network/service.go:762 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-storage] ConfigMap + should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] + test/e2e/common/storage/configmap_volume.go:422 +[BeforeEach] [sig-storage] ConfigMap + test/e2e/framework/framework.go:186 +STEP: Creating a kubernetes client 03/07/23 02:30:00.796 +Mar 7 02:30:00.796: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 +STEP: Building a namespace api object, basename configmap 03/07/23 02:30:00.797 +STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 02:30:00.813 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 02:30:00.82 +[It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] + test/e2e/common/storage/configmap_volume.go:422 +STEP: Creating configMap with name configmap-test-volume-27650395-9b36-4a63-b140-497411ac4d10 03/07/23 02:30:00.822 +STEP: Creating a pod to test consume configMaps 03/07/23 02:30:00.832 +Mar 7 02:30:00.839: INFO: Waiting up to 5m0s for pod "pod-configmaps-7042b1dd-4a5a-4970-bf4b-70d95bbfb8ca" in namespace "configmap-6452" to be "Succeeded or Failed" +Mar 7 02:30:00.849: INFO: Pod "pod-configmaps-7042b1dd-4a5a-4970-bf4b-70d95bbfb8ca": Phase="Pending", Reason="", readiness=false. Elapsed: 10.384117ms +Mar 7 02:30:02.853: INFO: Pod "pod-configmaps-7042b1dd-4a5a-4970-bf4b-70d95bbfb8ca": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013816785s +Mar 7 02:30:04.853: INFO: Pod "pod-configmaps-7042b1dd-4a5a-4970-bf4b-70d95bbfb8ca": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.013764472s +STEP: Saw pod success 03/07/23 02:30:04.853 +Mar 7 02:30:04.853: INFO: Pod "pod-configmaps-7042b1dd-4a5a-4970-bf4b-70d95bbfb8ca" satisfied condition "Succeeded or Failed" +Mar 7 02:30:04.855: INFO: Trying to get logs from node node-2 pod pod-configmaps-7042b1dd-4a5a-4970-bf4b-70d95bbfb8ca container configmap-volume-test: +STEP: delete the pod 03/07/23 02:30:04.859 +Mar 7 02:30:04.866: INFO: Waiting for pod pod-configmaps-7042b1dd-4a5a-4970-bf4b-70d95bbfb8ca to disappear +Mar 7 02:30:04.868: INFO: Pod pod-configmaps-7042b1dd-4a5a-4970-bf4b-70d95bbfb8ca no longer exists +[AfterEach] [sig-storage] ConfigMap + test/e2e/framework/framework.go:187 +Mar 7 02:30:04.868: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "configmap-6452" for this suite. 
03/07/23 02:30:04.872 +{"msg":"PASSED [sig-storage] ConfigMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","completed":30,"skipped":426,"failed":0} +------------------------------ +• [4.080 seconds] +[sig-storage] ConfigMap +test/e2e/common/storage/framework.go:23 + should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] + test/e2e/common/storage/configmap_volume.go:422 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-storage] ConfigMap + test/e2e/framework/framework.go:186 + STEP: Creating a kubernetes client 03/07/23 02:30:00.796 + Mar 7 02:30:00.796: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 + STEP: Building a namespace api object, basename configmap 03/07/23 02:30:00.797 + STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 02:30:00.813 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 02:30:00.82 + [It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] + test/e2e/common/storage/configmap_volume.go:422 + STEP: Creating configMap with name configmap-test-volume-27650395-9b36-4a63-b140-497411ac4d10 03/07/23 02:30:00.822 + STEP: Creating a pod to test consume configMaps 03/07/23 02:30:00.832 + Mar 7 02:30:00.839: INFO: Waiting up to 5m0s for pod "pod-configmaps-7042b1dd-4a5a-4970-bf4b-70d95bbfb8ca" in namespace "configmap-6452" to be "Succeeded or Failed" + Mar 7 02:30:00.849: INFO: Pod "pod-configmaps-7042b1dd-4a5a-4970-bf4b-70d95bbfb8ca": Phase="Pending", Reason="", readiness=false. Elapsed: 10.384117ms + Mar 7 02:30:02.853: INFO: Pod "pod-configmaps-7042b1dd-4a5a-4970-bf4b-70d95bbfb8ca": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013816785s + Mar 7 02:30:04.853: INFO: Pod "pod-configmaps-7042b1dd-4a5a-4970-bf4b-70d95bbfb8ca": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.013764472s + STEP: Saw pod success 03/07/23 02:30:04.853 + Mar 7 02:30:04.853: INFO: Pod "pod-configmaps-7042b1dd-4a5a-4970-bf4b-70d95bbfb8ca" satisfied condition "Succeeded or Failed" + Mar 7 02:30:04.855: INFO: Trying to get logs from node node-2 pod pod-configmaps-7042b1dd-4a5a-4970-bf4b-70d95bbfb8ca container configmap-volume-test: + STEP: delete the pod 03/07/23 02:30:04.859 + Mar 7 02:30:04.866: INFO: Waiting for pod pod-configmaps-7042b1dd-4a5a-4970-bf4b-70d95bbfb8ca to disappear + Mar 7 02:30:04.868: INFO: Pod pod-configmaps-7042b1dd-4a5a-4970-bf4b-70d95bbfb8ca no longer exists + [AfterEach] [sig-storage] ConfigMap + test/e2e/framework/framework.go:187 + Mar 7 02:30:04.868: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + STEP: Destroying namespace "configmap-6452" for this suite. 
03/07/23 02:30:04.872 + << End Captured GinkgoWriter Output +------------------------------ +SSS +------------------------------ +[sig-node] Containers + should be able to override the image's default command (container entrypoint) [NodeConformance] [Conformance] + test/e2e/common/node/containers.go:72 +[BeforeEach] [sig-node] Containers + test/e2e/framework/framework.go:186 +STEP: Creating a kubernetes client 03/07/23 02:30:04.877 +Mar 7 02:30:04.877: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 +STEP: Building a namespace api object, basename containers 03/07/23 02:30:04.877 +STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 02:30:04.89 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 02:30:04.892 +[It] should be able to override the image's default command (container entrypoint) [NodeConformance] [Conformance] + test/e2e/common/node/containers.go:72 +STEP: Creating a pod to test override command 03/07/23 02:30:04.893 +Mar 7 02:30:04.899: INFO: Waiting up to 5m0s for pod "client-containers-35841911-4552-40a1-9fca-92fdfaebf43c" in namespace "containers-6059" to be "Succeeded or Failed" +Mar 7 02:30:04.904: INFO: Pod "client-containers-35841911-4552-40a1-9fca-92fdfaebf43c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.735053ms +Mar 7 02:30:06.909: INFO: Pod "client-containers-35841911-4552-40a1-9fca-92fdfaebf43c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009489459s +Mar 7 02:30:08.908: INFO: Pod "client-containers-35841911-4552-40a1-9fca-92fdfaebf43c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.008663702s +STEP: Saw pod success 03/07/23 02:30:08.908 +Mar 7 02:30:08.908: INFO: Pod "client-containers-35841911-4552-40a1-9fca-92fdfaebf43c" satisfied condition "Succeeded or Failed" +Mar 7 02:30:08.910: INFO: Trying to get logs from node node-2 pod client-containers-35841911-4552-40a1-9fca-92fdfaebf43c container agnhost-container: +STEP: delete the pod 03/07/23 02:30:08.916 +Mar 7 02:30:08.924: INFO: Waiting for pod client-containers-35841911-4552-40a1-9fca-92fdfaebf43c to disappear +Mar 7 02:30:08.926: INFO: Pod client-containers-35841911-4552-40a1-9fca-92fdfaebf43c no longer exists +[AfterEach] [sig-node] Containers + test/e2e/framework/framework.go:187 +Mar 7 02:30:08.926: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "containers-6059" for this suite. 
03/07/23 02:30:08.929 +{"msg":"PASSED [sig-node] Containers should be able to override the image's default command (container entrypoint) [NodeConformance] [Conformance]","completed":31,"skipped":429,"failed":0} +------------------------------ +• [4.057 seconds] +[sig-node] Containers +test/e2e/common/node/framework.go:23 + should be able to override the image's default command (container entrypoint) [NodeConformance] [Conformance] + test/e2e/common/node/containers.go:72 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-node] Containers + test/e2e/framework/framework.go:186 + STEP: Creating a kubernetes client 03/07/23 02:30:04.877 + Mar 7 02:30:04.877: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 + STEP: Building a namespace api object, basename containers 03/07/23 02:30:04.877 + STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 02:30:04.89 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 02:30:04.892 + [It] should be able to override the image's default command (container entrypoint) [NodeConformance] [Conformance] + test/e2e/common/node/containers.go:72 + STEP: Creating a pod to test override command 03/07/23 02:30:04.893 + Mar 7 02:30:04.899: INFO: Waiting up to 5m0s for pod "client-containers-35841911-4552-40a1-9fca-92fdfaebf43c" in namespace "containers-6059" to be "Succeeded or Failed" + Mar 7 02:30:04.904: INFO: Pod "client-containers-35841911-4552-40a1-9fca-92fdfaebf43c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.735053ms + Mar 7 02:30:06.909: INFO: Pod "client-containers-35841911-4552-40a1-9fca-92fdfaebf43c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009489459s + Mar 7 02:30:08.908: INFO: Pod "client-containers-35841911-4552-40a1-9fca-92fdfaebf43c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.008663702s + STEP: Saw pod success 03/07/23 02:30:08.908 + Mar 7 02:30:08.908: INFO: Pod "client-containers-35841911-4552-40a1-9fca-92fdfaebf43c" satisfied condition "Succeeded or Failed" + Mar 7 02:30:08.910: INFO: Trying to get logs from node node-2 pod client-containers-35841911-4552-40a1-9fca-92fdfaebf43c container agnhost-container: + STEP: delete the pod 03/07/23 02:30:08.916 + Mar 7 02:30:08.924: INFO: Waiting for pod client-containers-35841911-4552-40a1-9fca-92fdfaebf43c to disappear + Mar 7 02:30:08.926: INFO: Pod client-containers-35841911-4552-40a1-9fca-92fdfaebf43c no longer exists + [AfterEach] [sig-node] Containers + test/e2e/framework/framework.go:187 + Mar 7 02:30:08.926: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + STEP: Destroying namespace "containers-6059" for this suite. 
03/07/23 02:30:08.929 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSSS +------------------------------ +[sig-apps] CronJob + should schedule multiple jobs concurrently [Conformance] + test/e2e/apps/cronjob.go:69 +[BeforeEach] [sig-apps] CronJob + test/e2e/framework/framework.go:186 +STEP: Creating a kubernetes client 03/07/23 02:30:08.934 +Mar 7 02:30:08.934: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 +STEP: Building a namespace api object, basename cronjob 03/07/23 02:30:08.935 +STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 02:30:08.947 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 02:30:08.949 +[It] should schedule multiple jobs concurrently [Conformance] + test/e2e/apps/cronjob.go:69 +STEP: Creating a cronjob 03/07/23 02:30:08.951 +STEP: Ensuring more than one job is running at a time 03/07/23 02:30:08.955 +STEP: Ensuring at least two running jobs exists by listing jobs explicitly 03/07/23 02:32:00.959 +STEP: Removing cronjob 03/07/23 02:32:00.962 +[AfterEach] [sig-apps] CronJob + test/e2e/framework/framework.go:187 +Mar 7 02:32:00.966: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "cronjob-9381" for this suite. 03/07/23 02:32:00.969 +{"msg":"PASSED [sig-apps] CronJob should schedule multiple jobs concurrently [Conformance]","completed":32,"skipped":441,"failed":0} +------------------------------ +• [SLOW TEST] [112.047 seconds] +[sig-apps] CronJob +test/e2e/apps/framework.go:23 + should schedule multiple jobs concurrently [Conformance] + test/e2e/apps/cronjob.go:69 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-apps] CronJob + test/e2e/framework/framework.go:186 + STEP: Creating a kubernetes client 03/07/23 02:30:08.934 + Mar 7 02:30:08.934: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 + STEP: Building a namespace api object, basename cronjob 03/07/23 02:30:08.935 + STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 02:30:08.947 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 02:30:08.949 + [It] should schedule multiple jobs concurrently [Conformance] + test/e2e/apps/cronjob.go:69 + STEP: Creating a cronjob 03/07/23 02:30:08.951 + STEP: Ensuring more than one job is running at a time 03/07/23 02:30:08.955 + STEP: Ensuring at least two running jobs exists by listing jobs explicitly 03/07/23 02:32:00.959 + STEP: Removing cronjob 03/07/23 02:32:00.962 + [AfterEach] [sig-apps] CronJob + test/e2e/framework/framework.go:187 + Mar 7 02:32:00.966: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + STEP: Destroying namespace "cronjob-9381" for this suite. 
03/07/23 02:32:00.969 + << End Captured GinkgoWriter Output +------------------------------ +SS +------------------------------ +[sig-scheduling] SchedulerPreemption [Serial] PriorityClass endpoints + verify PriorityClass endpoints can be operated with different HTTP methods [Conformance] + test/e2e/scheduling/preemption.go:733 +[BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] + test/e2e/framework/framework.go:186 +STEP: Creating a kubernetes client 03/07/23 02:32:00.981 +Mar 7 02:32:00.981: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 +STEP: Building a namespace api object, basename sched-preemption 03/07/23 02:32:00.982 +STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 02:32:00.994 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 02:32:01 +[BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] + test/e2e/scheduling/preemption.go:92 +Mar 7 02:32:01.014: INFO: Waiting up to 1m0s for all nodes to be ready +Mar 7 02:33:01.054: INFO: Waiting for terminating namespaces to be deleted... +[BeforeEach] PriorityClass endpoints + test/e2e/framework/framework.go:186 +STEP: Creating a kubernetes client 03/07/23 02:33:01.057 +Mar 7 02:33:01.057: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 +STEP: Building a namespace api object, basename sched-preemption-path 03/07/23 02:33:01.058 +STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 02:33:01.069 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 02:33:01.071 +[BeforeEach] PriorityClass endpoints + test/e2e/scheduling/preemption.go:690 +[It] verify PriorityClass endpoints can be operated with different HTTP methods [Conformance] + test/e2e/scheduling/preemption.go:733 +Mar 7 02:33:01.082: INFO: PriorityClass.scheduling.k8s.io "p1" is invalid: value: Forbidden: may not be changed in an update. +Mar 7 02:33:01.085: INFO: PriorityClass.scheduling.k8s.io "p2" is invalid: value: Forbidden: may not be changed in an update. +[AfterEach] PriorityClass endpoints + test/e2e/framework/framework.go:187 +Mar 7 02:33:01.101: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "sched-preemption-path-9259" for this suite. 03/07/23 02:33:01.104 +[AfterEach] PriorityClass endpoints + test/e2e/scheduling/preemption.go:706 +[AfterEach] [sig-scheduling] SchedulerPreemption [Serial] + test/e2e/framework/framework.go:187 +Mar 7 02:33:01.116: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "sched-preemption-2857" for this suite. 
03/07/23 02:33:01.119 +[AfterEach] [sig-scheduling] SchedulerPreemption [Serial] + test/e2e/scheduling/preemption.go:80 +{"msg":"PASSED [sig-scheduling] SchedulerPreemption [Serial] PriorityClass endpoints verify PriorityClass endpoints can be operated with different HTTP methods [Conformance]","completed":33,"skipped":443,"failed":0} +------------------------------ +• [SLOW TEST] [60.173 seconds] +[sig-scheduling] SchedulerPreemption [Serial] +test/e2e/scheduling/framework.go:40 + PriorityClass endpoints + test/e2e/scheduling/preemption.go:683 + verify PriorityClass endpoints can be operated with different HTTP methods [Conformance] + test/e2e/scheduling/preemption.go:733 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] + test/e2e/framework/framework.go:186 + STEP: Creating a kubernetes client 03/07/23 02:32:00.981 + Mar 7 02:32:00.981: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 + STEP: Building a namespace api object, basename sched-preemption 03/07/23 02:32:00.982 + STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 02:32:00.994 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 02:32:01 + [BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] + test/e2e/scheduling/preemption.go:92 + Mar 7 02:32:01.014: INFO: Waiting up to 1m0s for all nodes to be ready + Mar 7 02:33:01.054: INFO: Waiting for terminating namespaces to be deleted... + [BeforeEach] PriorityClass endpoints + test/e2e/framework/framework.go:186 + STEP: Creating a kubernetes client 03/07/23 02:33:01.057 + Mar 7 02:33:01.057: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 + STEP: Building a namespace api object, basename sched-preemption-path 03/07/23 02:33:01.058 + STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 02:33:01.069 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 02:33:01.071 + [BeforeEach] PriorityClass endpoints + test/e2e/scheduling/preemption.go:690 + [It] verify PriorityClass endpoints can be operated with different HTTP methods [Conformance] + test/e2e/scheduling/preemption.go:733 + Mar 7 02:33:01.082: INFO: PriorityClass.scheduling.k8s.io "p1" is invalid: value: Forbidden: may not be changed in an update. + Mar 7 02:33:01.085: INFO: PriorityClass.scheduling.k8s.io "p2" is invalid: value: Forbidden: may not be changed in an update. + [AfterEach] PriorityClass endpoints + test/e2e/framework/framework.go:187 + Mar 7 02:33:01.101: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + STEP: Destroying namespace "sched-preemption-path-9259" for this suite. 03/07/23 02:33:01.104 + [AfterEach] PriorityClass endpoints + test/e2e/scheduling/preemption.go:706 + [AfterEach] [sig-scheduling] SchedulerPreemption [Serial] + test/e2e/framework/framework.go:187 + Mar 7 02:33:01.116: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + STEP: Destroying namespace "sched-preemption-2857" for this suite. 
03/07/23 02:33:01.119 + [AfterEach] [sig-scheduling] SchedulerPreemption [Serial] + test/e2e/scheduling/preemption.go:80 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSS +------------------------------ +[sig-storage] EmptyDir volumes + should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/common/storage/empty_dir.go:146 +[BeforeEach] [sig-storage] EmptyDir volumes + test/e2e/framework/framework.go:186 +STEP: Creating a kubernetes client 03/07/23 02:33:01.155 +Mar 7 02:33:01.155: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 +STEP: Building a namespace api object, basename emptydir 03/07/23 02:33:01.156 +STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 02:33:01.169 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 02:33:01.171 +[It] should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/common/storage/empty_dir.go:146 +STEP: Creating a pod to test emptydir 0777 on tmpfs 03/07/23 02:33:01.172 +Mar 7 02:33:01.178: INFO: Waiting up to 5m0s for pod "pod-cff08303-b926-41bf-b6a1-bc2df69c2fd8" in namespace "emptydir-1326" to be "Succeeded or Failed" +Mar 7 02:33:01.180: INFO: Pod "pod-cff08303-b926-41bf-b6a1-bc2df69c2fd8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.348335ms +Mar 7 02:33:03.194: INFO: Pod "pod-cff08303-b926-41bf-b6a1-bc2df69c2fd8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.016352785s +Mar 7 02:33:05.185: INFO: Pod "pod-cff08303-b926-41bf-b6a1-bc2df69c2fd8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.006541624s +STEP: Saw pod success 03/07/23 02:33:05.185 +Mar 7 02:33:05.185: INFO: Pod "pod-cff08303-b926-41bf-b6a1-bc2df69c2fd8" satisfied condition "Succeeded or Failed" +Mar 7 02:33:05.187: INFO: Trying to get logs from node node-2 pod pod-cff08303-b926-41bf-b6a1-bc2df69c2fd8 container test-container: +STEP: delete the pod 03/07/23 02:33:05.199 +Mar 7 02:33:05.206: INFO: Waiting for pod pod-cff08303-b926-41bf-b6a1-bc2df69c2fd8 to disappear +Mar 7 02:33:05.210: INFO: Pod pod-cff08303-b926-41bf-b6a1-bc2df69c2fd8 no longer exists +[AfterEach] [sig-storage] EmptyDir volumes + test/e2e/framework/framework.go:187 +Mar 7 02:33:05.210: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "emptydir-1326" for this suite. 
03/07/23 02:33:05.213 +{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","completed":34,"skipped":452,"failed":0} +------------------------------ +• [4.062 seconds] +[sig-storage] EmptyDir volumes +test/e2e/common/storage/framework.go:23 + should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/common/storage/empty_dir.go:146 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-storage] EmptyDir volumes + test/e2e/framework/framework.go:186 + STEP: Creating a kubernetes client 03/07/23 02:33:01.155 + Mar 7 02:33:01.155: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 + STEP: Building a namespace api object, basename emptydir 03/07/23 02:33:01.156 + STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 02:33:01.169 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 02:33:01.171 + [It] should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/common/storage/empty_dir.go:146 + STEP: Creating a pod to test emptydir 0777 on tmpfs 03/07/23 02:33:01.172 + Mar 7 02:33:01.178: INFO: Waiting up to 5m0s for pod "pod-cff08303-b926-41bf-b6a1-bc2df69c2fd8" in namespace "emptydir-1326" to be "Succeeded or Failed" + Mar 7 02:33:01.180: INFO: Pod "pod-cff08303-b926-41bf-b6a1-bc2df69c2fd8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.348335ms + Mar 7 02:33:03.194: INFO: Pod "pod-cff08303-b926-41bf-b6a1-bc2df69c2fd8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.016352785s + Mar 7 02:33:05.185: INFO: Pod "pod-cff08303-b926-41bf-b6a1-bc2df69c2fd8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.006541624s + STEP: Saw pod success 03/07/23 02:33:05.185 + Mar 7 02:33:05.185: INFO: Pod "pod-cff08303-b926-41bf-b6a1-bc2df69c2fd8" satisfied condition "Succeeded or Failed" + Mar 7 02:33:05.187: INFO: Trying to get logs from node node-2 pod pod-cff08303-b926-41bf-b6a1-bc2df69c2fd8 container test-container: + STEP: delete the pod 03/07/23 02:33:05.199 + Mar 7 02:33:05.206: INFO: Waiting for pod pod-cff08303-b926-41bf-b6a1-bc2df69c2fd8 to disappear + Mar 7 02:33:05.210: INFO: Pod pod-cff08303-b926-41bf-b6a1-bc2df69c2fd8 no longer exists + [AfterEach] [sig-storage] EmptyDir volumes + test/e2e/framework/framework.go:187 + Mar 7 02:33:05.210: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + STEP: Destroying namespace "emptydir-1326" for this suite. 03/07/23 02:33:05.213 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSS +------------------------------ +[sig-api-machinery] ResourceQuota + should create a ResourceQuota and capture the life of a service. [Conformance] + test/e2e/apimachinery/resource_quota.go:90 +[BeforeEach] [sig-api-machinery] ResourceQuota + test/e2e/framework/framework.go:186 +STEP: Creating a kubernetes client 03/07/23 02:33:05.219 +Mar 7 02:33:05.219: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 +STEP: Building a namespace api object, basename resourcequota 03/07/23 02:33:05.219 +STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 02:33:05.231 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 02:33:05.233 +[It] should create a ResourceQuota and capture the life of a service. 
[Conformance] + test/e2e/apimachinery/resource_quota.go:90 +STEP: Counting existing ResourceQuota 03/07/23 02:33:05.235 +STEP: Creating a ResourceQuota 03/07/23 02:33:10.237 +STEP: Ensuring resource quota status is calculated 03/07/23 02:33:10.242 +STEP: Creating a Service 03/07/23 02:33:12.246 +STEP: Creating a NodePort Service 03/07/23 02:33:12.264 +STEP: Not allowing a LoadBalancer Service with NodePort to be created that exceeds remaining quota 03/07/23 02:33:12.294 +STEP: Ensuring resource quota status captures service creation 03/07/23 02:33:12.335 +STEP: Deleting Services 03/07/23 02:33:14.339 +STEP: Ensuring resource quota status released usage 03/07/23 02:33:14.399 +[AfterEach] [sig-api-machinery] ResourceQuota + test/e2e/framework/framework.go:187 +Mar 7 02:33:16.403: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "resourcequota-4315" for this suite. 03/07/23 02:33:16.406 +{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a service. [Conformance]","completed":35,"skipped":461,"failed":0} +------------------------------ +• [SLOW TEST] [11.204 seconds] +[sig-api-machinery] ResourceQuota +test/e2e/apimachinery/framework.go:23 + should create a ResourceQuota and capture the life of a service. [Conformance] + test/e2e/apimachinery/resource_quota.go:90 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-api-machinery] ResourceQuota + test/e2e/framework/framework.go:186 + STEP: Creating a kubernetes client 03/07/23 02:33:05.219 + Mar 7 02:33:05.219: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 + STEP: Building a namespace api object, basename resourcequota 03/07/23 02:33:05.219 + STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 02:33:05.231 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 02:33:05.233 + [It] should create a ResourceQuota and capture the life of a service. [Conformance] + test/e2e/apimachinery/resource_quota.go:90 + STEP: Counting existing ResourceQuota 03/07/23 02:33:05.235 + STEP: Creating a ResourceQuota 03/07/23 02:33:10.237 + STEP: Ensuring resource quota status is calculated 03/07/23 02:33:10.242 + STEP: Creating a Service 03/07/23 02:33:12.246 + STEP: Creating a NodePort Service 03/07/23 02:33:12.264 + STEP: Not allowing a LoadBalancer Service with NodePort to be created that exceeds remaining quota 03/07/23 02:33:12.294 + STEP: Ensuring resource quota status captures service creation 03/07/23 02:33:12.335 + STEP: Deleting Services 03/07/23 02:33:14.339 + STEP: Ensuring resource quota status released usage 03/07/23 02:33:14.399 + [AfterEach] [sig-api-machinery] ResourceQuota + test/e2e/framework/framework.go:187 + Mar 7 02:33:16.403: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + STEP: Destroying namespace "resourcequota-4315" for this suite. 
03/07/23 02:33:16.406 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + should honor timeout [Conformance] + test/e2e/apimachinery/webhook.go:380 +[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/framework/framework.go:186 +STEP: Creating a kubernetes client 03/07/23 02:33:16.423 +Mar 7 02:33:16.423: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 +STEP: Building a namespace api object, basename webhook 03/07/23 02:33:16.424 +STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 02:33:16.436 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 02:33:16.438 +[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/apimachinery/webhook.go:89 +STEP: Setting up server cert 03/07/23 02:33:16.449 +STEP: Create role binding to let webhook read extension-apiserver-authentication 03/07/23 02:33:16.838 +STEP: Deploying the webhook pod 03/07/23 02:33:16.844 +STEP: Wait for the deployment to be ready 03/07/23 02:33:16.852 +Mar 7 02:33:16.865: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set +STEP: Deploying the webhook service 03/07/23 02:33:18.873 +STEP: Verifying the service has paired with the endpoint 03/07/23 02:33:18.887 +Mar 7 02:33:19.888: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 +[It] should honor timeout [Conformance] + test/e2e/apimachinery/webhook.go:380 +STEP: Setting timeout (1s) shorter than webhook latency (5s) 03/07/23 02:33:19.891 +STEP: Registering slow webhook via the AdmissionRegistration API 03/07/23 02:33:19.891 +STEP: Request fails when timeout (1s) is shorter than slow webhook latency (5s) 03/07/23 02:33:19.902 +STEP: Having no error when timeout is shorter than webhook latency and failure policy is ignore 03/07/23 02:33:20.909 +STEP: Registering slow webhook via the AdmissionRegistration API 03/07/23 02:33:20.909 +STEP: Having no error when timeout is longer than webhook latency 03/07/23 02:33:21.928 +STEP: Registering slow webhook via the AdmissionRegistration API 03/07/23 02:33:21.928 +STEP: Having no error when timeout is empty (defaulted to 10s in v1) 03/07/23 02:33:26.952 +STEP: Registering slow webhook via the AdmissionRegistration API 03/07/23 02:33:26.952 +[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/framework/framework.go:187 +Mar 7 02:33:32.019: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "webhook-2133" for this suite. 03/07/23 02:33:32.026 +STEP: Destroying namespace "webhook-2133-markers" for this suite. 
03/07/23 02:33:32.038 +[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/apimachinery/webhook.go:104 +{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","completed":36,"skipped":489,"failed":0} +------------------------------ +• [SLOW TEST] [15.681 seconds] +[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] +test/e2e/apimachinery/framework.go:23 + should honor timeout [Conformance] + test/e2e/apimachinery/webhook.go:380 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/framework/framework.go:186 + STEP: Creating a kubernetes client 03/07/23 02:33:16.423 + Mar 7 02:33:16.423: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 + STEP: Building a namespace api object, basename webhook 03/07/23 02:33:16.424 + STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 02:33:16.436 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 02:33:16.438 + [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/apimachinery/webhook.go:89 + STEP: Setting up server cert 03/07/23 02:33:16.449 + STEP: Create role binding to let webhook read extension-apiserver-authentication 03/07/23 02:33:16.838 + STEP: Deploying the webhook pod 03/07/23 02:33:16.844 + STEP: Wait for the deployment to be ready 03/07/23 02:33:16.852 + Mar 7 02:33:16.865: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set + STEP: Deploying the webhook service 03/07/23 02:33:18.873 + STEP: Verifying the service has paired with the endpoint 03/07/23 02:33:18.887 + Mar 7 02:33:19.888: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 + [It] should honor timeout [Conformance] + test/e2e/apimachinery/webhook.go:380 + STEP: Setting timeout (1s) shorter than webhook latency (5s) 03/07/23 02:33:19.891 + STEP: Registering slow webhook via the AdmissionRegistration API 03/07/23 02:33:19.891 + STEP: Request fails when timeout (1s) is shorter than slow webhook latency (5s) 03/07/23 02:33:19.902 + STEP: Having no error when timeout is shorter than webhook latency and failure policy is ignore 03/07/23 02:33:20.909 + STEP: Registering slow webhook via the AdmissionRegistration API 03/07/23 02:33:20.909 + STEP: Having no error when timeout is longer than webhook latency 03/07/23 02:33:21.928 + STEP: Registering slow webhook via the AdmissionRegistration API 03/07/23 02:33:21.928 + STEP: Having no error when timeout is empty (defaulted to 10s in v1) 03/07/23 02:33:26.952 + STEP: Registering slow webhook via the AdmissionRegistration API 03/07/23 02:33:26.952 + [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/framework/framework.go:187 + Mar 7 02:33:32.019: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + STEP: Destroying namespace "webhook-2133" for this suite. 03/07/23 02:33:32.026 + STEP: Destroying namespace "webhook-2133-markers" for this suite. 
03/07/23 02:33:32.038 + [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/apimachinery/webhook.go:104 + << End Captured GinkgoWriter Output +------------------------------ +SSS +------------------------------ +[sig-storage] EmptyDir volumes + pod should support shared volumes between containers [Conformance] + test/e2e/common/storage/empty_dir.go:226 +[BeforeEach] [sig-storage] EmptyDir volumes + test/e2e/framework/framework.go:186 +STEP: Creating a kubernetes client 03/07/23 02:33:32.105 +Mar 7 02:33:32.105: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 +STEP: Building a namespace api object, basename emptydir 03/07/23 02:33:32.107 +STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 02:33:32.129 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 02:33:32.134 +[It] pod should support shared volumes between containers [Conformance] + test/e2e/common/storage/empty_dir.go:226 +STEP: Creating Pod 03/07/23 02:33:32.138 +Mar 7 02:33:32.148: INFO: Waiting up to 5m0s for pod "pod-sharedvolume-887a3f3d-260f-4772-968e-2cc3d3f14949" in namespace "emptydir-5053" to be "running" +Mar 7 02:33:32.160: INFO: Pod "pod-sharedvolume-887a3f3d-260f-4772-968e-2cc3d3f14949": Phase="Pending", Reason="", readiness=false. Elapsed: 11.615632ms +Mar 7 02:33:34.164: INFO: Pod "pod-sharedvolume-887a3f3d-260f-4772-968e-2cc3d3f14949": Phase="Running", Reason="", readiness=false. Elapsed: 2.01573325s +Mar 7 02:33:34.164: INFO: Pod "pod-sharedvolume-887a3f3d-260f-4772-968e-2cc3d3f14949" satisfied condition "running" +STEP: Reading file content from the nginx-container 03/07/23 02:33:34.164 +Mar 7 02:33:34.164: INFO: ExecWithOptions {Command:[/bin/sh -c cat /usr/share/volumeshare/shareddata.txt] Namespace:emptydir-5053 PodName:pod-sharedvolume-887a3f3d-260f-4772-968e-2cc3d3f14949 ContainerName:busybox-main-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} +Mar 7 02:33:34.164: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 +Mar 7 02:33:34.164: INFO: ExecWithOptions: Clientset creation +Mar 7 02:33:34.164: INFO: ExecWithOptions: execute(POST https://10.96.0.1:443/api/v1/namespaces/emptydir-5053/pods/pod-sharedvolume-887a3f3d-260f-4772-968e-2cc3d3f14949/exec?command=%2Fbin%2Fsh&command=-c&command=cat+%2Fusr%2Fshare%2Fvolumeshare%2Fshareddata.txt&container=busybox-main-container&container=busybox-main-container&stderr=true&stdout=true) +Mar 7 02:33:34.224: INFO: Exec stderr: "" +[AfterEach] [sig-storage] EmptyDir volumes + test/e2e/framework/framework.go:187 +Mar 7 02:33:34.224: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "emptydir-5053" for this suite. 
03/07/23 02:33:34.227 +{"msg":"PASSED [sig-storage] EmptyDir volumes pod should support shared volumes between containers [Conformance]","completed":37,"skipped":492,"failed":0} +------------------------------ +• [2.128 seconds] +[sig-storage] EmptyDir volumes +test/e2e/common/storage/framework.go:23 + pod should support shared volumes between containers [Conformance] + test/e2e/common/storage/empty_dir.go:226 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-storage] EmptyDir volumes + test/e2e/framework/framework.go:186 + STEP: Creating a kubernetes client 03/07/23 02:33:32.105 + Mar 7 02:33:32.105: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 + STEP: Building a namespace api object, basename emptydir 03/07/23 02:33:32.107 + STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 02:33:32.129 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 02:33:32.134 + [It] pod should support shared volumes between containers [Conformance] + test/e2e/common/storage/empty_dir.go:226 + STEP: Creating Pod 03/07/23 02:33:32.138 + Mar 7 02:33:32.148: INFO: Waiting up to 5m0s for pod "pod-sharedvolume-887a3f3d-260f-4772-968e-2cc3d3f14949" in namespace "emptydir-5053" to be "running" + Mar 7 02:33:32.160: INFO: Pod "pod-sharedvolume-887a3f3d-260f-4772-968e-2cc3d3f14949": Phase="Pending", Reason="", readiness=false. Elapsed: 11.615632ms + Mar 7 02:33:34.164: INFO: Pod "pod-sharedvolume-887a3f3d-260f-4772-968e-2cc3d3f14949": Phase="Running", Reason="", readiness=false. Elapsed: 2.01573325s + Mar 7 02:33:34.164: INFO: Pod "pod-sharedvolume-887a3f3d-260f-4772-968e-2cc3d3f14949" satisfied condition "running" + STEP: Reading file content from the nginx-container 03/07/23 02:33:34.164 + Mar 7 02:33:34.164: INFO: ExecWithOptions {Command:[/bin/sh -c cat /usr/share/volumeshare/shareddata.txt] Namespace:emptydir-5053 PodName:pod-sharedvolume-887a3f3d-260f-4772-968e-2cc3d3f14949 ContainerName:busybox-main-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} + Mar 7 02:33:34.164: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 + Mar 7 02:33:34.164: INFO: ExecWithOptions: Clientset creation + Mar 7 02:33:34.164: INFO: ExecWithOptions: execute(POST https://10.96.0.1:443/api/v1/namespaces/emptydir-5053/pods/pod-sharedvolume-887a3f3d-260f-4772-968e-2cc3d3f14949/exec?command=%2Fbin%2Fsh&command=-c&command=cat+%2Fusr%2Fshare%2Fvolumeshare%2Fshareddata.txt&container=busybox-main-container&container=busybox-main-container&stderr=true&stdout=true) + Mar 7 02:33:34.224: INFO: Exec stderr: "" + [AfterEach] [sig-storage] EmptyDir volumes + test/e2e/framework/framework.go:187 + Mar 7 02:33:34.224: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + STEP: Destroying namespace "emptydir-5053" for this suite. 
03/07/23 02:33:34.227 + << End Captured GinkgoWriter Output +------------------------------ +[sig-storage] Downward API volume + should provide container's memory limit [NodeConformance] [Conformance] + test/e2e/common/storage/downwardapi_volume.go:206 +[BeforeEach] [sig-storage] Downward API volume + test/e2e/framework/framework.go:186 +STEP: Creating a kubernetes client 03/07/23 02:33:34.233 +Mar 7 02:33:34.233: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 +STEP: Building a namespace api object, basename downward-api 03/07/23 02:33:34.234 +STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 02:33:34.245 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 02:33:34.248 +[BeforeEach] [sig-storage] Downward API volume + test/e2e/common/storage/downwardapi_volume.go:43 +[It] should provide container's memory limit [NodeConformance] [Conformance] + test/e2e/common/storage/downwardapi_volume.go:206 +STEP: Creating a pod to test downward API volume plugin 03/07/23 02:33:34.251 +Mar 7 02:33:34.257: INFO: Waiting up to 5m0s for pod "downwardapi-volume-a161e4f6-f8dd-43b0-8aac-d9d83b7ad801" in namespace "downward-api-87" to be "Succeeded or Failed" +Mar 7 02:33:34.260: INFO: Pod "downwardapi-volume-a161e4f6-f8dd-43b0-8aac-d9d83b7ad801": Phase="Pending", Reason="", readiness=false. Elapsed: 3.164762ms +Mar 7 02:33:36.263: INFO: Pod "downwardapi-volume-a161e4f6-f8dd-43b0-8aac-d9d83b7ad801": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006298109s +Mar 7 02:33:38.264: INFO: Pod "downwardapi-volume-a161e4f6-f8dd-43b0-8aac-d9d83b7ad801": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.007047915s +STEP: Saw pod success 03/07/23 02:33:38.264 +Mar 7 02:33:38.264: INFO: Pod "downwardapi-volume-a161e4f6-f8dd-43b0-8aac-d9d83b7ad801" satisfied condition "Succeeded or Failed" +Mar 7 02:33:38.267: INFO: Trying to get logs from node node-2 pod downwardapi-volume-a161e4f6-f8dd-43b0-8aac-d9d83b7ad801 container client-container: +STEP: delete the pod 03/07/23 02:33:38.271 +Mar 7 02:33:38.301: INFO: Waiting for pod downwardapi-volume-a161e4f6-f8dd-43b0-8aac-d9d83b7ad801 to disappear +Mar 7 02:33:38.303: INFO: Pod downwardapi-volume-a161e4f6-f8dd-43b0-8aac-d9d83b7ad801 no longer exists +[AfterEach] [sig-storage] Downward API volume + test/e2e/framework/framework.go:187 +Mar 7 02:33:38.304: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "downward-api-87" for this suite. 
03/07/23 02:33:38.307 +{"msg":"PASSED [sig-storage] Downward API volume should provide container's memory limit [NodeConformance] [Conformance]","completed":38,"skipped":492,"failed":0} +------------------------------ +• [4.079 seconds] +[sig-storage] Downward API volume +test/e2e/common/storage/framework.go:23 + should provide container's memory limit [NodeConformance] [Conformance] + test/e2e/common/storage/downwardapi_volume.go:206 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-storage] Downward API volume + test/e2e/framework/framework.go:186 + STEP: Creating a kubernetes client 03/07/23 02:33:34.233 + Mar 7 02:33:34.233: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 + STEP: Building a namespace api object, basename downward-api 03/07/23 02:33:34.234 + STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 02:33:34.245 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 02:33:34.248 + [BeforeEach] [sig-storage] Downward API volume + test/e2e/common/storage/downwardapi_volume.go:43 + [It] should provide container's memory limit [NodeConformance] [Conformance] + test/e2e/common/storage/downwardapi_volume.go:206 + STEP: Creating a pod to test downward API volume plugin 03/07/23 02:33:34.251 + Mar 7 02:33:34.257: INFO: Waiting up to 5m0s for pod "downwardapi-volume-a161e4f6-f8dd-43b0-8aac-d9d83b7ad801" in namespace "downward-api-87" to be "Succeeded or Failed" + Mar 7 02:33:34.260: INFO: Pod "downwardapi-volume-a161e4f6-f8dd-43b0-8aac-d9d83b7ad801": Phase="Pending", Reason="", readiness=false. Elapsed: 3.164762ms + Mar 7 02:33:36.263: INFO: Pod "downwardapi-volume-a161e4f6-f8dd-43b0-8aac-d9d83b7ad801": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006298109s + Mar 7 02:33:38.264: INFO: Pod "downwardapi-volume-a161e4f6-f8dd-43b0-8aac-d9d83b7ad801": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.007047915s + STEP: Saw pod success 03/07/23 02:33:38.264 + Mar 7 02:33:38.264: INFO: Pod "downwardapi-volume-a161e4f6-f8dd-43b0-8aac-d9d83b7ad801" satisfied condition "Succeeded or Failed" + Mar 7 02:33:38.267: INFO: Trying to get logs from node node-2 pod downwardapi-volume-a161e4f6-f8dd-43b0-8aac-d9d83b7ad801 container client-container: + STEP: delete the pod 03/07/23 02:33:38.271 + Mar 7 02:33:38.301: INFO: Waiting for pod downwardapi-volume-a161e4f6-f8dd-43b0-8aac-d9d83b7ad801 to disappear + Mar 7 02:33:38.303: INFO: Pod downwardapi-volume-a161e4f6-f8dd-43b0-8aac-d9d83b7ad801 no longer exists + [AfterEach] [sig-storage] Downward API volume + test/e2e/framework/framework.go:187 + Mar 7 02:33:38.304: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + STEP: Destroying namespace "downward-api-87" for this suite. 
03/07/23 02:33:38.307 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-cli] Kubectl client Kubectl expose + should create services for rc [Conformance] + test/e2e/kubectl/kubectl.go:1413 +[BeforeEach] [sig-cli] Kubectl client + test/e2e/framework/framework.go:186 +STEP: Creating a kubernetes client 03/07/23 02:33:38.313 +Mar 7 02:33:38.313: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 +STEP: Building a namespace api object, basename kubectl 03/07/23 02:33:38.314 +STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 02:33:38.332 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 02:33:38.333 +[BeforeEach] [sig-cli] Kubectl client + test/e2e/kubectl/kubectl.go:272 +[It] should create services for rc [Conformance] + test/e2e/kubectl/kubectl.go:1413 +STEP: creating Agnhost RC 03/07/23 02:33:38.335 +Mar 7 02:33:38.335: INFO: namespace kubectl-6324 +Mar 7 02:33:38.335: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1093879902 --namespace=kubectl-6324 create -f -' +Mar 7 02:33:39.546: INFO: stderr: "" +Mar 7 02:33:39.546: INFO: stdout: "replicationcontroller/agnhost-primary created\n" +STEP: Waiting for Agnhost primary to start. 03/07/23 02:33:39.546 +Mar 7 02:33:40.549: INFO: Selector matched 1 pods for map[app:agnhost] +Mar 7 02:33:40.549: INFO: Found 0 / 1 +Mar 7 02:33:41.549: INFO: Selector matched 1 pods for map[app:agnhost] +Mar 7 02:33:41.549: INFO: Found 1 / 1 +Mar 7 02:33:41.549: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 +Mar 7 02:33:41.552: INFO: Selector matched 1 pods for map[app:agnhost] +Mar 7 02:33:41.552: INFO: ForEach: Found 1 pods from the filter. Now looping through them. +Mar 7 02:33:41.552: INFO: wait on agnhost-primary startup in kubectl-6324 +Mar 7 02:33:41.552: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1093879902 --namespace=kubectl-6324 logs agnhost-primary-bzwrh agnhost-primary' +Mar 7 02:33:41.681: INFO: stderr: "" +Mar 7 02:33:41.681: INFO: stdout: "Paused\n" +STEP: exposing RC 03/07/23 02:33:41.681 +Mar 7 02:33:41.681: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1093879902 --namespace=kubectl-6324 expose rc agnhost-primary --name=rm2 --port=1234 --target-port=6379' +Mar 7 02:33:41.876: INFO: stderr: "" +Mar 7 02:33:41.876: INFO: stdout: "service/rm2 exposed\n" +Mar 7 02:33:41.881: INFO: Service rm2 in namespace kubectl-6324 found. +STEP: exposing service 03/07/23 02:33:43.886 +Mar 7 02:33:43.886: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1093879902 --namespace=kubectl-6324 expose service rm2 --name=rm3 --port=2345 --target-port=6379' +Mar 7 02:33:44.070: INFO: stderr: "" +Mar 7 02:33:44.070: INFO: stdout: "service/rm3 exposed\n" +Mar 7 02:33:44.074: INFO: Service rm3 in namespace kubectl-6324 found. +[AfterEach] [sig-cli] Kubectl client + test/e2e/framework/framework.go:187 +Mar 7 02:33:46.081: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "kubectl-6324" for this suite. 
03/07/23 02:33:46.084 +{"msg":"PASSED [sig-cli] Kubectl client Kubectl expose should create services for rc [Conformance]","completed":39,"skipped":511,"failed":0} +------------------------------ +• [SLOW TEST] [7.776 seconds] +[sig-cli] Kubectl client +test/e2e/kubectl/framework.go:23 + Kubectl expose + test/e2e/kubectl/kubectl.go:1407 + should create services for rc [Conformance] + test/e2e/kubectl/kubectl.go:1413 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-cli] Kubectl client + test/e2e/framework/framework.go:186 + STEP: Creating a kubernetes client 03/07/23 02:33:38.313 + Mar 7 02:33:38.313: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 + STEP: Building a namespace api object, basename kubectl 03/07/23 02:33:38.314 + STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 02:33:38.332 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 02:33:38.333 + [BeforeEach] [sig-cli] Kubectl client + test/e2e/kubectl/kubectl.go:272 + [It] should create services for rc [Conformance] + test/e2e/kubectl/kubectl.go:1413 + STEP: creating Agnhost RC 03/07/23 02:33:38.335 + Mar 7 02:33:38.335: INFO: namespace kubectl-6324 + Mar 7 02:33:38.335: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1093879902 --namespace=kubectl-6324 create -f -' + Mar 7 02:33:39.546: INFO: stderr: "" + Mar 7 02:33:39.546: INFO: stdout: "replicationcontroller/agnhost-primary created\n" + STEP: Waiting for Agnhost primary to start. 03/07/23 02:33:39.546 + Mar 7 02:33:40.549: INFO: Selector matched 1 pods for map[app:agnhost] + Mar 7 02:33:40.549: INFO: Found 0 / 1 + Mar 7 02:33:41.549: INFO: Selector matched 1 pods for map[app:agnhost] + Mar 7 02:33:41.549: INFO: Found 1 / 1 + Mar 7 02:33:41.549: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 + Mar 7 02:33:41.552: INFO: Selector matched 1 pods for map[app:agnhost] + Mar 7 02:33:41.552: INFO: ForEach: Found 1 pods from the filter. Now looping through them. + Mar 7 02:33:41.552: INFO: wait on agnhost-primary startup in kubectl-6324 + Mar 7 02:33:41.552: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1093879902 --namespace=kubectl-6324 logs agnhost-primary-bzwrh agnhost-primary' + Mar 7 02:33:41.681: INFO: stderr: "" + Mar 7 02:33:41.681: INFO: stdout: "Paused\n" + STEP: exposing RC 03/07/23 02:33:41.681 + Mar 7 02:33:41.681: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1093879902 --namespace=kubectl-6324 expose rc agnhost-primary --name=rm2 --port=1234 --target-port=6379' + Mar 7 02:33:41.876: INFO: stderr: "" + Mar 7 02:33:41.876: INFO: stdout: "service/rm2 exposed\n" + Mar 7 02:33:41.881: INFO: Service rm2 in namespace kubectl-6324 found. + STEP: exposing service 03/07/23 02:33:43.886 + Mar 7 02:33:43.886: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1093879902 --namespace=kubectl-6324 expose service rm2 --name=rm3 --port=2345 --target-port=6379' + Mar 7 02:33:44.070: INFO: stderr: "" + Mar 7 02:33:44.070: INFO: stdout: "service/rm3 exposed\n" + Mar 7 02:33:44.074: INFO: Service rm3 in namespace kubectl-6324 found. + [AfterEach] [sig-cli] Kubectl client + test/e2e/framework/framework.go:187 + Mar 7 02:33:46.081: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + STEP: Destroying namespace "kubectl-6324" for this suite. 
03/07/23 02:33:46.084 + << End Captured GinkgoWriter Output +------------------------------ +SSS +------------------------------ +[sig-apps] Job + should apply changes to a job status [Conformance] + test/e2e/apps/job.go:464 +[BeforeEach] [sig-apps] Job + test/e2e/framework/framework.go:186 +STEP: Creating a kubernetes client 03/07/23 02:33:46.089 +Mar 7 02:33:46.089: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 +STEP: Building a namespace api object, basename job 03/07/23 02:33:46.09 +STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 02:33:46.101 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 02:33:46.104 +[It] should apply changes to a job status [Conformance] + test/e2e/apps/job.go:464 +STEP: Creating a job 03/07/23 02:33:46.108 +STEP: Ensure pods equal to paralellism count is attached to the job 03/07/23 02:33:46.112 +STEP: patching /status 03/07/23 02:33:50.116 +STEP: updating /status 03/07/23 02:33:50.12 +STEP: get /status 03/07/23 02:33:50.13 +[AfterEach] [sig-apps] Job + test/e2e/framework/framework.go:187 +Mar 7 02:33:50.134: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "job-9430" for this suite. 03/07/23 02:33:50.142 +{"msg":"PASSED [sig-apps] Job should apply changes to a job status [Conformance]","completed":40,"skipped":514,"failed":0} +------------------------------ +• [4.063 seconds] +[sig-apps] Job +test/e2e/apps/framework.go:23 + should apply changes to a job status [Conformance] + test/e2e/apps/job.go:464 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-apps] Job + test/e2e/framework/framework.go:186 + STEP: Creating a kubernetes client 03/07/23 02:33:46.089 + Mar 7 02:33:46.089: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 + STEP: Building a namespace api object, basename job 03/07/23 02:33:46.09 + STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 02:33:46.101 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 02:33:46.104 + [It] should apply changes to a job status [Conformance] + test/e2e/apps/job.go:464 + STEP: Creating a job 03/07/23 02:33:46.108 + STEP: Ensure pods equal to paralellism count is attached to the job 03/07/23 02:33:46.112 + STEP: patching /status 03/07/23 02:33:50.116 + STEP: updating /status 03/07/23 02:33:50.12 + STEP: get /status 03/07/23 02:33:50.13 + [AfterEach] [sig-apps] Job + test/e2e/framework/framework.go:187 + Mar 7 02:33:50.134: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + STEP: Destroying namespace "job-9430" for this suite. 
03/07/23 02:33:50.142 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSS +------------------------------ +[sig-apps] Daemon set [Serial] + should update pod when spec was updated and update strategy is RollingUpdate [Conformance] + test/e2e/apps/daemon_set.go:373 +[BeforeEach] [sig-apps] Daemon set [Serial] + test/e2e/framework/framework.go:186 +STEP: Creating a kubernetes client 03/07/23 02:33:50.152 +Mar 7 02:33:50.152: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 +STEP: Building a namespace api object, basename daemonsets 03/07/23 02:33:50.153 +STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 02:33:50.166 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 02:33:50.168 +[BeforeEach] [sig-apps] Daemon set [Serial] + test/e2e/apps/daemon_set.go:145 +[It] should update pod when spec was updated and update strategy is RollingUpdate [Conformance] + test/e2e/apps/daemon_set.go:373 +Mar 7 02:33:50.184: INFO: Creating simple daemon set daemon-set +STEP: Check that daemon pods launch on every node of the cluster. 03/07/23 02:33:50.188 +Mar 7 02:33:50.193: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 +Mar 7 02:33:50.193: INFO: Node bootstrap is running 0 daemon pod, expected 1 +Mar 7 02:33:51.205: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 1 +Mar 7 02:33:51.205: INFO: Node node-1 is running 0 daemon pod, expected 1 +Mar 7 02:33:52.200: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 3 +Mar 7 02:33:52.200: INFO: Number of running nodes: 3, number of available pods: 3 in daemonset daemon-set +STEP: Update daemon pods image. 03/07/23 02:33:52.208 +STEP: Check that daemon pods images are updated. 03/07/23 02:33:52.219 +Mar 7 02:33:52.222: INFO: Wrong image for pod: daemon-set-c28v6. Expected: registry.k8s.io/e2e-test-images/agnhost:2.40, got: registry.k8s.io/e2e-test-images/httpd:2.4.38-2. +Mar 7 02:33:52.222: INFO: Wrong image for pod: daemon-set-gmlfp. Expected: registry.k8s.io/e2e-test-images/agnhost:2.40, got: registry.k8s.io/e2e-test-images/httpd:2.4.38-2. +Mar 7 02:33:52.222: INFO: Wrong image for pod: daemon-set-xhrq4. Expected: registry.k8s.io/e2e-test-images/agnhost:2.40, got: registry.k8s.io/e2e-test-images/httpd:2.4.38-2. +Mar 7 02:33:53.229: INFO: Wrong image for pod: daemon-set-c28v6. Expected: registry.k8s.io/e2e-test-images/agnhost:2.40, got: registry.k8s.io/e2e-test-images/httpd:2.4.38-2. +Mar 7 02:33:53.229: INFO: Wrong image for pod: daemon-set-gmlfp. Expected: registry.k8s.io/e2e-test-images/agnhost:2.40, got: registry.k8s.io/e2e-test-images/httpd:2.4.38-2. +Mar 7 02:33:54.228: INFO: Wrong image for pod: daemon-set-c28v6. Expected: registry.k8s.io/e2e-test-images/agnhost:2.40, got: registry.k8s.io/e2e-test-images/httpd:2.4.38-2. +Mar 7 02:33:54.228: INFO: Wrong image for pod: daemon-set-gmlfp. Expected: registry.k8s.io/e2e-test-images/agnhost:2.40, got: registry.k8s.io/e2e-test-images/httpd:2.4.38-2. +Mar 7 02:33:55.227: INFO: Pod daemon-set-5txd5 is not available +Mar 7 02:33:55.227: INFO: Wrong image for pod: daemon-set-c28v6. Expected: registry.k8s.io/e2e-test-images/agnhost:2.40, got: registry.k8s.io/e2e-test-images/httpd:2.4.38-2. +Mar 7 02:33:55.227: INFO: Wrong image for pod: daemon-set-gmlfp. Expected: registry.k8s.io/e2e-test-images/agnhost:2.40, got: registry.k8s.io/e2e-test-images/httpd:2.4.38-2. 
+Mar 7 02:33:56.229: INFO: Pod daemon-set-4j2d7 is not available +Mar 7 02:33:56.229: INFO: Wrong image for pod: daemon-set-c28v6. Expected: registry.k8s.io/e2e-test-images/agnhost:2.40, got: registry.k8s.io/e2e-test-images/httpd:2.4.38-2. +Mar 7 02:33:58.228: INFO: Pod daemon-set-fqccp is not available +STEP: Check that daemon pods are still running on every node of the cluster. 03/07/23 02:33:58.231 +Mar 7 02:33:58.238: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 2 +Mar 7 02:33:58.238: INFO: Node node-2 is running 0 daemon pod, expected 1 +Mar 7 02:33:59.244: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 3 +Mar 7 02:33:59.244: INFO: Number of running nodes: 3, number of available pods: 3 in daemonset daemon-set +[AfterEach] [sig-apps] Daemon set [Serial] + test/e2e/apps/daemon_set.go:110 +STEP: Deleting DaemonSet "daemon-set" 03/07/23 02:33:59.256 +STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-7513, will wait for the garbage collector to delete the pods 03/07/23 02:33:59.256 +Mar 7 02:33:59.313: INFO: Deleting DaemonSet.extensions daemon-set took: 3.939383ms +Mar 7 02:33:59.413: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.352986ms +Mar 7 02:34:02.116: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 +Mar 7 02:34:02.116: INFO: Number of running nodes: 0, number of available pods: 0 in daemonset daemon-set +Mar 7 02:34:02.118: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"resourceVersion":"37127"},"items":null} + +Mar 7 02:34:02.120: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"37127"},"items":null} + +[AfterEach] [sig-apps] Daemon set [Serial] + test/e2e/framework/framework.go:187 +Mar 7 02:34:02.129: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "daemonsets-7513" for this suite. 03/07/23 02:34:02.132 +{"msg":"PASSED [sig-apps] Daemon set [Serial] should update pod when spec was updated and update strategy is RollingUpdate [Conformance]","completed":41,"skipped":520,"failed":0} +------------------------------ +• [SLOW TEST] [11.985 seconds] +[sig-apps] Daemon set [Serial] +test/e2e/apps/framework.go:23 + should update pod when spec was updated and update strategy is RollingUpdate [Conformance] + test/e2e/apps/daemon_set.go:373 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-apps] Daemon set [Serial] + test/e2e/framework/framework.go:186 + STEP: Creating a kubernetes client 03/07/23 02:33:50.152 + Mar 7 02:33:50.152: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 + STEP: Building a namespace api object, basename daemonsets 03/07/23 02:33:50.153 + STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 02:33:50.166 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 02:33:50.168 + [BeforeEach] [sig-apps] Daemon set [Serial] + test/e2e/apps/daemon_set.go:145 + [It] should update pod when spec was updated and update strategy is RollingUpdate [Conformance] + test/e2e/apps/daemon_set.go:373 + Mar 7 02:33:50.184: INFO: Creating simple daemon set daemon-set + STEP: Check that daemon pods launch on every node of the cluster. 
03/07/23 02:33:50.188 + Mar 7 02:33:50.193: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 + Mar 7 02:33:50.193: INFO: Node bootstrap is running 0 daemon pod, expected 1 + Mar 7 02:33:51.205: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 1 + Mar 7 02:33:51.205: INFO: Node node-1 is running 0 daemon pod, expected 1 + Mar 7 02:33:52.200: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 3 + Mar 7 02:33:52.200: INFO: Number of running nodes: 3, number of available pods: 3 in daemonset daemon-set + STEP: Update daemon pods image. 03/07/23 02:33:52.208 + STEP: Check that daemon pods images are updated. 03/07/23 02:33:52.219 + Mar 7 02:33:52.222: INFO: Wrong image for pod: daemon-set-c28v6. Expected: registry.k8s.io/e2e-test-images/agnhost:2.40, got: registry.k8s.io/e2e-test-images/httpd:2.4.38-2. + Mar 7 02:33:52.222: INFO: Wrong image for pod: daemon-set-gmlfp. Expected: registry.k8s.io/e2e-test-images/agnhost:2.40, got: registry.k8s.io/e2e-test-images/httpd:2.4.38-2. + Mar 7 02:33:52.222: INFO: Wrong image for pod: daemon-set-xhrq4. Expected: registry.k8s.io/e2e-test-images/agnhost:2.40, got: registry.k8s.io/e2e-test-images/httpd:2.4.38-2. + Mar 7 02:33:53.229: INFO: Wrong image for pod: daemon-set-c28v6. Expected: registry.k8s.io/e2e-test-images/agnhost:2.40, got: registry.k8s.io/e2e-test-images/httpd:2.4.38-2. + Mar 7 02:33:53.229: INFO: Wrong image for pod: daemon-set-gmlfp. Expected: registry.k8s.io/e2e-test-images/agnhost:2.40, got: registry.k8s.io/e2e-test-images/httpd:2.4.38-2. + Mar 7 02:33:54.228: INFO: Wrong image for pod: daemon-set-c28v6. Expected: registry.k8s.io/e2e-test-images/agnhost:2.40, got: registry.k8s.io/e2e-test-images/httpd:2.4.38-2. + Mar 7 02:33:54.228: INFO: Wrong image for pod: daemon-set-gmlfp. Expected: registry.k8s.io/e2e-test-images/agnhost:2.40, got: registry.k8s.io/e2e-test-images/httpd:2.4.38-2. + Mar 7 02:33:55.227: INFO: Pod daemon-set-5txd5 is not available + Mar 7 02:33:55.227: INFO: Wrong image for pod: daemon-set-c28v6. Expected: registry.k8s.io/e2e-test-images/agnhost:2.40, got: registry.k8s.io/e2e-test-images/httpd:2.4.38-2. + Mar 7 02:33:55.227: INFO: Wrong image for pod: daemon-set-gmlfp. Expected: registry.k8s.io/e2e-test-images/agnhost:2.40, got: registry.k8s.io/e2e-test-images/httpd:2.4.38-2. + Mar 7 02:33:56.229: INFO: Pod daemon-set-4j2d7 is not available + Mar 7 02:33:56.229: INFO: Wrong image for pod: daemon-set-c28v6. Expected: registry.k8s.io/e2e-test-images/agnhost:2.40, got: registry.k8s.io/e2e-test-images/httpd:2.4.38-2. + Mar 7 02:33:58.228: INFO: Pod daemon-set-fqccp is not available + STEP: Check that daemon pods are still running on every node of the cluster. 
03/07/23 02:33:58.231 + Mar 7 02:33:58.238: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 2 + Mar 7 02:33:58.238: INFO: Node node-2 is running 0 daemon pod, expected 1 + Mar 7 02:33:59.244: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 3 + Mar 7 02:33:59.244: INFO: Number of running nodes: 3, number of available pods: 3 in daemonset daemon-set + [AfterEach] [sig-apps] Daemon set [Serial] + test/e2e/apps/daemon_set.go:110 + STEP: Deleting DaemonSet "daemon-set" 03/07/23 02:33:59.256 + STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-7513, will wait for the garbage collector to delete the pods 03/07/23 02:33:59.256 + Mar 7 02:33:59.313: INFO: Deleting DaemonSet.extensions daemon-set took: 3.939383ms + Mar 7 02:33:59.413: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.352986ms + Mar 7 02:34:02.116: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 + Mar 7 02:34:02.116: INFO: Number of running nodes: 0, number of available pods: 0 in daemonset daemon-set + Mar 7 02:34:02.118: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"resourceVersion":"37127"},"items":null} + + Mar 7 02:34:02.120: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"37127"},"items":null} + + [AfterEach] [sig-apps] Daemon set [Serial] + test/e2e/framework/framework.go:187 + Mar 7 02:34:02.129: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + STEP: Destroying namespace "daemonsets-7513" for this suite. 03/07/23 02:34:02.132 + << End Captured GinkgoWriter Output +------------------------------ +[sig-storage] Projected secret + should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/common/storage/projected_secret.go:55 +[BeforeEach] [sig-storage] Projected secret + test/e2e/framework/framework.go:186 +STEP: Creating a kubernetes client 03/07/23 02:34:02.138 +Mar 7 02:34:02.138: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 +STEP: Building a namespace api object, basename projected 03/07/23 02:34:02.139 +STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 02:34:02.152 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 02:34:02.154 +[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/common/storage/projected_secret.go:55 +STEP: Creating projection with secret that has name projected-secret-test-6470e42e-13b4-450a-8936-aa4a003d9b5f 03/07/23 02:34:02.156 +STEP: Creating a pod to test consume secrets 03/07/23 02:34:02.159 +Mar 7 02:34:02.165: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-5c4b272a-af1d-4b6a-ad5f-f85e116882da" in namespace "projected-5908" to be "Succeeded or Failed" +Mar 7 02:34:02.169: INFO: Pod "pod-projected-secrets-5c4b272a-af1d-4b6a-ad5f-f85e116882da": Phase="Pending", Reason="", readiness=false. Elapsed: 4.133603ms +Mar 7 02:34:04.173: INFO: Pod "pod-projected-secrets-5c4b272a-af1d-4b6a-ad5f-f85e116882da": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007949733s +Mar 7 02:34:06.173: INFO: Pod "pod-projected-secrets-5c4b272a-af1d-4b6a-ad5f-f85e116882da": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.007471729s +STEP: Saw pod success 03/07/23 02:34:06.173 +Mar 7 02:34:06.173: INFO: Pod "pod-projected-secrets-5c4b272a-af1d-4b6a-ad5f-f85e116882da" satisfied condition "Succeeded or Failed" +Mar 7 02:34:06.175: INFO: Trying to get logs from node node-2 pod pod-projected-secrets-5c4b272a-af1d-4b6a-ad5f-f85e116882da container projected-secret-volume-test: +STEP: delete the pod 03/07/23 02:34:06.179 +Mar 7 02:34:06.191: INFO: Waiting for pod pod-projected-secrets-5c4b272a-af1d-4b6a-ad5f-f85e116882da to disappear +Mar 7 02:34:06.197: INFO: Pod pod-projected-secrets-5c4b272a-af1d-4b6a-ad5f-f85e116882da no longer exists +[AfterEach] [sig-storage] Projected secret + test/e2e/framework/framework.go:187 +Mar 7 02:34:06.197: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "projected-5908" for this suite. 03/07/23 02:34:06.201 +{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","completed":42,"skipped":520,"failed":0} +------------------------------ +• [4.068 seconds] +[sig-storage] Projected secret +test/e2e/common/storage/framework.go:23 + should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/common/storage/projected_secret.go:55 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-storage] Projected secret + test/e2e/framework/framework.go:186 + STEP: Creating a kubernetes client 03/07/23 02:34:02.138 + Mar 7 02:34:02.138: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 + STEP: Building a namespace api object, basename projected 03/07/23 02:34:02.139 + STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 02:34:02.152 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 02:34:02.154 + [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/common/storage/projected_secret.go:55 + STEP: Creating projection with secret that has name projected-secret-test-6470e42e-13b4-450a-8936-aa4a003d9b5f 03/07/23 02:34:02.156 + STEP: Creating a pod to test consume secrets 03/07/23 02:34:02.159 + Mar 7 02:34:02.165: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-5c4b272a-af1d-4b6a-ad5f-f85e116882da" in namespace "projected-5908" to be "Succeeded or Failed" + Mar 7 02:34:02.169: INFO: Pod "pod-projected-secrets-5c4b272a-af1d-4b6a-ad5f-f85e116882da": Phase="Pending", Reason="", readiness=false. Elapsed: 4.133603ms + Mar 7 02:34:04.173: INFO: Pod "pod-projected-secrets-5c4b272a-af1d-4b6a-ad5f-f85e116882da": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007949733s + Mar 7 02:34:06.173: INFO: Pod "pod-projected-secrets-5c4b272a-af1d-4b6a-ad5f-f85e116882da": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.007471729s + STEP: Saw pod success 03/07/23 02:34:06.173 + Mar 7 02:34:06.173: INFO: Pod "pod-projected-secrets-5c4b272a-af1d-4b6a-ad5f-f85e116882da" satisfied condition "Succeeded or Failed" + Mar 7 02:34:06.175: INFO: Trying to get logs from node node-2 pod pod-projected-secrets-5c4b272a-af1d-4b6a-ad5f-f85e116882da container projected-secret-volume-test: + STEP: delete the pod 03/07/23 02:34:06.179 + Mar 7 02:34:06.191: INFO: Waiting for pod pod-projected-secrets-5c4b272a-af1d-4b6a-ad5f-f85e116882da to disappear + Mar 7 02:34:06.197: INFO: Pod pod-projected-secrets-5c4b272a-af1d-4b6a-ad5f-f85e116882da no longer exists + [AfterEach] [sig-storage] Projected secret + test/e2e/framework/framework.go:187 + Mar 7 02:34:06.197: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + STEP: Destroying namespace "projected-5908" for this suite. 03/07/23 02:34:06.201 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-cli] Kubectl client Kubectl replace + should update a single-container pod's image [Conformance] + test/e2e/kubectl/kubectl.go:1745 +[BeforeEach] [sig-cli] Kubectl client + test/e2e/framework/framework.go:186 +STEP: Creating a kubernetes client 03/07/23 02:34:06.212 +Mar 7 02:34:06.213: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 +STEP: Building a namespace api object, basename kubectl 03/07/23 02:34:06.214 +STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 02:34:06.228 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 02:34:06.23 +[BeforeEach] [sig-cli] Kubectl client + test/e2e/kubectl/kubectl.go:272 +[BeforeEach] Kubectl replace + test/e2e/kubectl/kubectl.go:1732 +[It] should update a single-container pod's image [Conformance] + test/e2e/kubectl/kubectl.go:1745 +STEP: running the image registry.k8s.io/e2e-test-images/httpd:2.4.38-2 03/07/23 02:34:06.232 +Mar 7 02:34:06.232: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1093879902 --namespace=kubectl-1755 run e2e-test-httpd-pod --image=registry.k8s.io/e2e-test-images/httpd:2.4.38-2 --pod-running-timeout=2m0s --labels=run=e2e-test-httpd-pod' +Mar 7 02:34:06.324: INFO: stderr: "" +Mar 7 02:34:06.324: INFO: stdout: "pod/e2e-test-httpd-pod created\n" +STEP: verifying the pod e2e-test-httpd-pod is running 03/07/23 02:34:06.324 +STEP: verifying the pod e2e-test-httpd-pod was created 03/07/23 02:34:11.376 +Mar 7 02:34:11.376: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1093879902 --namespace=kubectl-1755 get pod e2e-test-httpd-pod -o json' +Mar 7 02:34:11.578: INFO: stderr: "" +Mar 7 02:34:11.578: INFO: stdout: "{\n \"apiVersion\": \"v1\",\n \"kind\": \"Pod\",\n \"metadata\": {\n \"annotations\": {\n \"cni.projectcalico.org/containerID\": \"57a48a3d471b309f78f8318e0f54d7d7fb91ce2c695c9f15dbbb85f4b721f8b4\",\n \"cni.projectcalico.org/podIP\": \"10.233.247.17/32\",\n \"cni.projectcalico.org/podIPs\": \"10.233.247.17/32\"\n },\n \"creationTimestamp\": \"2023-03-07T02:34:06Z\",\n \"labels\": {\n \"run\": \"e2e-test-httpd-pod\"\n },\n \"name\": \"e2e-test-httpd-pod\",\n \"namespace\": \"kubectl-1755\",\n \"resourceVersion\": \"37226\",\n \"uid\": \"0f176412-d810-40df-b675-95286a099cc8\"\n },\n \"spec\": {\n \"containers\": [\n {\n \"image\": \"registry.k8s.io/e2e-test-images/httpd:2.4.38-2\",\n \"imagePullPolicy\": \"IfNotPresent\",\n \"name\": \"e2e-test-httpd-pod\",\n \"resources\": {},\n 
\"terminationMessagePath\": \"/dev/termination-log\",\n \"terminationMessagePolicy\": \"File\",\n \"volumeMounts\": [\n {\n \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\",\n \"name\": \"kube-api-access-698h8\",\n \"readOnly\": true\n }\n ]\n }\n ],\n \"dnsPolicy\": \"ClusterFirst\",\n \"enableServiceLinks\": true,\n \"nodeName\": \"node-2\",\n \"preemptionPolicy\": \"PreemptLowerPriority\",\n \"priority\": 0,\n \"restartPolicy\": \"Always\",\n \"schedulerName\": \"default-scheduler\",\n \"securityContext\": {},\n \"serviceAccount\": \"default\",\n \"serviceAccountName\": \"default\",\n \"terminationGracePeriodSeconds\": 30,\n \"tolerations\": [\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/not-ready\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n },\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/unreachable\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n }\n ],\n \"volumes\": [\n {\n \"name\": \"kube-api-access-698h8\",\n \"projected\": {\n \"defaultMode\": 420,\n \"sources\": [\n {\n \"serviceAccountToken\": {\n \"expirationSeconds\": 3607,\n \"path\": \"token\"\n }\n },\n {\n \"configMap\": {\n \"items\": [\n {\n \"key\": \"ca.crt\",\n \"path\": \"ca.crt\"\n }\n ],\n \"name\": \"kube-root-ca.crt\"\n }\n },\n {\n \"downwardAPI\": {\n \"items\": [\n {\n \"fieldRef\": {\n \"apiVersion\": \"v1\",\n \"fieldPath\": \"metadata.namespace\"\n },\n \"path\": \"namespace\"\n }\n ]\n }\n }\n ]\n }\n }\n ]\n },\n \"status\": {\n \"conditions\": [\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2023-03-07T02:34:06Z\",\n \"status\": \"True\",\n \"type\": \"Initialized\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2023-03-07T02:34:08Z\",\n \"status\": \"True\",\n \"type\": \"Ready\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2023-03-07T02:34:08Z\",\n \"status\": \"True\",\n \"type\": \"ContainersReady\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2023-03-07T02:34:06Z\",\n \"status\": \"True\",\n \"type\": \"PodScheduled\"\n }\n ],\n \"containerStatuses\": [\n {\n \"containerID\": \"containerd://d472415443d58ad1b1beacec18c33aa7e8fe499ee9d176691f1db6da4d7328f7\",\n \"image\": \"registry.k8s.io/e2e-test-images/httpd:2.4.38-2\",\n \"imageID\": \"registry.k8s.io/e2e-test-images/httpd@sha256:1b9d1b2f36cb2dbee1960e82a9344aeb11bd4c4c03abf5e1853e0559c23855e3\",\n \"lastState\": {},\n \"name\": \"e2e-test-httpd-pod\",\n \"ready\": true,\n \"restartCount\": 0,\n \"started\": true,\n \"state\": {\n \"running\": {\n \"startedAt\": \"2023-03-07T02:34:07Z\"\n }\n }\n }\n ],\n \"hostIP\": \"192.168.1.102\",\n \"phase\": \"Running\",\n \"podIP\": \"10.233.247.17\",\n \"podIPs\": [\n {\n \"ip\": \"10.233.247.17\"\n }\n ],\n \"qosClass\": \"BestEffort\",\n \"startTime\": \"2023-03-07T02:34:06Z\"\n }\n}\n" +STEP: replace the image in the pod 03/07/23 02:34:11.578 +Mar 7 02:34:11.578: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1093879902 --namespace=kubectl-1755 replace -f -' +Mar 7 02:34:12.711: INFO: stderr: "" +Mar 7 02:34:12.711: INFO: stdout: "pod/e2e-test-httpd-pod replaced\n" +STEP: verifying the pod e2e-test-httpd-pod has the right image registry.k8s.io/e2e-test-images/busybox:1.29-2 03/07/23 02:34:12.711 +[AfterEach] Kubectl replace + test/e2e/kubectl/kubectl.go:1736 +Mar 7 02:34:12.718: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1093879902 --namespace=kubectl-1755 delete pods e2e-test-httpd-pod' +Mar 7 
02:34:14.091: INFO: stderr: "" +Mar 7 02:34:14.091: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n" +[AfterEach] [sig-cli] Kubectl client + test/e2e/framework/framework.go:187 +Mar 7 02:34:14.091: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "kubectl-1755" for this suite. 03/07/23 02:34:14.095 +{"msg":"PASSED [sig-cli] Kubectl client Kubectl replace should update a single-container pod's image [Conformance]","completed":43,"skipped":577,"failed":0} +------------------------------ +• [SLOW TEST] [7.887 seconds] +[sig-cli] Kubectl client +test/e2e/kubectl/framework.go:23 + Kubectl replace + test/e2e/kubectl/kubectl.go:1729 + should update a single-container pod's image [Conformance] + test/e2e/kubectl/kubectl.go:1745 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-cli] Kubectl client + test/e2e/framework/framework.go:186 + STEP: Creating a kubernetes client 03/07/23 02:34:06.212 + Mar 7 02:34:06.213: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 + STEP: Building a namespace api object, basename kubectl 03/07/23 02:34:06.214 + STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 02:34:06.228 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 02:34:06.23 + [BeforeEach] [sig-cli] Kubectl client + test/e2e/kubectl/kubectl.go:272 + [BeforeEach] Kubectl replace + test/e2e/kubectl/kubectl.go:1732 + [It] should update a single-container pod's image [Conformance] + test/e2e/kubectl/kubectl.go:1745 + STEP: running the image registry.k8s.io/e2e-test-images/httpd:2.4.38-2 03/07/23 02:34:06.232 + Mar 7 02:34:06.232: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1093879902 --namespace=kubectl-1755 run e2e-test-httpd-pod --image=registry.k8s.io/e2e-test-images/httpd:2.4.38-2 --pod-running-timeout=2m0s --labels=run=e2e-test-httpd-pod' + Mar 7 02:34:06.324: INFO: stderr: "" + Mar 7 02:34:06.324: INFO: stdout: "pod/e2e-test-httpd-pod created\n" + STEP: verifying the pod e2e-test-httpd-pod is running 03/07/23 02:34:06.324 + STEP: verifying the pod e2e-test-httpd-pod was created 03/07/23 02:34:11.376 + Mar 7 02:34:11.376: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1093879902 --namespace=kubectl-1755 get pod e2e-test-httpd-pod -o json' + Mar 7 02:34:11.578: INFO: stderr: "" + Mar 7 02:34:11.578: INFO: stdout: "{\n \"apiVersion\": \"v1\",\n \"kind\": \"Pod\",\n \"metadata\": {\n \"annotations\": {\n \"cni.projectcalico.org/containerID\": \"57a48a3d471b309f78f8318e0f54d7d7fb91ce2c695c9f15dbbb85f4b721f8b4\",\n \"cni.projectcalico.org/podIP\": \"10.233.247.17/32\",\n \"cni.projectcalico.org/podIPs\": \"10.233.247.17/32\"\n },\n \"creationTimestamp\": \"2023-03-07T02:34:06Z\",\n \"labels\": {\n \"run\": \"e2e-test-httpd-pod\"\n },\n \"name\": \"e2e-test-httpd-pod\",\n \"namespace\": \"kubectl-1755\",\n \"resourceVersion\": \"37226\",\n \"uid\": \"0f176412-d810-40df-b675-95286a099cc8\"\n },\n \"spec\": {\n \"containers\": [\n {\n \"image\": \"registry.k8s.io/e2e-test-images/httpd:2.4.38-2\",\n \"imagePullPolicy\": \"IfNotPresent\",\n \"name\": \"e2e-test-httpd-pod\",\n \"resources\": {},\n \"terminationMessagePath\": \"/dev/termination-log\",\n \"terminationMessagePolicy\": \"File\",\n \"volumeMounts\": [\n {\n \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\",\n \"name\": \"kube-api-access-698h8\",\n \"readOnly\": true\n }\n ]\n }\n ],\n \"dnsPolicy\": \"ClusterFirst\",\n \"enableServiceLinks\": true,\n \"nodeName\": \"node-2\",\n 
\"preemptionPolicy\": \"PreemptLowerPriority\",\n \"priority\": 0,\n \"restartPolicy\": \"Always\",\n \"schedulerName\": \"default-scheduler\",\n \"securityContext\": {},\n \"serviceAccount\": \"default\",\n \"serviceAccountName\": \"default\",\n \"terminationGracePeriodSeconds\": 30,\n \"tolerations\": [\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/not-ready\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n },\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/unreachable\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n }\n ],\n \"volumes\": [\n {\n \"name\": \"kube-api-access-698h8\",\n \"projected\": {\n \"defaultMode\": 420,\n \"sources\": [\n {\n \"serviceAccountToken\": {\n \"expirationSeconds\": 3607,\n \"path\": \"token\"\n }\n },\n {\n \"configMap\": {\n \"items\": [\n {\n \"key\": \"ca.crt\",\n \"path\": \"ca.crt\"\n }\n ],\n \"name\": \"kube-root-ca.crt\"\n }\n },\n {\n \"downwardAPI\": {\n \"items\": [\n {\n \"fieldRef\": {\n \"apiVersion\": \"v1\",\n \"fieldPath\": \"metadata.namespace\"\n },\n \"path\": \"namespace\"\n }\n ]\n }\n }\n ]\n }\n }\n ]\n },\n \"status\": {\n \"conditions\": [\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2023-03-07T02:34:06Z\",\n \"status\": \"True\",\n \"type\": \"Initialized\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2023-03-07T02:34:08Z\",\n \"status\": \"True\",\n \"type\": \"Ready\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2023-03-07T02:34:08Z\",\n \"status\": \"True\",\n \"type\": \"ContainersReady\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2023-03-07T02:34:06Z\",\n \"status\": \"True\",\n \"type\": \"PodScheduled\"\n }\n ],\n \"containerStatuses\": [\n {\n \"containerID\": \"containerd://d472415443d58ad1b1beacec18c33aa7e8fe499ee9d176691f1db6da4d7328f7\",\n \"image\": \"registry.k8s.io/e2e-test-images/httpd:2.4.38-2\",\n \"imageID\": \"registry.k8s.io/e2e-test-images/httpd@sha256:1b9d1b2f36cb2dbee1960e82a9344aeb11bd4c4c03abf5e1853e0559c23855e3\",\n \"lastState\": {},\n \"name\": \"e2e-test-httpd-pod\",\n \"ready\": true,\n \"restartCount\": 0,\n \"started\": true,\n \"state\": {\n \"running\": {\n \"startedAt\": \"2023-03-07T02:34:07Z\"\n }\n }\n }\n ],\n \"hostIP\": \"192.168.1.102\",\n \"phase\": \"Running\",\n \"podIP\": \"10.233.247.17\",\n \"podIPs\": [\n {\n \"ip\": \"10.233.247.17\"\n }\n ],\n \"qosClass\": \"BestEffort\",\n \"startTime\": \"2023-03-07T02:34:06Z\"\n }\n}\n" + STEP: replace the image in the pod 03/07/23 02:34:11.578 + Mar 7 02:34:11.578: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1093879902 --namespace=kubectl-1755 replace -f -' + Mar 7 02:34:12.711: INFO: stderr: "" + Mar 7 02:34:12.711: INFO: stdout: "pod/e2e-test-httpd-pod replaced\n" + STEP: verifying the pod e2e-test-httpd-pod has the right image registry.k8s.io/e2e-test-images/busybox:1.29-2 03/07/23 02:34:12.711 + [AfterEach] Kubectl replace + test/e2e/kubectl/kubectl.go:1736 + Mar 7 02:34:12.718: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1093879902 --namespace=kubectl-1755 delete pods e2e-test-httpd-pod' + Mar 7 02:34:14.091: INFO: stderr: "" + Mar 7 02:34:14.091: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n" + [AfterEach] [sig-cli] Kubectl client + test/e2e/framework/framework.go:187 + Mar 7 02:34:14.091: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + STEP: Destroying namespace "kubectl-1755" for this suite. 
03/07/23 02:34:14.095 + << End Captured GinkgoWriter Output +------------------------------ +SS +------------------------------ +[sig-api-machinery] Garbage collector + should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] + test/e2e/apimachinery/garbage_collector.go:550 +[BeforeEach] [sig-api-machinery] Garbage collector + test/e2e/framework/framework.go:186 +STEP: Creating a kubernetes client 03/07/23 02:34:14.1 +Mar 7 02:34:14.100: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 +STEP: Building a namespace api object, basename gc 03/07/23 02:34:14.101 +STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 02:34:14.119 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 02:34:14.121 +[It] should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] + test/e2e/apimachinery/garbage_collector.go:550 +STEP: create the deployment 03/07/23 02:34:14.123 +STEP: Wait for the Deployment to create new ReplicaSet 03/07/23 02:34:14.126 +STEP: delete the deployment 03/07/23 02:34:14.638 +STEP: wait for deployment deletion to see if the garbage collector mistakenly deletes the rs 03/07/23 02:34:14.656 +STEP: Gathering metrics 03/07/23 02:34:15.172 +Mar 7 02:34:15.186: INFO: Waiting up to 5m0s for pod "kube-controller-manager-node-2" in namespace "kube-system" to be "running and ready" +Mar 7 02:34:15.188: INFO: Pod "kube-controller-manager-node-2": Phase="Running", Reason="", readiness=true. Elapsed: 1.962087ms +Mar 7 02:34:15.188: INFO: The phase of Pod kube-controller-manager-node-2 is Running (Ready = true) +Mar 7 02:34:15.188: INFO: Pod "kube-controller-manager-node-2" satisfied condition "running and ready" +E0307 02:34:16.232570 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 02:34:16.232570 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 02:34:17.265139 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 02:34:17.265139 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to 
connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 02:34:19.308151 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 02:34:19.308151 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 02:34:20.327964 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 02:34:20.327964 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 02:34:21.349648 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 02:34:21.349648 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 02:34:23.397231 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to 
localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 02:34:23.397231 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 02:34:24.425226 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 02:34:24.425226 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 02:34:25.448309 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 02:34:25.448309 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 02:34:27.496212 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 02:34:27.496212 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 
inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 02:34:30.561789 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 02:34:30.561789 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 02:34:31.582980 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 02:34:31.582980 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 02:34:32.629739 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 02:34:32.629739 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 02:34:33.655386 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace 
\"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 02:34:33.655386 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 02:34:34.673693 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 02:34:34.673693 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 02:34:35.702590 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 02:34:35.702590 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 02:34:36.729397 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 02:34:36.729397 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace 
\"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 02:34:37.471147 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 02:34:37.471147 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 02:34:38.492113 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 02:34:38.492113 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 02:34:39.514071 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 02:34:39.514071 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 02:34:41.559275 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace 
\"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 02:34:41.559275 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 02:34:42.589492 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 02:34:42.589492 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 02:34:43.610399 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 02:34:43.610399 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 02:34:44.633011 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 02:34:44.633011 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace 
\"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 02:34:45.657099 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 02:34:45.657099 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 02:34:48.470498 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 02:34:48.470498 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 02:34:50.514425 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 02:34:50.514425 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 02:34:51.535032 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace 
\"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 02:34:51.535032 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 02:34:52.559889 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 02:34:52.559889 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 02:34:54.601318 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 02:34:54.601318 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 02:34:55.624532 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 02:34:55.624532 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace 
\"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 02:34:56.644711 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 02:34:56.644711 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 02:34:57.668862 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 02:34:57.668862 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 02:34:58.689205 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 02:34:58.689205 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 02:34:59.471553 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace 
\"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 02:34:59.471553 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 02:35:00.491255 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 02:35:00.491255 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 02:35:02.539218 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 02:35:02.539218 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 02:35:04.589457 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 02:35:04.589457 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace 
\"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 02:35:05.616793 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 02:35:05.616793 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 02:35:06.638456 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 02:35:06.638456 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 02:35:08.680363 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 02:35:08.680363 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 02:35:11.489699 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace 
\"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 02:35:11.489699 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 02:35:12.510702 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 02:35:12.510702 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 02:35:13.531728 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 02:35:13.531728 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 02:35:14.551875 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 02:35:14.551875 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace 
\"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 02:35:17.617052 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 02:35:17.617052 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 02:35:19.665336 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 02:35:19.665336 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 02:35:20.687385 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 02:35:20.687385 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 02:35:21.469553 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace 
\"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 02:35:21.469553 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 02:35:22.490226 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 02:35:22.490226 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 02:35:23.511120 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 02:35:23.511120 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 02:35:24.531920 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 02:35:24.531920 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace 
\"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 02:35:26.574705 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 02:35:26.574705 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 02:35:27.599782 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 02:35:27.599782 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 02:35:28.619687 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 02:35:28.619687 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 02:35:29.639925 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace 
\"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 02:35:29.639925 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 02:35:30.660735 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 02:35:30.660735 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 02:35:31.702532 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 02:35:31.702532 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 02:35:32.729339 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 02:35:32.729339 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace 
\"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 02:35:33.757009 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 02:35:33.757009 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 02:35:36.824106 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 02:35:36.824106 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 02:35:37.842910 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 02:35:37.842910 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 02:35:40.908550 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace 
\"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 02:35:40.908550 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 02:35:41.931659 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 02:35:41.931659 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +Mar 7 02:35:41.931: INFO: MetricsGrabber failed grab metrics. Skipping metrics gathering. +[AfterEach] [sig-api-machinery] Garbage collector + test/e2e/framework/framework.go:187 +Mar 7 02:35:41.931: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "gc-1612" for this suite. 
03/07/23 02:35:41.936 +{"msg":"PASSED [sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]","completed":44,"skipped":579,"failed":0} +------------------------------ +• [SLOW TEST] [87.842 seconds] +[sig-api-machinery] Garbage collector +test/e2e/apimachinery/framework.go:23 + should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] + test/e2e/apimachinery/garbage_collector.go:550 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-api-machinery] Garbage collector + test/e2e/framework/framework.go:186 + STEP: Creating a kubernetes client 03/07/23 02:34:14.1 + Mar 7 02:34:14.100: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 + STEP: Building a namespace api object, basename gc 03/07/23 02:34:14.101 + STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 02:34:14.119 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 02:34:14.121 + [It] should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] + test/e2e/apimachinery/garbage_collector.go:550 + STEP: create the deployment 03/07/23 02:34:14.123 + STEP: Wait for the Deployment to create new ReplicaSet 03/07/23 02:34:14.126 + STEP: delete the deployment 03/07/23 02:34:14.638 + STEP: wait for deployment deletion to see if the garbage collector mistakenly deletes the rs 03/07/23 02:34:14.656 + STEP: Gathering metrics 03/07/23 02:34:15.172 + Mar 7 02:34:15.186: INFO: Waiting up to 5m0s for pod "kube-controller-manager-node-2" in namespace "kube-system" to be "running and ready" + Mar 7 02:34:15.188: INFO: Pod "kube-controller-manager-node-2": Phase="Running", Reason="", readiness=true. 
Elapsed: 1.962087ms + Mar 7 02:34:15.188: INFO: The phase of Pod kube-controller-manager-node-2 is Running (Ready = true) + Mar 7 02:34:15.188: INFO: Pod "kube-controller-manager-node-2" satisfied condition "running and ready" + E0307 02:34:16.232570 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " + E0307 02:34:17.265139 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " + E0307 02:34:19.308151 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " + E0307 02:34:20.327964 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " + E0307 02:34:21.349648 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " + E0307 02:34:23.397231 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " + E0307 02:34:24.425226 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to 
localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " + E0307 02:34:25.448309 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " + E0307 02:34:27.496212 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " + E0307 02:34:30.561789 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " + E0307 02:34:31.582980 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " + E0307 02:34:32.629739 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " + E0307 02:34:33.655386 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " + E0307 02:34:34.673693 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to 
localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " + E0307 02:34:35.702590 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " + E0307 02:34:36.729397 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " + E0307 02:34:37.471147 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " + E0307 02:34:38.492113 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " + E0307 02:34:39.514071 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " + E0307 02:34:41.559275 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " + E0307 02:34:42.589492 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to 
localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " + E0307 02:34:43.610399 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " + E0307 02:34:44.633011 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " + E0307 02:34:45.657099 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " + E0307 02:34:48.470498 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " + E0307 02:34:50.514425 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " + E0307 02:34:51.535032 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " + E0307 02:34:52.559889 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to 
localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " + E0307 02:34:54.601318 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " + E0307 02:34:55.624532 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " + E0307 02:34:56.644711 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " + E0307 02:34:57.668862 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " + E0307 02:34:58.689205 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " + E0307 02:34:59.471553 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " + E0307 02:35:00.491255 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to 
localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " + E0307 02:35:02.539218 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " + E0307 02:35:04.589457 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " + E0307 02:35:05.616793 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " + E0307 02:35:06.638456 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " + E0307 02:35:08.680363 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " + E0307 02:35:11.489699 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " + E0307 02:35:12.510702 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to 
localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " + E0307 02:35:13.531728 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " + E0307 02:35:14.551875 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " + E0307 02:35:17.617052 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " + E0307 02:35:19.665336 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " + E0307 02:35:20.687385 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " + E0307 02:35:21.469553 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " + E0307 02:35:22.490226 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to 
localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " + E0307 02:35:23.511120 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " + E0307 02:35:24.531920 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " + E0307 02:35:26.574705 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " + E0307 02:35:27.599782 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " + E0307 02:35:28.619687 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " + E0307 02:35:29.639925 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " + E0307 02:35:30.660735 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to 
localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " + E0307 02:35:31.702532 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " + E0307 02:35:32.729339 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " + E0307 02:35:33.757009 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " + E0307 02:35:36.824106 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " + E0307 02:35:37.842910 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " + E0307 02:35:40.908550 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " + E0307 02:35:41.931659 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to 
localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " + Mar 7 02:35:41.931: INFO: MetricsGrabber failed grab metrics. Skipping metrics gathering. + [AfterEach] [sig-api-machinery] Garbage collector + test/e2e/framework/framework.go:187 + Mar 7 02:35:41.931: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + STEP: Destroying namespace "gc-1612" for this suite. 03/07/23 02:35:41.936 + << End Captured GinkgoWriter Output +------------------------------ +S +------------------------------ +[sig-instrumentation] Events + should delete a collection of events [Conformance] + test/e2e/instrumentation/core_events.go:175 +[BeforeEach] [sig-instrumentation] Events + test/e2e/framework/framework.go:186 +STEP: Creating a kubernetes client 03/07/23 02:35:41.943 +Mar 7 02:35:41.943: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 +STEP: Building a namespace api object, basename events 03/07/23 02:35:41.944 +STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 02:35:41.956 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 02:35:41.957 +[It] should delete a collection of events [Conformance] + test/e2e/instrumentation/core_events.go:175 +STEP: Create set of events 03/07/23 02:35:41.959 +Mar 7 02:35:41.965: INFO: created test-event-1 +Mar 7 02:35:41.967: INFO: created test-event-2 +Mar 7 02:35:41.970: INFO: created test-event-3 +STEP: get a list of Events with a label in the current namespace 03/07/23 02:35:41.97 +STEP: delete collection of events 03/07/23 02:35:41.972 +Mar 7 02:35:41.972: INFO: requesting DeleteCollection of events +STEP: check that the list of events matches the requested quantity 03/07/23 02:35:41.985 +Mar 7 02:35:41.985: INFO: requesting list of events to confirm quantity +[AfterEach] [sig-instrumentation] Events + test/e2e/framework/framework.go:187 +Mar 7 02:35:41.987: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "events-2907" for this suite. 
03/07/23 02:35:41.99 +{"msg":"PASSED [sig-instrumentation] Events should delete a collection of events [Conformance]","completed":45,"skipped":580,"failed":0} +------------------------------ +• [0.052 seconds] +[sig-instrumentation] Events +test/e2e/instrumentation/common/framework.go:23 + should delete a collection of events [Conformance] + test/e2e/instrumentation/core_events.go:175 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-instrumentation] Events + test/e2e/framework/framework.go:186 + STEP: Creating a kubernetes client 03/07/23 02:35:41.943 + Mar 7 02:35:41.943: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 + STEP: Building a namespace api object, basename events 03/07/23 02:35:41.944 + STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 02:35:41.956 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 02:35:41.957 + [It] should delete a collection of events [Conformance] + test/e2e/instrumentation/core_events.go:175 + STEP: Create set of events 03/07/23 02:35:41.959 + Mar 7 02:35:41.965: INFO: created test-event-1 + Mar 7 02:35:41.967: INFO: created test-event-2 + Mar 7 02:35:41.970: INFO: created test-event-3 + STEP: get a list of Events with a label in the current namespace 03/07/23 02:35:41.97 + STEP: delete collection of events 03/07/23 02:35:41.972 + Mar 7 02:35:41.972: INFO: requesting DeleteCollection of events + STEP: check that the list of events matches the requested quantity 03/07/23 02:35:41.985 + Mar 7 02:35:41.985: INFO: requesting list of events to confirm quantity + [AfterEach] [sig-instrumentation] Events + test/e2e/framework/framework.go:187 + Mar 7 02:35:41.987: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + STEP: Destroying namespace "events-2907" for this suite. 03/07/23 02:35:41.99 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-node] Containers + should use the image defaults if command and args are blank [NodeConformance] [Conformance] + test/e2e/common/node/containers.go:38 +[BeforeEach] [sig-node] Containers + test/e2e/framework/framework.go:186 +STEP: Creating a kubernetes client 03/07/23 02:35:41.997 +Mar 7 02:35:41.997: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 +STEP: Building a namespace api object, basename containers 03/07/23 02:35:41.998 +STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 02:35:42.013 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 02:35:42.016 +[It] should use the image defaults if command and args are blank [NodeConformance] [Conformance] + test/e2e/common/node/containers.go:38 +Mar 7 02:35:42.023: INFO: Waiting up to 5m0s for pod "client-containers-dd000090-ad80-4945-b52a-fccfbbb9a1f1" in namespace "containers-2826" to be "running" +Mar 7 02:35:42.026: INFO: Pod "client-containers-dd000090-ad80-4945-b52a-fccfbbb9a1f1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.731165ms +Mar 7 02:35:44.030: INFO: Pod "client-containers-dd000090-ad80-4945-b52a-fccfbbb9a1f1": Phase="Running", Reason="", readiness=true. Elapsed: 2.006574271s +Mar 7 02:35:44.030: INFO: Pod "client-containers-dd000090-ad80-4945-b52a-fccfbbb9a1f1" satisfied condition "running" +[AfterEach] [sig-node] Containers + test/e2e/framework/framework.go:187 +Mar 7 02:35:44.035: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "containers-2826" for this suite. 
03/07/23 02:35:44.039 +{"msg":"PASSED [sig-node] Containers should use the image defaults if command and args are blank [NodeConformance] [Conformance]","completed":46,"skipped":630,"failed":0} +------------------------------ +• [2.048 seconds] +[sig-node] Containers +test/e2e/common/node/framework.go:23 + should use the image defaults if command and args are blank [NodeConformance] [Conformance] + test/e2e/common/node/containers.go:38 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-node] Containers + test/e2e/framework/framework.go:186 + STEP: Creating a kubernetes client 03/07/23 02:35:41.997 + Mar 7 02:35:41.997: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 + STEP: Building a namespace api object, basename containers 03/07/23 02:35:41.998 + STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 02:35:42.013 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 02:35:42.016 + [It] should use the image defaults if command and args are blank [NodeConformance] [Conformance] + test/e2e/common/node/containers.go:38 + Mar 7 02:35:42.023: INFO: Waiting up to 5m0s for pod "client-containers-dd000090-ad80-4945-b52a-fccfbbb9a1f1" in namespace "containers-2826" to be "running" + Mar 7 02:35:42.026: INFO: Pod "client-containers-dd000090-ad80-4945-b52a-fccfbbb9a1f1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.731165ms + Mar 7 02:35:44.030: INFO: Pod "client-containers-dd000090-ad80-4945-b52a-fccfbbb9a1f1": Phase="Running", Reason="", readiness=true. Elapsed: 2.006574271s + Mar 7 02:35:44.030: INFO: Pod "client-containers-dd000090-ad80-4945-b52a-fccfbbb9a1f1" satisfied condition "running" + [AfterEach] [sig-node] Containers + test/e2e/framework/framework.go:187 + Mar 7 02:35:44.035: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + STEP: Destroying namespace "containers-2826" for this suite. 03/07/23 02:35:44.039 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-storage] Projected configMap + should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/common/storage/projected_configmap.go:98 +[BeforeEach] [sig-storage] Projected configMap + test/e2e/framework/framework.go:186 +STEP: Creating a kubernetes client 03/07/23 02:35:44.046 +Mar 7 02:35:44.046: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 +STEP: Building a namespace api object, basename projected 03/07/23 02:35:44.046 +STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 02:35:44.059 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 02:35:44.06 +[It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/common/storage/projected_configmap.go:98 +STEP: Creating configMap with name projected-configmap-test-volume-map-9117fe80-08ea-4ce4-9d68-285a6f88c70c 03/07/23 02:35:44.065 +STEP: Creating a pod to test consume configMaps 03/07/23 02:35:44.07 +Mar 7 02:35:44.079: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-ea95d2a0-004c-48dd-9e21-d68bc2d2ad39" in namespace "projected-6017" to be "Succeeded or Failed" +Mar 7 02:35:44.082: INFO: Pod "pod-projected-configmaps-ea95d2a0-004c-48dd-9e21-d68bc2d2ad39": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.775707ms +Mar 7 02:35:46.086: INFO: Pod "pod-projected-configmaps-ea95d2a0-004c-48dd-9e21-d68bc2d2ad39": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006612311s +Mar 7 02:35:48.085: INFO: Pod "pod-projected-configmaps-ea95d2a0-004c-48dd-9e21-d68bc2d2ad39": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.005770202s +STEP: Saw pod success 03/07/23 02:35:48.085 +Mar 7 02:35:48.085: INFO: Pod "pod-projected-configmaps-ea95d2a0-004c-48dd-9e21-d68bc2d2ad39" satisfied condition "Succeeded or Failed" +Mar 7 02:35:48.089: INFO: Trying to get logs from node node-2 pod pod-projected-configmaps-ea95d2a0-004c-48dd-9e21-d68bc2d2ad39 container agnhost-container: +STEP: delete the pod 03/07/23 02:35:48.094 +Mar 7 02:35:48.103: INFO: Waiting for pod pod-projected-configmaps-ea95d2a0-004c-48dd-9e21-d68bc2d2ad39 to disappear +Mar 7 02:35:48.106: INFO: Pod pod-projected-configmaps-ea95d2a0-004c-48dd-9e21-d68bc2d2ad39 no longer exists +[AfterEach] [sig-storage] Projected configMap + test/e2e/framework/framework.go:187 +Mar 7 02:35:48.106: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "projected-6017" for this suite. 03/07/23 02:35:48.109 +{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","completed":47,"skipped":652,"failed":0} +------------------------------ +• [4.068 seconds] +[sig-storage] Projected configMap +test/e2e/common/storage/framework.go:23 + should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/common/storage/projected_configmap.go:98 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-storage] Projected configMap + test/e2e/framework/framework.go:186 + STEP: Creating a kubernetes client 03/07/23 02:35:44.046 + Mar 7 02:35:44.046: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 + STEP: Building a namespace api object, basename projected 03/07/23 02:35:44.046 + STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 02:35:44.059 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 02:35:44.06 + [It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/common/storage/projected_configmap.go:98 + STEP: Creating configMap with name projected-configmap-test-volume-map-9117fe80-08ea-4ce4-9d68-285a6f88c70c 03/07/23 02:35:44.065 + STEP: Creating a pod to test consume configMaps 03/07/23 02:35:44.07 + Mar 7 02:35:44.079: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-ea95d2a0-004c-48dd-9e21-d68bc2d2ad39" in namespace "projected-6017" to be "Succeeded or Failed" + Mar 7 02:35:44.082: INFO: Pod "pod-projected-configmaps-ea95d2a0-004c-48dd-9e21-d68bc2d2ad39": Phase="Pending", Reason="", readiness=false. Elapsed: 2.775707ms + Mar 7 02:35:46.086: INFO: Pod "pod-projected-configmaps-ea95d2a0-004c-48dd-9e21-d68bc2d2ad39": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006612311s + Mar 7 02:35:48.085: INFO: Pod "pod-projected-configmaps-ea95d2a0-004c-48dd-9e21-d68bc2d2ad39": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.005770202s + STEP: Saw pod success 03/07/23 02:35:48.085 + Mar 7 02:35:48.085: INFO: Pod "pod-projected-configmaps-ea95d2a0-004c-48dd-9e21-d68bc2d2ad39" satisfied condition "Succeeded or Failed" + Mar 7 02:35:48.089: INFO: Trying to get logs from node node-2 pod pod-projected-configmaps-ea95d2a0-004c-48dd-9e21-d68bc2d2ad39 container agnhost-container: + STEP: delete the pod 03/07/23 02:35:48.094 + Mar 7 02:35:48.103: INFO: Waiting for pod pod-projected-configmaps-ea95d2a0-004c-48dd-9e21-d68bc2d2ad39 to disappear + Mar 7 02:35:48.106: INFO: Pod pod-projected-configmaps-ea95d2a0-004c-48dd-9e21-d68bc2d2ad39 no longer exists + [AfterEach] [sig-storage] Projected configMap + test/e2e/framework/framework.go:187 + Mar 7 02:35:48.106: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + STEP: Destroying namespace "projected-6017" for this suite. 03/07/23 02:35:48.109 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-api-machinery] Garbage collector + should not be blocked by dependency circle [Conformance] + test/e2e/apimachinery/garbage_collector.go:849 +[BeforeEach] [sig-api-machinery] Garbage collector + test/e2e/framework/framework.go:186 +STEP: Creating a kubernetes client 03/07/23 02:35:48.115 +Mar 7 02:35:48.115: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 +STEP: Building a namespace api object, basename gc 03/07/23 02:35:48.115 +STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 02:35:48.126 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 02:35:48.128 +[It] should not be blocked by dependency circle [Conformance] + test/e2e/apimachinery/garbage_collector.go:849 +Mar 7 02:35:48.155: INFO: pod1.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod3", UID:"7d847b33-913c-4485-a1f5-f0a6cb913f03", Controller:(*bool)(0xc0044d415e), BlockOwnerDeletion:(*bool)(0xc0044d415f)}} +Mar 7 02:35:48.160: INFO: pod2.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod1", UID:"6e6b47f4-c335-441d-8854-d41b591b8d62", Controller:(*bool)(0xc0035c5b9e), BlockOwnerDeletion:(*bool)(0xc0035c5b9f)}} +Mar 7 02:35:48.167: INFO: pod3.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod2", UID:"22cb93db-f10f-4048-83a4-d83b1ec97c16", Controller:(*bool)(0xc0035c5da6), BlockOwnerDeletion:(*bool)(0xc0035c5da7)}} +[AfterEach] [sig-api-machinery] Garbage collector + test/e2e/framework/framework.go:187 +Mar 7 02:35:53.179: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "gc-3895" for this suite. 
03/07/23 02:35:53.182 +{"msg":"PASSED [sig-api-machinery] Garbage collector should not be blocked by dependency circle [Conformance]","completed":48,"skipped":671,"failed":0} +------------------------------ +• [SLOW TEST] [5.073 seconds] +[sig-api-machinery] Garbage collector +test/e2e/apimachinery/framework.go:23 + should not be blocked by dependency circle [Conformance] + test/e2e/apimachinery/garbage_collector.go:849 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-api-machinery] Garbage collector + test/e2e/framework/framework.go:186 + STEP: Creating a kubernetes client 03/07/23 02:35:48.115 + Mar 7 02:35:48.115: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 + STEP: Building a namespace api object, basename gc 03/07/23 02:35:48.115 + STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 02:35:48.126 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 02:35:48.128 + [It] should not be blocked by dependency circle [Conformance] + test/e2e/apimachinery/garbage_collector.go:849 + Mar 7 02:35:48.155: INFO: pod1.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod3", UID:"7d847b33-913c-4485-a1f5-f0a6cb913f03", Controller:(*bool)(0xc0044d415e), BlockOwnerDeletion:(*bool)(0xc0044d415f)}} + Mar 7 02:35:48.160: INFO: pod2.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod1", UID:"6e6b47f4-c335-441d-8854-d41b591b8d62", Controller:(*bool)(0xc0035c5b9e), BlockOwnerDeletion:(*bool)(0xc0035c5b9f)}} + Mar 7 02:35:48.167: INFO: pod3.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod2", UID:"22cb93db-f10f-4048-83a4-d83b1ec97c16", Controller:(*bool)(0xc0035c5da6), BlockOwnerDeletion:(*bool)(0xc0035c5da7)}} + [AfterEach] [sig-api-machinery] Garbage collector + test/e2e/framework/framework.go:187 + Mar 7 02:35:53.179: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + STEP: Destroying namespace "gc-3895" for this suite. 03/07/23 02:35:53.182 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-api-machinery] server version + should find the server version [Conformance] + test/e2e/apimachinery/server_version.go:39 +[BeforeEach] [sig-api-machinery] server version + test/e2e/framework/framework.go:186 +STEP: Creating a kubernetes client 03/07/23 02:35:53.19 +Mar 7 02:35:53.190: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 +STEP: Building a namespace api object, basename server-version 03/07/23 02:35:53.191 +STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 02:35:53.204 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 02:35:53.206 +[It] should find the server version [Conformance] + test/e2e/apimachinery/server_version.go:39 +STEP: Request ServerVersion 03/07/23 02:35:53.208 +STEP: Confirm major version 03/07/23 02:35:53.208 +Mar 7 02:35:53.208: INFO: Major version: 1 +STEP: Confirm minor version 03/07/23 02:35:53.208 +Mar 7 02:35:53.208: INFO: cleanMinorVersion: 25 +Mar 7 02:35:53.208: INFO: Minor version: 25 +[AfterEach] [sig-api-machinery] server version + test/e2e/framework/framework.go:187 +Mar 7 02:35:53.208: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "server-version-5836" for this suite. 
03/07/23 02:35:53.211 +{"msg":"PASSED [sig-api-machinery] server version should find the server version [Conformance]","completed":49,"skipped":700,"failed":0} +------------------------------ +• [0.026 seconds] +[sig-api-machinery] server version +test/e2e/apimachinery/framework.go:23 + should find the server version [Conformance] + test/e2e/apimachinery/server_version.go:39 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-api-machinery] server version + test/e2e/framework/framework.go:186 + STEP: Creating a kubernetes client 03/07/23 02:35:53.19 + Mar 7 02:35:53.190: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 + STEP: Building a namespace api object, basename server-version 03/07/23 02:35:53.191 + STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 02:35:53.204 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 02:35:53.206 + [It] should find the server version [Conformance] + test/e2e/apimachinery/server_version.go:39 + STEP: Request ServerVersion 03/07/23 02:35:53.208 + STEP: Confirm major version 03/07/23 02:35:53.208 + Mar 7 02:35:53.208: INFO: Major version: 1 + STEP: Confirm minor version 03/07/23 02:35:53.208 + Mar 7 02:35:53.208: INFO: cleanMinorVersion: 25 + Mar 7 02:35:53.208: INFO: Minor version: 25 + [AfterEach] [sig-api-machinery] server version + test/e2e/framework/framework.go:187 + Mar 7 02:35:53.208: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + STEP: Destroying namespace "server-version-5836" for this suite. 03/07/23 02:35:53.211 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-apps] CronJob + should not schedule new jobs when ForbidConcurrent [Slow] [Conformance] + test/e2e/apps/cronjob.go:124 +[BeforeEach] [sig-apps] CronJob + test/e2e/framework/framework.go:186 +STEP: Creating a kubernetes client 03/07/23 02:35:53.216 +Mar 7 02:35:53.216: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 +STEP: Building a namespace api object, basename cronjob 03/07/23 02:35:53.217 +STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 02:35:53.227 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 02:35:53.229 +[It] should not schedule new jobs when ForbidConcurrent [Slow] [Conformance] + test/e2e/apps/cronjob.go:124 +STEP: Creating a ForbidConcurrent cronjob 03/07/23 02:35:53.231 +STEP: Ensuring a job is scheduled 03/07/23 02:35:53.235 +STEP: Ensuring exactly one is scheduled 03/07/23 02:36:01.239 +STEP: Ensuring exactly one running job exists by listing jobs explicitly 03/07/23 02:36:01.241 +STEP: Ensuring no more jobs are scheduled 03/07/23 02:36:01.244 +STEP: Removing cronjob 03/07/23 02:41:01.25 +[AfterEach] [sig-apps] CronJob + test/e2e/framework/framework.go:187 +Mar 7 02:41:01.254: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "cronjob-8579" for this suite. 
03/07/23 02:41:01.258 +{"msg":"PASSED [sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","completed":50,"skipped":722,"failed":0} +------------------------------ +• [SLOW TEST] [308.050 seconds] +[sig-apps] CronJob +test/e2e/apps/framework.go:23 + should not schedule new jobs when ForbidConcurrent [Slow] [Conformance] + test/e2e/apps/cronjob.go:124 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-apps] CronJob + test/e2e/framework/framework.go:186 + STEP: Creating a kubernetes client 03/07/23 02:35:53.216 + Mar 7 02:35:53.216: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 + STEP: Building a namespace api object, basename cronjob 03/07/23 02:35:53.217 + STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 02:35:53.227 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 02:35:53.229 + [It] should not schedule new jobs when ForbidConcurrent [Slow] [Conformance] + test/e2e/apps/cronjob.go:124 + STEP: Creating a ForbidConcurrent cronjob 03/07/23 02:35:53.231 + STEP: Ensuring a job is scheduled 03/07/23 02:35:53.235 + STEP: Ensuring exactly one is scheduled 03/07/23 02:36:01.239 + STEP: Ensuring exactly one running job exists by listing jobs explicitly 03/07/23 02:36:01.241 + STEP: Ensuring no more jobs are scheduled 03/07/23 02:36:01.244 + STEP: Removing cronjob 03/07/23 02:41:01.25 + [AfterEach] [sig-apps] CronJob + test/e2e/framework/framework.go:187 + Mar 7 02:41:01.254: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + STEP: Destroying namespace "cronjob-8579" for this suite. 03/07/23 02:41:01.258 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSS +------------------------------ +[sig-storage] Downward API volume + should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] + test/e2e/common/storage/downwardapi_volume.go:248 +[BeforeEach] [sig-storage] Downward API volume + test/e2e/framework/framework.go:186 +STEP: Creating a kubernetes client 03/07/23 02:41:01.267 +Mar 7 02:41:01.267: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 +STEP: Building a namespace api object, basename downward-api 03/07/23 02:41:01.268 +STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 02:41:01.279 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 02:41:01.286 +[BeforeEach] [sig-storage] Downward API volume + test/e2e/common/storage/downwardapi_volume.go:43 +[It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] + test/e2e/common/storage/downwardapi_volume.go:248 +STEP: Creating a pod to test downward API volume plugin 03/07/23 02:41:01.288 +Mar 7 02:41:01.297: INFO: Waiting up to 5m0s for pod "downwardapi-volume-e941b0f4-ed3e-4b45-9475-56eefbe57621" in namespace "downward-api-9810" to be "Succeeded or Failed" +Mar 7 02:41:01.300: INFO: Pod "downwardapi-volume-e941b0f4-ed3e-4b45-9475-56eefbe57621": Phase="Pending", Reason="", readiness=false. Elapsed: 2.825545ms +Mar 7 02:41:03.304: INFO: Pod "downwardapi-volume-e941b0f4-ed3e-4b45-9475-56eefbe57621": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006524044s +Mar 7 02:41:05.304: INFO: Pod "downwardapi-volume-e941b0f4-ed3e-4b45-9475-56eefbe57621": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.006437823s +STEP: Saw pod success 03/07/23 02:41:05.304 +Mar 7 02:41:05.304: INFO: Pod "downwardapi-volume-e941b0f4-ed3e-4b45-9475-56eefbe57621" satisfied condition "Succeeded or Failed" +Mar 7 02:41:05.307: INFO: Trying to get logs from node node-2 pod downwardapi-volume-e941b0f4-ed3e-4b45-9475-56eefbe57621 container client-container: +STEP: delete the pod 03/07/23 02:41:05.32 +Mar 7 02:41:05.330: INFO: Waiting for pod downwardapi-volume-e941b0f4-ed3e-4b45-9475-56eefbe57621 to disappear +Mar 7 02:41:05.332: INFO: Pod downwardapi-volume-e941b0f4-ed3e-4b45-9475-56eefbe57621 no longer exists +[AfterEach] [sig-storage] Downward API volume + test/e2e/framework/framework.go:187 +Mar 7 02:41:05.332: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "downward-api-9810" for this suite. 03/07/23 02:41:05.336 +{"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","completed":51,"skipped":729,"failed":0} +------------------------------ +• [4.074 seconds] +[sig-storage] Downward API volume +test/e2e/common/storage/framework.go:23 + should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] + test/e2e/common/storage/downwardapi_volume.go:248 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-storage] Downward API volume + test/e2e/framework/framework.go:186 + STEP: Creating a kubernetes client 03/07/23 02:41:01.267 + Mar 7 02:41:01.267: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 + STEP: Building a namespace api object, basename downward-api 03/07/23 02:41:01.268 + STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 02:41:01.279 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 02:41:01.286 + [BeforeEach] [sig-storage] Downward API volume + test/e2e/common/storage/downwardapi_volume.go:43 + [It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] + test/e2e/common/storage/downwardapi_volume.go:248 + STEP: Creating a pod to test downward API volume plugin 03/07/23 02:41:01.288 + Mar 7 02:41:01.297: INFO: Waiting up to 5m0s for pod "downwardapi-volume-e941b0f4-ed3e-4b45-9475-56eefbe57621" in namespace "downward-api-9810" to be "Succeeded or Failed" + Mar 7 02:41:01.300: INFO: Pod "downwardapi-volume-e941b0f4-ed3e-4b45-9475-56eefbe57621": Phase="Pending", Reason="", readiness=false. Elapsed: 2.825545ms + Mar 7 02:41:03.304: INFO: Pod "downwardapi-volume-e941b0f4-ed3e-4b45-9475-56eefbe57621": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006524044s + Mar 7 02:41:05.304: INFO: Pod "downwardapi-volume-e941b0f4-ed3e-4b45-9475-56eefbe57621": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.006437823s + STEP: Saw pod success 03/07/23 02:41:05.304 + Mar 7 02:41:05.304: INFO: Pod "downwardapi-volume-e941b0f4-ed3e-4b45-9475-56eefbe57621" satisfied condition "Succeeded or Failed" + Mar 7 02:41:05.307: INFO: Trying to get logs from node node-2 pod downwardapi-volume-e941b0f4-ed3e-4b45-9475-56eefbe57621 container client-container: + STEP: delete the pod 03/07/23 02:41:05.32 + Mar 7 02:41:05.330: INFO: Waiting for pod downwardapi-volume-e941b0f4-ed3e-4b45-9475-56eefbe57621 to disappear + Mar 7 02:41:05.332: INFO: Pod downwardapi-volume-e941b0f4-ed3e-4b45-9475-56eefbe57621 no longer exists + [AfterEach] [sig-storage] Downward API volume + test/e2e/framework/framework.go:187 + Mar 7 02:41:05.332: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + STEP: Destroying namespace "downward-api-9810" for this suite. 03/07/23 02:41:05.336 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSS +------------------------------ +[sig-node] Container Runtime blackbox test on terminated container + should report termination message from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] + test/e2e/common/node/runtime.go:247 +[BeforeEach] [sig-node] Container Runtime + test/e2e/framework/framework.go:186 +STEP: Creating a kubernetes client 03/07/23 02:41:05.342 +Mar 7 02:41:05.342: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 +STEP: Building a namespace api object, basename container-runtime 03/07/23 02:41:05.343 +STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 02:41:05.356 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 02:41:05.362 +[It] should report termination message from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] + test/e2e/common/node/runtime.go:247 +STEP: create the container 03/07/23 02:41:05.364 +STEP: wait for the container to reach Succeeded 03/07/23 02:41:05.371 +STEP: get the container status 03/07/23 02:41:09.39 +STEP: the container should be terminated 03/07/23 02:41:09.392 +STEP: the termination message should be set 03/07/23 02:41:09.392 +Mar 7 02:41:09.392: INFO: Expected: &{OK} to match Container's Termination Message: OK -- +STEP: delete the container 03/07/23 02:41:09.392 +[AfterEach] [sig-node] Container Runtime + test/e2e/framework/framework.go:187 +Mar 7 02:41:09.401: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "container-runtime-8303" for this suite. 
03/07/23 02:41:09.409 +{"msg":"PASSED [sig-node] Container Runtime blackbox test on terminated container should report termination message from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","completed":52,"skipped":740,"failed":0} +------------------------------ +• [4.073 seconds] +[sig-node] Container Runtime +test/e2e/common/node/framework.go:23 + blackbox test + test/e2e/common/node/runtime.go:43 + on terminated container + test/e2e/common/node/runtime.go:136 + should report termination message from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] + test/e2e/common/node/runtime.go:247 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-node] Container Runtime + test/e2e/framework/framework.go:186 + STEP: Creating a kubernetes client 03/07/23 02:41:05.342 + Mar 7 02:41:05.342: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 + STEP: Building a namespace api object, basename container-runtime 03/07/23 02:41:05.343 + STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 02:41:05.356 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 02:41:05.362 + [It] should report termination message from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] + test/e2e/common/node/runtime.go:247 + STEP: create the container 03/07/23 02:41:05.364 + STEP: wait for the container to reach Succeeded 03/07/23 02:41:05.371 + STEP: get the container status 03/07/23 02:41:09.39 + STEP: the container should be terminated 03/07/23 02:41:09.392 + STEP: the termination message should be set 03/07/23 02:41:09.392 + Mar 7 02:41:09.392: INFO: Expected: &{OK} to match Container's Termination Message: OK -- + STEP: delete the container 03/07/23 02:41:09.392 + [AfterEach] [sig-node] Container Runtime + test/e2e/framework/framework.go:187 + Mar 7 02:41:09.401: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + STEP: Destroying namespace "container-runtime-8303" for this suite. 03/07/23 02:41:09.409 + << End Captured GinkgoWriter Output +------------------------------ +SSSSS +------------------------------ +[sig-node] Container Lifecycle Hook when create a pod with lifecycle hook + should execute prestop http hook properly [NodeConformance] [Conformance] + test/e2e/common/node/lifecycle_hook.go:152 +[BeforeEach] [sig-node] Container Lifecycle Hook + test/e2e/framework/framework.go:186 +STEP: Creating a kubernetes client 03/07/23 02:41:09.415 +Mar 7 02:41:09.415: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 +STEP: Building a namespace api object, basename container-lifecycle-hook 03/07/23 02:41:09.415 +STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 02:41:09.426 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 02:41:09.428 +[BeforeEach] when create a pod with lifecycle hook + test/e2e/common/node/lifecycle_hook.go:55 +STEP: create the container to handle the HTTPGet hook request. 03/07/23 02:41:09.433 +Mar 7 02:41:09.439: INFO: Waiting up to 5m0s for pod "pod-handle-http-request" in namespace "container-lifecycle-hook-9572" to be "running and ready" +Mar 7 02:41:09.442: INFO: Pod "pod-handle-http-request": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.189502ms +Mar 7 02:41:09.442: INFO: The phase of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true) +Mar 7 02:41:11.445: INFO: Pod "pod-handle-http-request": Phase="Running", Reason="", readiness=true. Elapsed: 2.006127963s +Mar 7 02:41:11.446: INFO: The phase of Pod pod-handle-http-request is Running (Ready = true) +Mar 7 02:41:11.446: INFO: Pod "pod-handle-http-request" satisfied condition "running and ready" +[It] should execute prestop http hook properly [NodeConformance] [Conformance] + test/e2e/common/node/lifecycle_hook.go:152 +STEP: create the pod with lifecycle hook 03/07/23 02:41:11.447 +Mar 7 02:41:11.452: INFO: Waiting up to 5m0s for pod "pod-with-prestop-http-hook" in namespace "container-lifecycle-hook-9572" to be "running and ready" +Mar 7 02:41:11.455: INFO: Pod "pod-with-prestop-http-hook": Phase="Pending", Reason="", readiness=false. Elapsed: 2.990443ms +Mar 7 02:41:11.455: INFO: The phase of Pod pod-with-prestop-http-hook is Pending, waiting for it to be Running (with Ready = true) +Mar 7 02:41:13.458: INFO: Pod "pod-with-prestop-http-hook": Phase="Running", Reason="", readiness=true. Elapsed: 2.006155156s +Mar 7 02:41:13.458: INFO: The phase of Pod pod-with-prestop-http-hook is Running (Ready = true) +Mar 7 02:41:13.458: INFO: Pod "pod-with-prestop-http-hook" satisfied condition "running and ready" +STEP: delete the pod with lifecycle hook 03/07/23 02:41:13.46 +Mar 7 02:41:13.487: INFO: Waiting for pod pod-with-prestop-http-hook to disappear +Mar 7 02:41:13.490: INFO: Pod pod-with-prestop-http-hook still exists +Mar 7 02:41:15.491: INFO: Waiting for pod pod-with-prestop-http-hook to disappear +Mar 7 02:41:15.494: INFO: Pod pod-with-prestop-http-hook still exists +Mar 7 02:41:17.490: INFO: Waiting for pod pod-with-prestop-http-hook to disappear +Mar 7 02:41:17.493: INFO: Pod pod-with-prestop-http-hook no longer exists +STEP: check prestop hook 03/07/23 02:41:17.493 +[AfterEach] [sig-node] Container Lifecycle Hook + test/e2e/framework/framework.go:187 +Mar 7 02:41:17.499: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "container-lifecycle-hook-9572" for this suite. 03/07/23 02:41:17.501 +{"msg":"PASSED [sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance]","completed":53,"skipped":745,"failed":0} +------------------------------ +• [SLOW TEST] [8.093 seconds] +[sig-node] Container Lifecycle Hook +test/e2e/common/node/framework.go:23 + when create a pod with lifecycle hook + test/e2e/common/node/lifecycle_hook.go:46 + should execute prestop http hook properly [NodeConformance] [Conformance] + test/e2e/common/node/lifecycle_hook.go:152 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-node] Container Lifecycle Hook + test/e2e/framework/framework.go:186 + STEP: Creating a kubernetes client 03/07/23 02:41:09.415 + Mar 7 02:41:09.415: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 + STEP: Building a namespace api object, basename container-lifecycle-hook 03/07/23 02:41:09.415 + STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 02:41:09.426 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 02:41:09.428 + [BeforeEach] when create a pod with lifecycle hook + test/e2e/common/node/lifecycle_hook.go:55 + STEP: create the container to handle the HTTPGet hook request. 
03/07/23 02:41:09.433 + Mar 7 02:41:09.439: INFO: Waiting up to 5m0s for pod "pod-handle-http-request" in namespace "container-lifecycle-hook-9572" to be "running and ready" + Mar 7 02:41:09.442: INFO: Pod "pod-handle-http-request": Phase="Pending", Reason="", readiness=false. Elapsed: 2.189502ms + Mar 7 02:41:09.442: INFO: The phase of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true) + Mar 7 02:41:11.445: INFO: Pod "pod-handle-http-request": Phase="Running", Reason="", readiness=true. Elapsed: 2.006127963s + Mar 7 02:41:11.446: INFO: The phase of Pod pod-handle-http-request is Running (Ready = true) + Mar 7 02:41:11.446: INFO: Pod "pod-handle-http-request" satisfied condition "running and ready" + [It] should execute prestop http hook properly [NodeConformance] [Conformance] + test/e2e/common/node/lifecycle_hook.go:152 + STEP: create the pod with lifecycle hook 03/07/23 02:41:11.447 + Mar 7 02:41:11.452: INFO: Waiting up to 5m0s for pod "pod-with-prestop-http-hook" in namespace "container-lifecycle-hook-9572" to be "running and ready" + Mar 7 02:41:11.455: INFO: Pod "pod-with-prestop-http-hook": Phase="Pending", Reason="", readiness=false. Elapsed: 2.990443ms + Mar 7 02:41:11.455: INFO: The phase of Pod pod-with-prestop-http-hook is Pending, waiting for it to be Running (with Ready = true) + Mar 7 02:41:13.458: INFO: Pod "pod-with-prestop-http-hook": Phase="Running", Reason="", readiness=true. Elapsed: 2.006155156s + Mar 7 02:41:13.458: INFO: The phase of Pod pod-with-prestop-http-hook is Running (Ready = true) + Mar 7 02:41:13.458: INFO: Pod "pod-with-prestop-http-hook" satisfied condition "running and ready" + STEP: delete the pod with lifecycle hook 03/07/23 02:41:13.46 + Mar 7 02:41:13.487: INFO: Waiting for pod pod-with-prestop-http-hook to disappear + Mar 7 02:41:13.490: INFO: Pod pod-with-prestop-http-hook still exists + Mar 7 02:41:15.491: INFO: Waiting for pod pod-with-prestop-http-hook to disappear + Mar 7 02:41:15.494: INFO: Pod pod-with-prestop-http-hook still exists + Mar 7 02:41:17.490: INFO: Waiting for pod pod-with-prestop-http-hook to disappear + Mar 7 02:41:17.493: INFO: Pod pod-with-prestop-http-hook no longer exists + STEP: check prestop hook 03/07/23 02:41:17.493 + [AfterEach] [sig-node] Container Lifecycle Hook + test/e2e/framework/framework.go:187 + Mar 7 02:41:17.499: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + STEP: Destroying namespace "container-lifecycle-hook-9572" for this suite. 
03/07/23 02:41:17.501 + << End Captured GinkgoWriter Output +------------------------------ +[sig-storage] Projected configMap + should be consumable from pods in volume with mappings [NodeConformance] [Conformance] + test/e2e/common/storage/projected_configmap.go:88 +[BeforeEach] [sig-storage] Projected configMap + test/e2e/framework/framework.go:186 +STEP: Creating a kubernetes client 03/07/23 02:41:17.508 +Mar 7 02:41:17.508: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 +STEP: Building a namespace api object, basename projected 03/07/23 02:41:17.509 +STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 02:41:17.522 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 02:41:17.525 +[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] + test/e2e/common/storage/projected_configmap.go:88 +STEP: Creating configMap with name projected-configmap-test-volume-map-28bbc178-491e-4539-b475-203fe0623366 03/07/23 02:41:17.527 +STEP: Creating a pod to test consume configMaps 03/07/23 02:41:17.53 +Mar 7 02:41:17.537: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-bae15acb-29b3-49e2-ba7d-034f4323ce01" in namespace "projected-6725" to be "Succeeded or Failed" +Mar 7 02:41:17.543: INFO: Pod "pod-projected-configmaps-bae15acb-29b3-49e2-ba7d-034f4323ce01": Phase="Pending", Reason="", readiness=false. Elapsed: 5.794542ms +Mar 7 02:41:19.546: INFO: Pod "pod-projected-configmaps-bae15acb-29b3-49e2-ba7d-034f4323ce01": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009207328s +Mar 7 02:41:21.546: INFO: Pod "pod-projected-configmaps-bae15acb-29b3-49e2-ba7d-034f4323ce01": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.009465008s +STEP: Saw pod success 03/07/23 02:41:21.546 +Mar 7 02:41:21.546: INFO: Pod "pod-projected-configmaps-bae15acb-29b3-49e2-ba7d-034f4323ce01" satisfied condition "Succeeded or Failed" +Mar 7 02:41:21.548: INFO: Trying to get logs from node node-2 pod pod-projected-configmaps-bae15acb-29b3-49e2-ba7d-034f4323ce01 container agnhost-container: +STEP: delete the pod 03/07/23 02:41:21.553 +Mar 7 02:41:21.562: INFO: Waiting for pod pod-projected-configmaps-bae15acb-29b3-49e2-ba7d-034f4323ce01 to disappear +Mar 7 02:41:21.564: INFO: Pod pod-projected-configmaps-bae15acb-29b3-49e2-ba7d-034f4323ce01 no longer exists +[AfterEach] [sig-storage] Projected configMap + test/e2e/framework/framework.go:187 +Mar 7 02:41:21.564: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "projected-6725" for this suite. 
03/07/23 02:41:21.567 +{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","completed":54,"skipped":745,"failed":0} +------------------------------ +• [4.064 seconds] +[sig-storage] Projected configMap +test/e2e/common/storage/framework.go:23 + should be consumable from pods in volume with mappings [NodeConformance] [Conformance] + test/e2e/common/storage/projected_configmap.go:88 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-storage] Projected configMap + test/e2e/framework/framework.go:186 + STEP: Creating a kubernetes client 03/07/23 02:41:17.508 + Mar 7 02:41:17.508: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 + STEP: Building a namespace api object, basename projected 03/07/23 02:41:17.509 + STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 02:41:17.522 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 02:41:17.525 + [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] + test/e2e/common/storage/projected_configmap.go:88 + STEP: Creating configMap with name projected-configmap-test-volume-map-28bbc178-491e-4539-b475-203fe0623366 03/07/23 02:41:17.527 + STEP: Creating a pod to test consume configMaps 03/07/23 02:41:17.53 + Mar 7 02:41:17.537: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-bae15acb-29b3-49e2-ba7d-034f4323ce01" in namespace "projected-6725" to be "Succeeded or Failed" + Mar 7 02:41:17.543: INFO: Pod "pod-projected-configmaps-bae15acb-29b3-49e2-ba7d-034f4323ce01": Phase="Pending", Reason="", readiness=false. Elapsed: 5.794542ms + Mar 7 02:41:19.546: INFO: Pod "pod-projected-configmaps-bae15acb-29b3-49e2-ba7d-034f4323ce01": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009207328s + Mar 7 02:41:21.546: INFO: Pod "pod-projected-configmaps-bae15acb-29b3-49e2-ba7d-034f4323ce01": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.009465008s + STEP: Saw pod success 03/07/23 02:41:21.546 + Mar 7 02:41:21.546: INFO: Pod "pod-projected-configmaps-bae15acb-29b3-49e2-ba7d-034f4323ce01" satisfied condition "Succeeded or Failed" + Mar 7 02:41:21.548: INFO: Trying to get logs from node node-2 pod pod-projected-configmaps-bae15acb-29b3-49e2-ba7d-034f4323ce01 container agnhost-container: + STEP: delete the pod 03/07/23 02:41:21.553 + Mar 7 02:41:21.562: INFO: Waiting for pod pod-projected-configmaps-bae15acb-29b3-49e2-ba7d-034f4323ce01 to disappear + Mar 7 02:41:21.564: INFO: Pod pod-projected-configmaps-bae15acb-29b3-49e2-ba7d-034f4323ce01 no longer exists + [AfterEach] [sig-storage] Projected configMap + test/e2e/framework/framework.go:187 + Mar 7 02:41:21.564: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + STEP: Destroying namespace "projected-6725" for this suite. 
03/07/23 02:41:21.567 + << End Captured GinkgoWriter Output +------------------------------ +SS +------------------------------ +[sig-storage] EmptyDir volumes + volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/common/storage/empty_dir.go:86 +[BeforeEach] [sig-storage] EmptyDir volumes + test/e2e/framework/framework.go:186 +STEP: Creating a kubernetes client 03/07/23 02:41:21.572 +Mar 7 02:41:21.572: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 +STEP: Building a namespace api object, basename emptydir 03/07/23 02:41:21.573 +STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 02:41:21.586 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 02:41:21.588 +[It] volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/common/storage/empty_dir.go:86 +STEP: Creating a pod to test emptydir volume type on tmpfs 03/07/23 02:41:21.59 +Mar 7 02:41:21.596: INFO: Waiting up to 5m0s for pod "pod-2e1ee00c-27fe-4b82-b3cc-229ca7faa02e" in namespace "emptydir-6643" to be "Succeeded or Failed" +Mar 7 02:41:21.599: INFO: Pod "pod-2e1ee00c-27fe-4b82-b3cc-229ca7faa02e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.543449ms +Mar 7 02:41:23.602: INFO: Pod "pod-2e1ee00c-27fe-4b82-b3cc-229ca7faa02e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005566176s +Mar 7 02:41:25.611: INFO: Pod "pod-2e1ee00c-27fe-4b82-b3cc-229ca7faa02e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.014611131s +STEP: Saw pod success 03/07/23 02:41:25.611 +Mar 7 02:41:25.611: INFO: Pod "pod-2e1ee00c-27fe-4b82-b3cc-229ca7faa02e" satisfied condition "Succeeded or Failed" +Mar 7 02:41:25.613: INFO: Trying to get logs from node node-2 pod pod-2e1ee00c-27fe-4b82-b3cc-229ca7faa02e container test-container: +STEP: delete the pod 03/07/23 02:41:25.617 +Mar 7 02:41:25.639: INFO: Waiting for pod pod-2e1ee00c-27fe-4b82-b3cc-229ca7faa02e to disappear +Mar 7 02:41:25.641: INFO: Pod pod-2e1ee00c-27fe-4b82-b3cc-229ca7faa02e no longer exists +[AfterEach] [sig-storage] EmptyDir volumes + test/e2e/framework/framework.go:187 +Mar 7 02:41:25.641: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "emptydir-6643" for this suite. 
03/07/23 02:41:25.644 +{"msg":"PASSED [sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","completed":55,"skipped":747,"failed":0} +------------------------------ +• [4.076 seconds] +[sig-storage] EmptyDir volumes +test/e2e/common/storage/framework.go:23 + volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/common/storage/empty_dir.go:86 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-storage] EmptyDir volumes + test/e2e/framework/framework.go:186 + STEP: Creating a kubernetes client 03/07/23 02:41:21.572 + Mar 7 02:41:21.572: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 + STEP: Building a namespace api object, basename emptydir 03/07/23 02:41:21.573 + STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 02:41:21.586 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 02:41:21.588 + [It] volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/common/storage/empty_dir.go:86 + STEP: Creating a pod to test emptydir volume type on tmpfs 03/07/23 02:41:21.59 + Mar 7 02:41:21.596: INFO: Waiting up to 5m0s for pod "pod-2e1ee00c-27fe-4b82-b3cc-229ca7faa02e" in namespace "emptydir-6643" to be "Succeeded or Failed" + Mar 7 02:41:21.599: INFO: Pod "pod-2e1ee00c-27fe-4b82-b3cc-229ca7faa02e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.543449ms + Mar 7 02:41:23.602: INFO: Pod "pod-2e1ee00c-27fe-4b82-b3cc-229ca7faa02e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005566176s + Mar 7 02:41:25.611: INFO: Pod "pod-2e1ee00c-27fe-4b82-b3cc-229ca7faa02e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.014611131s + STEP: Saw pod success 03/07/23 02:41:25.611 + Mar 7 02:41:25.611: INFO: Pod "pod-2e1ee00c-27fe-4b82-b3cc-229ca7faa02e" satisfied condition "Succeeded or Failed" + Mar 7 02:41:25.613: INFO: Trying to get logs from node node-2 pod pod-2e1ee00c-27fe-4b82-b3cc-229ca7faa02e container test-container: + STEP: delete the pod 03/07/23 02:41:25.617 + Mar 7 02:41:25.639: INFO: Waiting for pod pod-2e1ee00c-27fe-4b82-b3cc-229ca7faa02e to disappear + Mar 7 02:41:25.641: INFO: Pod pod-2e1ee00c-27fe-4b82-b3cc-229ca7faa02e no longer exists + [AfterEach] [sig-storage] EmptyDir volumes + test/e2e/framework/framework.go:187 + Mar 7 02:41:25.641: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + STEP: Destroying namespace "emptydir-6643" for this suite. 
03/07/23 02:41:25.644 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-storage] Projected configMap + should be consumable from pods in volume [NodeConformance] [Conformance] + test/e2e/common/storage/projected_configmap.go:46 +[BeforeEach] [sig-storage] Projected configMap + test/e2e/framework/framework.go:186 +STEP: Creating a kubernetes client 03/07/23 02:41:25.65 +Mar 7 02:41:25.650: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 +STEP: Building a namespace api object, basename projected 03/07/23 02:41:25.651 +STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 02:41:25.666 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 02:41:25.668 +[It] should be consumable from pods in volume [NodeConformance] [Conformance] + test/e2e/common/storage/projected_configmap.go:46 +STEP: Creating configMap with name projected-configmap-test-volume-78149a90-bf43-4422-89bc-6a52e0a7bee9 03/07/23 02:41:25.67 +STEP: Creating a pod to test consume configMaps 03/07/23 02:41:25.673 +Mar 7 02:41:25.679: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-790c92d2-a432-4ff0-9e4c-bdfbc6017b17" in namespace "projected-667" to be "Succeeded or Failed" +Mar 7 02:41:25.686: INFO: Pod "pod-projected-configmaps-790c92d2-a432-4ff0-9e4c-bdfbc6017b17": Phase="Pending", Reason="", readiness=false. Elapsed: 6.683117ms +Mar 7 02:41:27.689: INFO: Pod "pod-projected-configmaps-790c92d2-a432-4ff0-9e4c-bdfbc6017b17": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009969162s +Mar 7 02:41:29.690: INFO: Pod "pod-projected-configmaps-790c92d2-a432-4ff0-9e4c-bdfbc6017b17": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.01076077s +STEP: Saw pod success 03/07/23 02:41:29.69 +Mar 7 02:41:29.690: INFO: Pod "pod-projected-configmaps-790c92d2-a432-4ff0-9e4c-bdfbc6017b17" satisfied condition "Succeeded or Failed" +Mar 7 02:41:29.693: INFO: Trying to get logs from node node-2 pod pod-projected-configmaps-790c92d2-a432-4ff0-9e4c-bdfbc6017b17 container agnhost-container: +STEP: delete the pod 03/07/23 02:41:29.698 +Mar 7 02:41:29.725: INFO: Waiting for pod pod-projected-configmaps-790c92d2-a432-4ff0-9e4c-bdfbc6017b17 to disappear +Mar 7 02:41:29.727: INFO: Pod pod-projected-configmaps-790c92d2-a432-4ff0-9e4c-bdfbc6017b17 no longer exists +[AfterEach] [sig-storage] Projected configMap + test/e2e/framework/framework.go:187 +Mar 7 02:41:29.727: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "projected-667" for this suite. 
03/07/23 02:41:29.731 +{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume [NodeConformance] [Conformance]","completed":56,"skipped":795,"failed":0} +------------------------------ +• [4.086 seconds] +[sig-storage] Projected configMap +test/e2e/common/storage/framework.go:23 + should be consumable from pods in volume [NodeConformance] [Conformance] + test/e2e/common/storage/projected_configmap.go:46 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-storage] Projected configMap + test/e2e/framework/framework.go:186 + STEP: Creating a kubernetes client 03/07/23 02:41:25.65 + Mar 7 02:41:25.650: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 + STEP: Building a namespace api object, basename projected 03/07/23 02:41:25.651 + STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 02:41:25.666 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 02:41:25.668 + [It] should be consumable from pods in volume [NodeConformance] [Conformance] + test/e2e/common/storage/projected_configmap.go:46 + STEP: Creating configMap with name projected-configmap-test-volume-78149a90-bf43-4422-89bc-6a52e0a7bee9 03/07/23 02:41:25.67 + STEP: Creating a pod to test consume configMaps 03/07/23 02:41:25.673 + Mar 7 02:41:25.679: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-790c92d2-a432-4ff0-9e4c-bdfbc6017b17" in namespace "projected-667" to be "Succeeded or Failed" + Mar 7 02:41:25.686: INFO: Pod "pod-projected-configmaps-790c92d2-a432-4ff0-9e4c-bdfbc6017b17": Phase="Pending", Reason="", readiness=false. Elapsed: 6.683117ms + Mar 7 02:41:27.689: INFO: Pod "pod-projected-configmaps-790c92d2-a432-4ff0-9e4c-bdfbc6017b17": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009969162s + Mar 7 02:41:29.690: INFO: Pod "pod-projected-configmaps-790c92d2-a432-4ff0-9e4c-bdfbc6017b17": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.01076077s + STEP: Saw pod success 03/07/23 02:41:29.69 + Mar 7 02:41:29.690: INFO: Pod "pod-projected-configmaps-790c92d2-a432-4ff0-9e4c-bdfbc6017b17" satisfied condition "Succeeded or Failed" + Mar 7 02:41:29.693: INFO: Trying to get logs from node node-2 pod pod-projected-configmaps-790c92d2-a432-4ff0-9e4c-bdfbc6017b17 container agnhost-container: + STEP: delete the pod 03/07/23 02:41:29.698 + Mar 7 02:41:29.725: INFO: Waiting for pod pod-projected-configmaps-790c92d2-a432-4ff0-9e4c-bdfbc6017b17 to disappear + Mar 7 02:41:29.727: INFO: Pod pod-projected-configmaps-790c92d2-a432-4ff0-9e4c-bdfbc6017b17 no longer exists + [AfterEach] [sig-storage] Projected configMap + test/e2e/framework/framework.go:187 + Mar 7 02:41:29.727: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + STEP: Destroying namespace "projected-667" for this suite. 
03/07/23 02:41:29.731 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-storage] Projected secret + optional updates should be reflected in volume [NodeConformance] [Conformance] + test/e2e/common/storage/projected_secret.go:214 +[BeforeEach] [sig-storage] Projected secret + test/e2e/framework/framework.go:186 +STEP: Creating a kubernetes client 03/07/23 02:41:29.737 +Mar 7 02:41:29.737: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 +STEP: Building a namespace api object, basename projected 03/07/23 02:41:29.738 +STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 02:41:29.749 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 02:41:29.751 +[It] optional updates should be reflected in volume [NodeConformance] [Conformance] + test/e2e/common/storage/projected_secret.go:214 +STEP: Creating secret with name s-test-opt-del-5acae011-91a2-49e0-82ac-6c9de092049d 03/07/23 02:41:29.756 +STEP: Creating secret with name s-test-opt-upd-42960951-0aff-44d4-89ea-290675ddd5dc 03/07/23 02:41:29.759 +STEP: Creating the pod 03/07/23 02:41:29.762 +Mar 7 02:41:29.770: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-10ffe5f2-b82d-4222-b57e-e9103c91434f" in namespace "projected-7352" to be "running and ready" +Mar 7 02:41:29.776: INFO: Pod "pod-projected-secrets-10ffe5f2-b82d-4222-b57e-e9103c91434f": Phase="Pending", Reason="", readiness=false. Elapsed: 6.066595ms +Mar 7 02:41:29.776: INFO: The phase of Pod pod-projected-secrets-10ffe5f2-b82d-4222-b57e-e9103c91434f is Pending, waiting for it to be Running (with Ready = true) +Mar 7 02:41:31.780: INFO: Pod "pod-projected-secrets-10ffe5f2-b82d-4222-b57e-e9103c91434f": Phase="Running", Reason="", readiness=true. Elapsed: 2.009619024s +Mar 7 02:41:31.780: INFO: The phase of Pod pod-projected-secrets-10ffe5f2-b82d-4222-b57e-e9103c91434f is Running (Ready = true) +Mar 7 02:41:31.780: INFO: Pod "pod-projected-secrets-10ffe5f2-b82d-4222-b57e-e9103c91434f" satisfied condition "running and ready" +STEP: Deleting secret s-test-opt-del-5acae011-91a2-49e0-82ac-6c9de092049d 03/07/23 02:41:31.796 +STEP: Updating secret s-test-opt-upd-42960951-0aff-44d4-89ea-290675ddd5dc 03/07/23 02:41:31.801 +STEP: Creating secret with name s-test-opt-create-57f11625-95ed-44e5-b4f4-98860638b148 03/07/23 02:41:31.804 +STEP: waiting to observe update in volume 03/07/23 02:41:31.808 +[AfterEach] [sig-storage] Projected secret + test/e2e/framework/framework.go:187 +Mar 7 02:41:33.832: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "projected-7352" for this suite. 
03/07/23 02:41:33.835 +{"msg":"PASSED [sig-storage] Projected secret optional updates should be reflected in volume [NodeConformance] [Conformance]","completed":57,"skipped":814,"failed":0} +------------------------------ +• [4.103 seconds] +[sig-storage] Projected secret +test/e2e/common/storage/framework.go:23 + optional updates should be reflected in volume [NodeConformance] [Conformance] + test/e2e/common/storage/projected_secret.go:214 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-storage] Projected secret + test/e2e/framework/framework.go:186 + STEP: Creating a kubernetes client 03/07/23 02:41:29.737 + Mar 7 02:41:29.737: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 + STEP: Building a namespace api object, basename projected 03/07/23 02:41:29.738 + STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 02:41:29.749 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 02:41:29.751 + [It] optional updates should be reflected in volume [NodeConformance] [Conformance] + test/e2e/common/storage/projected_secret.go:214 + STEP: Creating secret with name s-test-opt-del-5acae011-91a2-49e0-82ac-6c9de092049d 03/07/23 02:41:29.756 + STEP: Creating secret with name s-test-opt-upd-42960951-0aff-44d4-89ea-290675ddd5dc 03/07/23 02:41:29.759 + STEP: Creating the pod 03/07/23 02:41:29.762 + Mar 7 02:41:29.770: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-10ffe5f2-b82d-4222-b57e-e9103c91434f" in namespace "projected-7352" to be "running and ready" + Mar 7 02:41:29.776: INFO: Pod "pod-projected-secrets-10ffe5f2-b82d-4222-b57e-e9103c91434f": Phase="Pending", Reason="", readiness=false. Elapsed: 6.066595ms + Mar 7 02:41:29.776: INFO: The phase of Pod pod-projected-secrets-10ffe5f2-b82d-4222-b57e-e9103c91434f is Pending, waiting for it to be Running (with Ready = true) + Mar 7 02:41:31.780: INFO: Pod "pod-projected-secrets-10ffe5f2-b82d-4222-b57e-e9103c91434f": Phase="Running", Reason="", readiness=true. Elapsed: 2.009619024s + Mar 7 02:41:31.780: INFO: The phase of Pod pod-projected-secrets-10ffe5f2-b82d-4222-b57e-e9103c91434f is Running (Ready = true) + Mar 7 02:41:31.780: INFO: Pod "pod-projected-secrets-10ffe5f2-b82d-4222-b57e-e9103c91434f" satisfied condition "running and ready" + STEP: Deleting secret s-test-opt-del-5acae011-91a2-49e0-82ac-6c9de092049d 03/07/23 02:41:31.796 + STEP: Updating secret s-test-opt-upd-42960951-0aff-44d4-89ea-290675ddd5dc 03/07/23 02:41:31.801 + STEP: Creating secret with name s-test-opt-create-57f11625-95ed-44e5-b4f4-98860638b148 03/07/23 02:41:31.804 + STEP: waiting to observe update in volume 03/07/23 02:41:31.808 + [AfterEach] [sig-storage] Projected secret + test/e2e/framework/framework.go:187 + Mar 7 02:41:33.832: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + STEP: Destroying namespace "projected-7352" for this suite. 
03/07/23 02:41:33.835 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-network] DNS + should provide DNS for services [Conformance] + test/e2e/network/dns.go:137 +[BeforeEach] [sig-network] DNS + test/e2e/framework/framework.go:186 +STEP: Creating a kubernetes client 03/07/23 02:41:33.844 +Mar 7 02:41:33.844: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 +STEP: Building a namespace api object, basename dns 03/07/23 02:41:33.844 +STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 02:41:33.856 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 02:41:33.858 +[It] should provide DNS for services [Conformance] + test/e2e/network/dns.go:137 +STEP: Creating a test headless service 03/07/23 02:41:33.86 +STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-6319.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-6319.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-6319.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-6319.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-6319.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-6319.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-6319.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-6319.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-6319.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-6319.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-6319.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-6319.svc.cluster.local;check="$$(dig +notcp +noall +answer +search 72.118.105.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.105.118.72_udp@PTR;check="$$(dig +tcp +noall +answer +search 72.118.105.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.105.118.72_tcp@PTR;sleep 1; done + 03/07/23 02:41:33.876 +STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-6319.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-6319.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-6319.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-6319.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-6319.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-6319.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-6319.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-6319.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-6319.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-6319.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-6319.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-6319.svc.cluster.local;check="$$(dig +notcp +noall +answer +search 72.118.105.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.105.118.72_udp@PTR;check="$$(dig +tcp +noall +answer +search 72.118.105.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.105.118.72_tcp@PTR;sleep 1; done + 03/07/23 02:41:33.877 +STEP: creating a pod to probe DNS 03/07/23 02:41:33.877 +STEP: submitting the pod to kubernetes 03/07/23 02:41:33.877 +Mar 7 02:41:33.887: INFO: Waiting up to 15m0s for pod "dns-test-82afb61e-4aaa-4883-a811-1baf6dbd822e" in namespace "dns-6319" to be "running" +Mar 7 02:41:33.890: INFO: Pod "dns-test-82afb61e-4aaa-4883-a811-1baf6dbd822e": Phase="Pending", Reason="", readiness=false. Elapsed: 3.035223ms +Mar 7 02:41:35.893: INFO: Pod "dns-test-82afb61e-4aaa-4883-a811-1baf6dbd822e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006402136s +Mar 7 02:41:37.895: INFO: Pod "dns-test-82afb61e-4aaa-4883-a811-1baf6dbd822e": Phase="Pending", Reason="", readiness=false. Elapsed: 4.007960343s +Mar 7 02:41:39.893: INFO: Pod "dns-test-82afb61e-4aaa-4883-a811-1baf6dbd822e": Phase="Pending", Reason="", readiness=false. Elapsed: 6.006839884s +Mar 7 02:41:41.893: INFO: Pod "dns-test-82afb61e-4aaa-4883-a811-1baf6dbd822e": Phase="Pending", Reason="", readiness=false. Elapsed: 8.006274111s +Mar 7 02:41:43.892: INFO: Pod "dns-test-82afb61e-4aaa-4883-a811-1baf6dbd822e": Phase="Running", Reason="", readiness=true. 
Elapsed: 10.005779566s +Mar 7 02:41:43.892: INFO: Pod "dns-test-82afb61e-4aaa-4883-a811-1baf6dbd822e" satisfied condition "running" +STEP: retrieving the pod 03/07/23 02:41:43.892 +STEP: looking for the results for each expected name from probers 03/07/23 02:41:43.895 +Mar 7 02:41:43.898: INFO: Unable to read wheezy_udp@dns-test-service.dns-6319.svc.cluster.local from pod dns-6319/dns-test-82afb61e-4aaa-4883-a811-1baf6dbd822e: the server could not find the requested resource (get pods dns-test-82afb61e-4aaa-4883-a811-1baf6dbd822e) +Mar 7 02:41:43.900: INFO: Unable to read wheezy_tcp@dns-test-service.dns-6319.svc.cluster.local from pod dns-6319/dns-test-82afb61e-4aaa-4883-a811-1baf6dbd822e: the server could not find the requested resource (get pods dns-test-82afb61e-4aaa-4883-a811-1baf6dbd822e) +Mar 7 02:41:43.905: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-6319.svc.cluster.local from pod dns-6319/dns-test-82afb61e-4aaa-4883-a811-1baf6dbd822e: the server could not find the requested resource (get pods dns-test-82afb61e-4aaa-4883-a811-1baf6dbd822e) +Mar 7 02:41:43.911: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-6319.svc.cluster.local from pod dns-6319/dns-test-82afb61e-4aaa-4883-a811-1baf6dbd822e: the server could not find the requested resource (get pods dns-test-82afb61e-4aaa-4883-a811-1baf6dbd822e) +Mar 7 02:41:43.922: INFO: Unable to read jessie_udp@dns-test-service.dns-6319.svc.cluster.local from pod dns-6319/dns-test-82afb61e-4aaa-4883-a811-1baf6dbd822e: the server could not find the requested resource (get pods dns-test-82afb61e-4aaa-4883-a811-1baf6dbd822e) +Mar 7 02:41:43.924: INFO: Unable to read jessie_tcp@dns-test-service.dns-6319.svc.cluster.local from pod dns-6319/dns-test-82afb61e-4aaa-4883-a811-1baf6dbd822e: the server could not find the requested resource (get pods dns-test-82afb61e-4aaa-4883-a811-1baf6dbd822e) +Mar 7 02:41:43.926: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-6319.svc.cluster.local from pod dns-6319/dns-test-82afb61e-4aaa-4883-a811-1baf6dbd822e: the server could not find the requested resource (get pods dns-test-82afb61e-4aaa-4883-a811-1baf6dbd822e) +Mar 7 02:41:43.928: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-6319.svc.cluster.local from pod dns-6319/dns-test-82afb61e-4aaa-4883-a811-1baf6dbd822e: the server could not find the requested resource (get pods dns-test-82afb61e-4aaa-4883-a811-1baf6dbd822e) +Mar 7 02:41:43.937: INFO: Lookups using dns-6319/dns-test-82afb61e-4aaa-4883-a811-1baf6dbd822e failed for: [wheezy_udp@dns-test-service.dns-6319.svc.cluster.local wheezy_tcp@dns-test-service.dns-6319.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-6319.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-6319.svc.cluster.local jessie_udp@dns-test-service.dns-6319.svc.cluster.local jessie_tcp@dns-test-service.dns-6319.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-6319.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-6319.svc.cluster.local] + +Mar 7 02:41:48.942: INFO: Unable to read wheezy_udp@dns-test-service.dns-6319.svc.cluster.local from pod dns-6319/dns-test-82afb61e-4aaa-4883-a811-1baf6dbd822e: the server could not find the requested resource (get pods dns-test-82afb61e-4aaa-4883-a811-1baf6dbd822e) +Mar 7 02:41:48.945: INFO: Unable to read wheezy_tcp@dns-test-service.dns-6319.svc.cluster.local from pod dns-6319/dns-test-82afb61e-4aaa-4883-a811-1baf6dbd822e: the server could not find the requested resource (get pods 
dns-test-82afb61e-4aaa-4883-a811-1baf6dbd822e) +Mar 7 02:41:48.947: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-6319.svc.cluster.local from pod dns-6319/dns-test-82afb61e-4aaa-4883-a811-1baf6dbd822e: the server could not find the requested resource (get pods dns-test-82afb61e-4aaa-4883-a811-1baf6dbd822e) +Mar 7 02:41:48.949: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-6319.svc.cluster.local from pod dns-6319/dns-test-82afb61e-4aaa-4883-a811-1baf6dbd822e: the server could not find the requested resource (get pods dns-test-82afb61e-4aaa-4883-a811-1baf6dbd822e) +Mar 7 02:41:48.961: INFO: Unable to read jessie_udp@dns-test-service.dns-6319.svc.cluster.local from pod dns-6319/dns-test-82afb61e-4aaa-4883-a811-1baf6dbd822e: the server could not find the requested resource (get pods dns-test-82afb61e-4aaa-4883-a811-1baf6dbd822e) +Mar 7 02:41:48.963: INFO: Unable to read jessie_tcp@dns-test-service.dns-6319.svc.cluster.local from pod dns-6319/dns-test-82afb61e-4aaa-4883-a811-1baf6dbd822e: the server could not find the requested resource (get pods dns-test-82afb61e-4aaa-4883-a811-1baf6dbd822e) +Mar 7 02:41:48.965: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-6319.svc.cluster.local from pod dns-6319/dns-test-82afb61e-4aaa-4883-a811-1baf6dbd822e: the server could not find the requested resource (get pods dns-test-82afb61e-4aaa-4883-a811-1baf6dbd822e) +Mar 7 02:41:48.968: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-6319.svc.cluster.local from pod dns-6319/dns-test-82afb61e-4aaa-4883-a811-1baf6dbd822e: the server could not find the requested resource (get pods dns-test-82afb61e-4aaa-4883-a811-1baf6dbd822e) +Mar 7 02:41:48.977: INFO: Lookups using dns-6319/dns-test-82afb61e-4aaa-4883-a811-1baf6dbd822e failed for: [wheezy_udp@dns-test-service.dns-6319.svc.cluster.local wheezy_tcp@dns-test-service.dns-6319.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-6319.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-6319.svc.cluster.local jessie_udp@dns-test-service.dns-6319.svc.cluster.local jessie_tcp@dns-test-service.dns-6319.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-6319.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-6319.svc.cluster.local] + +Mar 7 02:41:53.945: INFO: Unable to read wheezy_udp@dns-test-service.dns-6319.svc.cluster.local from pod dns-6319/dns-test-82afb61e-4aaa-4883-a811-1baf6dbd822e: the server could not find the requested resource (get pods dns-test-82afb61e-4aaa-4883-a811-1baf6dbd822e) +Mar 7 02:41:53.947: INFO: Unable to read wheezy_tcp@dns-test-service.dns-6319.svc.cluster.local from pod dns-6319/dns-test-82afb61e-4aaa-4883-a811-1baf6dbd822e: the server could not find the requested resource (get pods dns-test-82afb61e-4aaa-4883-a811-1baf6dbd822e) +Mar 7 02:41:53.950: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-6319.svc.cluster.local from pod dns-6319/dns-test-82afb61e-4aaa-4883-a811-1baf6dbd822e: the server could not find the requested resource (get pods dns-test-82afb61e-4aaa-4883-a811-1baf6dbd822e) +Mar 7 02:41:53.952: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-6319.svc.cluster.local from pod dns-6319/dns-test-82afb61e-4aaa-4883-a811-1baf6dbd822e: the server could not find the requested resource (get pods dns-test-82afb61e-4aaa-4883-a811-1baf6dbd822e) +Mar 7 02:41:53.963: INFO: Unable to read jessie_udp@dns-test-service.dns-6319.svc.cluster.local from pod dns-6319/dns-test-82afb61e-4aaa-4883-a811-1baf6dbd822e: the 
server could not find the requested resource (get pods dns-test-82afb61e-4aaa-4883-a811-1baf6dbd822e) +Mar 7 02:41:53.965: INFO: Unable to read jessie_tcp@dns-test-service.dns-6319.svc.cluster.local from pod dns-6319/dns-test-82afb61e-4aaa-4883-a811-1baf6dbd822e: the server could not find the requested resource (get pods dns-test-82afb61e-4aaa-4883-a811-1baf6dbd822e) +Mar 7 02:41:53.967: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-6319.svc.cluster.local from pod dns-6319/dns-test-82afb61e-4aaa-4883-a811-1baf6dbd822e: the server could not find the requested resource (get pods dns-test-82afb61e-4aaa-4883-a811-1baf6dbd822e) +Mar 7 02:41:53.970: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-6319.svc.cluster.local from pod dns-6319/dns-test-82afb61e-4aaa-4883-a811-1baf6dbd822e: the server could not find the requested resource (get pods dns-test-82afb61e-4aaa-4883-a811-1baf6dbd822e) +Mar 7 02:41:53.978: INFO: Lookups using dns-6319/dns-test-82afb61e-4aaa-4883-a811-1baf6dbd822e failed for: [wheezy_udp@dns-test-service.dns-6319.svc.cluster.local wheezy_tcp@dns-test-service.dns-6319.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-6319.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-6319.svc.cluster.local jessie_udp@dns-test-service.dns-6319.svc.cluster.local jessie_tcp@dns-test-service.dns-6319.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-6319.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-6319.svc.cluster.local] + +Mar 7 02:41:58.942: INFO: Unable to read wheezy_udp@dns-test-service.dns-6319.svc.cluster.local from pod dns-6319/dns-test-82afb61e-4aaa-4883-a811-1baf6dbd822e: the server could not find the requested resource (get pods dns-test-82afb61e-4aaa-4883-a811-1baf6dbd822e) +Mar 7 02:41:58.945: INFO: Unable to read wheezy_tcp@dns-test-service.dns-6319.svc.cluster.local from pod dns-6319/dns-test-82afb61e-4aaa-4883-a811-1baf6dbd822e: the server could not find the requested resource (get pods dns-test-82afb61e-4aaa-4883-a811-1baf6dbd822e) +Mar 7 02:41:58.947: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-6319.svc.cluster.local from pod dns-6319/dns-test-82afb61e-4aaa-4883-a811-1baf6dbd822e: the server could not find the requested resource (get pods dns-test-82afb61e-4aaa-4883-a811-1baf6dbd822e) +Mar 7 02:41:58.950: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-6319.svc.cluster.local from pod dns-6319/dns-test-82afb61e-4aaa-4883-a811-1baf6dbd822e: the server could not find the requested resource (get pods dns-test-82afb61e-4aaa-4883-a811-1baf6dbd822e) +Mar 7 02:41:58.961: INFO: Unable to read jessie_udp@dns-test-service.dns-6319.svc.cluster.local from pod dns-6319/dns-test-82afb61e-4aaa-4883-a811-1baf6dbd822e: the server could not find the requested resource (get pods dns-test-82afb61e-4aaa-4883-a811-1baf6dbd822e) +Mar 7 02:41:58.963: INFO: Unable to read jessie_tcp@dns-test-service.dns-6319.svc.cluster.local from pod dns-6319/dns-test-82afb61e-4aaa-4883-a811-1baf6dbd822e: the server could not find the requested resource (get pods dns-test-82afb61e-4aaa-4883-a811-1baf6dbd822e) +Mar 7 02:41:58.966: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-6319.svc.cluster.local from pod dns-6319/dns-test-82afb61e-4aaa-4883-a811-1baf6dbd822e: the server could not find the requested resource (get pods dns-test-82afb61e-4aaa-4883-a811-1baf6dbd822e) +Mar 7 02:41:58.969: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-6319.svc.cluster.local from pod 
dns-6319/dns-test-82afb61e-4aaa-4883-a811-1baf6dbd822e: the server could not find the requested resource (get pods dns-test-82afb61e-4aaa-4883-a811-1baf6dbd822e) +Mar 7 02:41:58.980: INFO: Lookups using dns-6319/dns-test-82afb61e-4aaa-4883-a811-1baf6dbd822e failed for: [wheezy_udp@dns-test-service.dns-6319.svc.cluster.local wheezy_tcp@dns-test-service.dns-6319.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-6319.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-6319.svc.cluster.local jessie_udp@dns-test-service.dns-6319.svc.cluster.local jessie_tcp@dns-test-service.dns-6319.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-6319.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-6319.svc.cluster.local] + +Mar 7 02:42:03.946: INFO: Unable to read wheezy_udp@dns-test-service.dns-6319.svc.cluster.local from pod dns-6319/dns-test-82afb61e-4aaa-4883-a811-1baf6dbd822e: the server could not find the requested resource (get pods dns-test-82afb61e-4aaa-4883-a811-1baf6dbd822e) +Mar 7 02:42:03.949: INFO: Unable to read wheezy_tcp@dns-test-service.dns-6319.svc.cluster.local from pod dns-6319/dns-test-82afb61e-4aaa-4883-a811-1baf6dbd822e: the server could not find the requested resource (get pods dns-test-82afb61e-4aaa-4883-a811-1baf6dbd822e) +Mar 7 02:42:03.951: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-6319.svc.cluster.local from pod dns-6319/dns-test-82afb61e-4aaa-4883-a811-1baf6dbd822e: the server could not find the requested resource (get pods dns-test-82afb61e-4aaa-4883-a811-1baf6dbd822e) +Mar 7 02:42:03.953: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-6319.svc.cluster.local from pod dns-6319/dns-test-82afb61e-4aaa-4883-a811-1baf6dbd822e: the server could not find the requested resource (get pods dns-test-82afb61e-4aaa-4883-a811-1baf6dbd822e) +Mar 7 02:42:03.966: INFO: Unable to read jessie_udp@dns-test-service.dns-6319.svc.cluster.local from pod dns-6319/dns-test-82afb61e-4aaa-4883-a811-1baf6dbd822e: the server could not find the requested resource (get pods dns-test-82afb61e-4aaa-4883-a811-1baf6dbd822e) +Mar 7 02:42:03.969: INFO: Unable to read jessie_tcp@dns-test-service.dns-6319.svc.cluster.local from pod dns-6319/dns-test-82afb61e-4aaa-4883-a811-1baf6dbd822e: the server could not find the requested resource (get pods dns-test-82afb61e-4aaa-4883-a811-1baf6dbd822e) +Mar 7 02:42:03.974: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-6319.svc.cluster.local from pod dns-6319/dns-test-82afb61e-4aaa-4883-a811-1baf6dbd822e: the server could not find the requested resource (get pods dns-test-82afb61e-4aaa-4883-a811-1baf6dbd822e) +Mar 7 02:42:03.978: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-6319.svc.cluster.local from pod dns-6319/dns-test-82afb61e-4aaa-4883-a811-1baf6dbd822e: the server could not find the requested resource (get pods dns-test-82afb61e-4aaa-4883-a811-1baf6dbd822e) +Mar 7 02:42:03.988: INFO: Lookups using dns-6319/dns-test-82afb61e-4aaa-4883-a811-1baf6dbd822e failed for: [wheezy_udp@dns-test-service.dns-6319.svc.cluster.local wheezy_tcp@dns-test-service.dns-6319.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-6319.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-6319.svc.cluster.local jessie_udp@dns-test-service.dns-6319.svc.cluster.local jessie_tcp@dns-test-service.dns-6319.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-6319.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-6319.svc.cluster.local] + +Mar 7 
02:42:08.975: INFO: DNS probes using dns-6319/dns-test-82afb61e-4aaa-4883-a811-1baf6dbd822e succeeded + +STEP: deleting the pod 03/07/23 02:42:08.975 +STEP: deleting the test service 03/07/23 02:42:09.012 +STEP: deleting the test headless service 03/07/23 02:42:09.067 +[AfterEach] [sig-network] DNS + test/e2e/framework/framework.go:187 +Mar 7 02:42:09.088: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "dns-6319" for this suite. 03/07/23 02:42:09.092 +{"msg":"PASSED [sig-network] DNS should provide DNS for services [Conformance]","completed":58,"skipped":888,"failed":0} +------------------------------ +• [SLOW TEST] [35.257 seconds] +[sig-network] DNS +test/e2e/network/common/framework.go:23 + should provide DNS for services [Conformance] + test/e2e/network/dns.go:137 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-network] DNS + test/e2e/framework/framework.go:186 + STEP: Creating a kubernetes client 03/07/23 02:41:33.844 + Mar 7 02:41:33.844: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 + STEP: Building a namespace api object, basename dns 03/07/23 02:41:33.844 + STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 02:41:33.856 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 02:41:33.858 + [It] should provide DNS for services [Conformance] + test/e2e/network/dns.go:137 + STEP: Creating a test headless service 03/07/23 02:41:33.86 + STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-6319.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-6319.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-6319.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-6319.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-6319.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-6319.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-6319.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-6319.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-6319.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-6319.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-6319.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-6319.svc.cluster.local;check="$$(dig +notcp +noall +answer +search 72.118.105.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.105.118.72_udp@PTR;check="$$(dig +tcp +noall +answer +search 72.118.105.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.105.118.72_tcp@PTR;sleep 1; done + 03/07/23 02:41:33.876 + STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-6319.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-6319.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-6319.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-6319.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-6319.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-6319.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-6319.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-6319.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-6319.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-6319.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-6319.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-6319.svc.cluster.local;check="$$(dig +notcp +noall +answer +search 72.118.105.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.105.118.72_udp@PTR;check="$$(dig +tcp +noall +answer +search 72.118.105.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.105.118.72_tcp@PTR;sleep 1; done + 03/07/23 02:41:33.877 + STEP: creating a pod to probe DNS 03/07/23 02:41:33.877 + STEP: submitting the pod to kubernetes 03/07/23 02:41:33.877 + Mar 7 02:41:33.887: INFO: Waiting up to 15m0s for pod "dns-test-82afb61e-4aaa-4883-a811-1baf6dbd822e" in namespace "dns-6319" to be "running" + Mar 7 02:41:33.890: INFO: Pod "dns-test-82afb61e-4aaa-4883-a811-1baf6dbd822e": Phase="Pending", Reason="", readiness=false. Elapsed: 3.035223ms + Mar 7 02:41:35.893: INFO: Pod "dns-test-82afb61e-4aaa-4883-a811-1baf6dbd822e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006402136s + Mar 7 02:41:37.895: INFO: Pod "dns-test-82afb61e-4aaa-4883-a811-1baf6dbd822e": Phase="Pending", Reason="", readiness=false. Elapsed: 4.007960343s + Mar 7 02:41:39.893: INFO: Pod "dns-test-82afb61e-4aaa-4883-a811-1baf6dbd822e": Phase="Pending", Reason="", readiness=false. Elapsed: 6.006839884s + Mar 7 02:41:41.893: INFO: Pod "dns-test-82afb61e-4aaa-4883-a811-1baf6dbd822e": Phase="Pending", Reason="", readiness=false. Elapsed: 8.006274111s + Mar 7 02:41:43.892: INFO: Pod "dns-test-82afb61e-4aaa-4883-a811-1baf6dbd822e": Phase="Running", Reason="", readiness=true. 
Elapsed: 10.005779566s + Mar 7 02:41:43.892: INFO: Pod "dns-test-82afb61e-4aaa-4883-a811-1baf6dbd822e" satisfied condition "running" + STEP: retrieving the pod 03/07/23 02:41:43.892 + STEP: looking for the results for each expected name from probers 03/07/23 02:41:43.895 + Mar 7 02:41:43.898: INFO: Unable to read wheezy_udp@dns-test-service.dns-6319.svc.cluster.local from pod dns-6319/dns-test-82afb61e-4aaa-4883-a811-1baf6dbd822e: the server could not find the requested resource (get pods dns-test-82afb61e-4aaa-4883-a811-1baf6dbd822e) + Mar 7 02:41:43.900: INFO: Unable to read wheezy_tcp@dns-test-service.dns-6319.svc.cluster.local from pod dns-6319/dns-test-82afb61e-4aaa-4883-a811-1baf6dbd822e: the server could not find the requested resource (get pods dns-test-82afb61e-4aaa-4883-a811-1baf6dbd822e) + Mar 7 02:41:43.905: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-6319.svc.cluster.local from pod dns-6319/dns-test-82afb61e-4aaa-4883-a811-1baf6dbd822e: the server could not find the requested resource (get pods dns-test-82afb61e-4aaa-4883-a811-1baf6dbd822e) + Mar 7 02:41:43.911: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-6319.svc.cluster.local from pod dns-6319/dns-test-82afb61e-4aaa-4883-a811-1baf6dbd822e: the server could not find the requested resource (get pods dns-test-82afb61e-4aaa-4883-a811-1baf6dbd822e) + Mar 7 02:41:43.922: INFO: Unable to read jessie_udp@dns-test-service.dns-6319.svc.cluster.local from pod dns-6319/dns-test-82afb61e-4aaa-4883-a811-1baf6dbd822e: the server could not find the requested resource (get pods dns-test-82afb61e-4aaa-4883-a811-1baf6dbd822e) + Mar 7 02:41:43.924: INFO: Unable to read jessie_tcp@dns-test-service.dns-6319.svc.cluster.local from pod dns-6319/dns-test-82afb61e-4aaa-4883-a811-1baf6dbd822e: the server could not find the requested resource (get pods dns-test-82afb61e-4aaa-4883-a811-1baf6dbd822e) + Mar 7 02:41:43.926: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-6319.svc.cluster.local from pod dns-6319/dns-test-82afb61e-4aaa-4883-a811-1baf6dbd822e: the server could not find the requested resource (get pods dns-test-82afb61e-4aaa-4883-a811-1baf6dbd822e) + Mar 7 02:41:43.928: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-6319.svc.cluster.local from pod dns-6319/dns-test-82afb61e-4aaa-4883-a811-1baf6dbd822e: the server could not find the requested resource (get pods dns-test-82afb61e-4aaa-4883-a811-1baf6dbd822e) + Mar 7 02:41:43.937: INFO: Lookups using dns-6319/dns-test-82afb61e-4aaa-4883-a811-1baf6dbd822e failed for: [wheezy_udp@dns-test-service.dns-6319.svc.cluster.local wheezy_tcp@dns-test-service.dns-6319.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-6319.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-6319.svc.cluster.local jessie_udp@dns-test-service.dns-6319.svc.cluster.local jessie_tcp@dns-test-service.dns-6319.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-6319.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-6319.svc.cluster.local] + + Mar 7 02:41:48.942: INFO: Unable to read wheezy_udp@dns-test-service.dns-6319.svc.cluster.local from pod dns-6319/dns-test-82afb61e-4aaa-4883-a811-1baf6dbd822e: the server could not find the requested resource (get pods dns-test-82afb61e-4aaa-4883-a811-1baf6dbd822e) + Mar 7 02:41:48.945: INFO: Unable to read wheezy_tcp@dns-test-service.dns-6319.svc.cluster.local from pod dns-6319/dns-test-82afb61e-4aaa-4883-a811-1baf6dbd822e: the server could not find the requested resource (get 
pods dns-test-82afb61e-4aaa-4883-a811-1baf6dbd822e) + Mar 7 02:41:48.947: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-6319.svc.cluster.local from pod dns-6319/dns-test-82afb61e-4aaa-4883-a811-1baf6dbd822e: the server could not find the requested resource (get pods dns-test-82afb61e-4aaa-4883-a811-1baf6dbd822e) + Mar 7 02:41:48.949: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-6319.svc.cluster.local from pod dns-6319/dns-test-82afb61e-4aaa-4883-a811-1baf6dbd822e: the server could not find the requested resource (get pods dns-test-82afb61e-4aaa-4883-a811-1baf6dbd822e) + Mar 7 02:41:48.961: INFO: Unable to read jessie_udp@dns-test-service.dns-6319.svc.cluster.local from pod dns-6319/dns-test-82afb61e-4aaa-4883-a811-1baf6dbd822e: the server could not find the requested resource (get pods dns-test-82afb61e-4aaa-4883-a811-1baf6dbd822e) + Mar 7 02:41:48.963: INFO: Unable to read jessie_tcp@dns-test-service.dns-6319.svc.cluster.local from pod dns-6319/dns-test-82afb61e-4aaa-4883-a811-1baf6dbd822e: the server could not find the requested resource (get pods dns-test-82afb61e-4aaa-4883-a811-1baf6dbd822e) + Mar 7 02:41:48.965: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-6319.svc.cluster.local from pod dns-6319/dns-test-82afb61e-4aaa-4883-a811-1baf6dbd822e: the server could not find the requested resource (get pods dns-test-82afb61e-4aaa-4883-a811-1baf6dbd822e) + Mar 7 02:41:48.968: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-6319.svc.cluster.local from pod dns-6319/dns-test-82afb61e-4aaa-4883-a811-1baf6dbd822e: the server could not find the requested resource (get pods dns-test-82afb61e-4aaa-4883-a811-1baf6dbd822e) + Mar 7 02:41:48.977: INFO: Lookups using dns-6319/dns-test-82afb61e-4aaa-4883-a811-1baf6dbd822e failed for: [wheezy_udp@dns-test-service.dns-6319.svc.cluster.local wheezy_tcp@dns-test-service.dns-6319.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-6319.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-6319.svc.cluster.local jessie_udp@dns-test-service.dns-6319.svc.cluster.local jessie_tcp@dns-test-service.dns-6319.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-6319.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-6319.svc.cluster.local] + + Mar 7 02:41:53.945: INFO: Unable to read wheezy_udp@dns-test-service.dns-6319.svc.cluster.local from pod dns-6319/dns-test-82afb61e-4aaa-4883-a811-1baf6dbd822e: the server could not find the requested resource (get pods dns-test-82afb61e-4aaa-4883-a811-1baf6dbd822e) + Mar 7 02:41:53.947: INFO: Unable to read wheezy_tcp@dns-test-service.dns-6319.svc.cluster.local from pod dns-6319/dns-test-82afb61e-4aaa-4883-a811-1baf6dbd822e: the server could not find the requested resource (get pods dns-test-82afb61e-4aaa-4883-a811-1baf6dbd822e) + Mar 7 02:41:53.950: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-6319.svc.cluster.local from pod dns-6319/dns-test-82afb61e-4aaa-4883-a811-1baf6dbd822e: the server could not find the requested resource (get pods dns-test-82afb61e-4aaa-4883-a811-1baf6dbd822e) + Mar 7 02:41:53.952: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-6319.svc.cluster.local from pod dns-6319/dns-test-82afb61e-4aaa-4883-a811-1baf6dbd822e: the server could not find the requested resource (get pods dns-test-82afb61e-4aaa-4883-a811-1baf6dbd822e) + Mar 7 02:41:53.963: INFO: Unable to read jessie_udp@dns-test-service.dns-6319.svc.cluster.local from pod 
dns-6319/dns-test-82afb61e-4aaa-4883-a811-1baf6dbd822e: the server could not find the requested resource (get pods dns-test-82afb61e-4aaa-4883-a811-1baf6dbd822e) + Mar 7 02:41:53.965: INFO: Unable to read jessie_tcp@dns-test-service.dns-6319.svc.cluster.local from pod dns-6319/dns-test-82afb61e-4aaa-4883-a811-1baf6dbd822e: the server could not find the requested resource (get pods dns-test-82afb61e-4aaa-4883-a811-1baf6dbd822e) + Mar 7 02:41:53.967: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-6319.svc.cluster.local from pod dns-6319/dns-test-82afb61e-4aaa-4883-a811-1baf6dbd822e: the server could not find the requested resource (get pods dns-test-82afb61e-4aaa-4883-a811-1baf6dbd822e) + Mar 7 02:41:53.970: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-6319.svc.cluster.local from pod dns-6319/dns-test-82afb61e-4aaa-4883-a811-1baf6dbd822e: the server could not find the requested resource (get pods dns-test-82afb61e-4aaa-4883-a811-1baf6dbd822e) + Mar 7 02:41:53.978: INFO: Lookups using dns-6319/dns-test-82afb61e-4aaa-4883-a811-1baf6dbd822e failed for: [wheezy_udp@dns-test-service.dns-6319.svc.cluster.local wheezy_tcp@dns-test-service.dns-6319.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-6319.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-6319.svc.cluster.local jessie_udp@dns-test-service.dns-6319.svc.cluster.local jessie_tcp@dns-test-service.dns-6319.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-6319.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-6319.svc.cluster.local] + + Mar 7 02:41:58.942: INFO: Unable to read wheezy_udp@dns-test-service.dns-6319.svc.cluster.local from pod dns-6319/dns-test-82afb61e-4aaa-4883-a811-1baf6dbd822e: the server could not find the requested resource (get pods dns-test-82afb61e-4aaa-4883-a811-1baf6dbd822e) + Mar 7 02:41:58.945: INFO: Unable to read wheezy_tcp@dns-test-service.dns-6319.svc.cluster.local from pod dns-6319/dns-test-82afb61e-4aaa-4883-a811-1baf6dbd822e: the server could not find the requested resource (get pods dns-test-82afb61e-4aaa-4883-a811-1baf6dbd822e) + Mar 7 02:41:58.947: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-6319.svc.cluster.local from pod dns-6319/dns-test-82afb61e-4aaa-4883-a811-1baf6dbd822e: the server could not find the requested resource (get pods dns-test-82afb61e-4aaa-4883-a811-1baf6dbd822e) + Mar 7 02:41:58.950: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-6319.svc.cluster.local from pod dns-6319/dns-test-82afb61e-4aaa-4883-a811-1baf6dbd822e: the server could not find the requested resource (get pods dns-test-82afb61e-4aaa-4883-a811-1baf6dbd822e) + Mar 7 02:41:58.961: INFO: Unable to read jessie_udp@dns-test-service.dns-6319.svc.cluster.local from pod dns-6319/dns-test-82afb61e-4aaa-4883-a811-1baf6dbd822e: the server could not find the requested resource (get pods dns-test-82afb61e-4aaa-4883-a811-1baf6dbd822e) + Mar 7 02:41:58.963: INFO: Unable to read jessie_tcp@dns-test-service.dns-6319.svc.cluster.local from pod dns-6319/dns-test-82afb61e-4aaa-4883-a811-1baf6dbd822e: the server could not find the requested resource (get pods dns-test-82afb61e-4aaa-4883-a811-1baf6dbd822e) + Mar 7 02:41:58.966: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-6319.svc.cluster.local from pod dns-6319/dns-test-82afb61e-4aaa-4883-a811-1baf6dbd822e: the server could not find the requested resource (get pods dns-test-82afb61e-4aaa-4883-a811-1baf6dbd822e) + Mar 7 02:41:58.969: INFO: Unable to read 
jessie_tcp@_http._tcp.dns-test-service.dns-6319.svc.cluster.local from pod dns-6319/dns-test-82afb61e-4aaa-4883-a811-1baf6dbd822e: the server could not find the requested resource (get pods dns-test-82afb61e-4aaa-4883-a811-1baf6dbd822e) + Mar 7 02:41:58.980: INFO: Lookups using dns-6319/dns-test-82afb61e-4aaa-4883-a811-1baf6dbd822e failed for: [wheezy_udp@dns-test-service.dns-6319.svc.cluster.local wheezy_tcp@dns-test-service.dns-6319.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-6319.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-6319.svc.cluster.local jessie_udp@dns-test-service.dns-6319.svc.cluster.local jessie_tcp@dns-test-service.dns-6319.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-6319.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-6319.svc.cluster.local] + + Mar 7 02:42:03.946: INFO: Unable to read wheezy_udp@dns-test-service.dns-6319.svc.cluster.local from pod dns-6319/dns-test-82afb61e-4aaa-4883-a811-1baf6dbd822e: the server could not find the requested resource (get pods dns-test-82afb61e-4aaa-4883-a811-1baf6dbd822e) + Mar 7 02:42:03.949: INFO: Unable to read wheezy_tcp@dns-test-service.dns-6319.svc.cluster.local from pod dns-6319/dns-test-82afb61e-4aaa-4883-a811-1baf6dbd822e: the server could not find the requested resource (get pods dns-test-82afb61e-4aaa-4883-a811-1baf6dbd822e) + Mar 7 02:42:03.951: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-6319.svc.cluster.local from pod dns-6319/dns-test-82afb61e-4aaa-4883-a811-1baf6dbd822e: the server could not find the requested resource (get pods dns-test-82afb61e-4aaa-4883-a811-1baf6dbd822e) + Mar 7 02:42:03.953: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-6319.svc.cluster.local from pod dns-6319/dns-test-82afb61e-4aaa-4883-a811-1baf6dbd822e: the server could not find the requested resource (get pods dns-test-82afb61e-4aaa-4883-a811-1baf6dbd822e) + Mar 7 02:42:03.966: INFO: Unable to read jessie_udp@dns-test-service.dns-6319.svc.cluster.local from pod dns-6319/dns-test-82afb61e-4aaa-4883-a811-1baf6dbd822e: the server could not find the requested resource (get pods dns-test-82afb61e-4aaa-4883-a811-1baf6dbd822e) + Mar 7 02:42:03.969: INFO: Unable to read jessie_tcp@dns-test-service.dns-6319.svc.cluster.local from pod dns-6319/dns-test-82afb61e-4aaa-4883-a811-1baf6dbd822e: the server could not find the requested resource (get pods dns-test-82afb61e-4aaa-4883-a811-1baf6dbd822e) + Mar 7 02:42:03.974: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-6319.svc.cluster.local from pod dns-6319/dns-test-82afb61e-4aaa-4883-a811-1baf6dbd822e: the server could not find the requested resource (get pods dns-test-82afb61e-4aaa-4883-a811-1baf6dbd822e) + Mar 7 02:42:03.978: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-6319.svc.cluster.local from pod dns-6319/dns-test-82afb61e-4aaa-4883-a811-1baf6dbd822e: the server could not find the requested resource (get pods dns-test-82afb61e-4aaa-4883-a811-1baf6dbd822e) + Mar 7 02:42:03.988: INFO: Lookups using dns-6319/dns-test-82afb61e-4aaa-4883-a811-1baf6dbd822e failed for: [wheezy_udp@dns-test-service.dns-6319.svc.cluster.local wheezy_tcp@dns-test-service.dns-6319.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-6319.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-6319.svc.cluster.local jessie_udp@dns-test-service.dns-6319.svc.cluster.local jessie_tcp@dns-test-service.dns-6319.svc.cluster.local 
jessie_udp@_http._tcp.dns-test-service.dns-6319.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-6319.svc.cluster.local] + + Mar 7 02:42:08.975: INFO: DNS probes using dns-6319/dns-test-82afb61e-4aaa-4883-a811-1baf6dbd822e succeeded + + STEP: deleting the pod 03/07/23 02:42:08.975 + STEP: deleting the test service 03/07/23 02:42:09.012 + STEP: deleting the test headless service 03/07/23 02:42:09.067 + [AfterEach] [sig-network] DNS + test/e2e/framework/framework.go:187 + Mar 7 02:42:09.088: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + STEP: Destroying namespace "dns-6319" for this suite. 03/07/23 02:42:09.092 + << End Captured GinkgoWriter Output +------------------------------ +SSS +------------------------------ +[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + works for CRD without validation schema [Conformance] + test/e2e/apimachinery/crd_publish_openapi.go:152 +[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + test/e2e/framework/framework.go:186 +STEP: Creating a kubernetes client 03/07/23 02:42:09.101 +Mar 7 02:42:09.101: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 +STEP: Building a namespace api object, basename crd-publish-openapi 03/07/23 02:42:09.102 +STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 02:42:09.14 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 02:42:09.143 +[It] works for CRD without validation schema [Conformance] + test/e2e/apimachinery/crd_publish_openapi.go:152 +Mar 7 02:42:09.146: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 +STEP: kubectl validation (kubectl create and apply) allows request with any unknown properties 03/07/23 02:42:14.356 +Mar 7 02:42:14.356: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1093879902 --namespace=crd-publish-openapi-9177 --namespace=crd-publish-openapi-9177 create -f -' +Mar 7 02:42:15.284: INFO: stderr: "" +Mar 7 02:42:15.284: INFO: stdout: "e2e-test-crd-publish-openapi-1577-crd.crd-publish-openapi-test-empty.example.com/test-cr created\n" +Mar 7 02:42:15.284: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1093879902 --namespace=crd-publish-openapi-9177 --namespace=crd-publish-openapi-9177 delete e2e-test-crd-publish-openapi-1577-crds test-cr' +Mar 7 02:42:15.467: INFO: stderr: "" +Mar 7 02:42:15.467: INFO: stdout: "e2e-test-crd-publish-openapi-1577-crd.crd-publish-openapi-test-empty.example.com \"test-cr\" deleted\n" +Mar 7 02:42:15.467: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1093879902 --namespace=crd-publish-openapi-9177 --namespace=crd-publish-openapi-9177 apply -f -' +Mar 7 02:42:15.697: INFO: stderr: "" +Mar 7 02:42:15.697: INFO: stdout: "e2e-test-crd-publish-openapi-1577-crd.crd-publish-openapi-test-empty.example.com/test-cr created\n" +Mar 7 02:42:15.697: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1093879902 --namespace=crd-publish-openapi-9177 --namespace=crd-publish-openapi-9177 delete e2e-test-crd-publish-openapi-1577-crds test-cr' +Mar 7 02:42:15.859: INFO: stderr: "" +Mar 7 02:42:15.859: INFO: stdout: "e2e-test-crd-publish-openapi-1577-crd.crd-publish-openapi-test-empty.example.com \"test-cr\" deleted\n" +STEP: kubectl explain works to explain CR without validation schema 03/07/23 02:42:15.859 +Mar 7 02:42:15.859: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1093879902 --namespace=crd-publish-openapi-9177 explain 
e2e-test-crd-publish-openapi-1577-crds' +Mar 7 02:42:16.706: INFO: stderr: "" +Mar 7 02:42:16.706: INFO: stdout: "KIND: e2e-test-crd-publish-openapi-1577-crd\nVERSION: crd-publish-openapi-test-empty.example.com/v1\n\nDESCRIPTION:\n \n" +[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + test/e2e/framework/framework.go:187 +Mar 7 02:42:20.258: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "crd-publish-openapi-9177" for this suite. 03/07/23 02:42:20.266 +{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD without validation schema [Conformance]","completed":59,"skipped":891,"failed":0} +------------------------------ +• [SLOW TEST] [11.170 seconds] +[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] +test/e2e/apimachinery/framework.go:23 + works for CRD without validation schema [Conformance] + test/e2e/apimachinery/crd_publish_openapi.go:152 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + test/e2e/framework/framework.go:186 + STEP: Creating a kubernetes client 03/07/23 02:42:09.101 + Mar 7 02:42:09.101: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 + STEP: Building a namespace api object, basename crd-publish-openapi 03/07/23 02:42:09.102 + STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 02:42:09.14 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 02:42:09.143 + [It] works for CRD without validation schema [Conformance] + test/e2e/apimachinery/crd_publish_openapi.go:152 + Mar 7 02:42:09.146: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 + STEP: kubectl validation (kubectl create and apply) allows request with any unknown properties 03/07/23 02:42:14.356 + Mar 7 02:42:14.356: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1093879902 --namespace=crd-publish-openapi-9177 --namespace=crd-publish-openapi-9177 create -f -' + Mar 7 02:42:15.284: INFO: stderr: "" + Mar 7 02:42:15.284: INFO: stdout: "e2e-test-crd-publish-openapi-1577-crd.crd-publish-openapi-test-empty.example.com/test-cr created\n" + Mar 7 02:42:15.284: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1093879902 --namespace=crd-publish-openapi-9177 --namespace=crd-publish-openapi-9177 delete e2e-test-crd-publish-openapi-1577-crds test-cr' + Mar 7 02:42:15.467: INFO: stderr: "" + Mar 7 02:42:15.467: INFO: stdout: "e2e-test-crd-publish-openapi-1577-crd.crd-publish-openapi-test-empty.example.com \"test-cr\" deleted\n" + Mar 7 02:42:15.467: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1093879902 --namespace=crd-publish-openapi-9177 --namespace=crd-publish-openapi-9177 apply -f -' + Mar 7 02:42:15.697: INFO: stderr: "" + Mar 7 02:42:15.697: INFO: stdout: "e2e-test-crd-publish-openapi-1577-crd.crd-publish-openapi-test-empty.example.com/test-cr created\n" + Mar 7 02:42:15.697: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1093879902 --namespace=crd-publish-openapi-9177 --namespace=crd-publish-openapi-9177 delete e2e-test-crd-publish-openapi-1577-crds test-cr' + Mar 7 02:42:15.859: INFO: stderr: "" + Mar 7 02:42:15.859: INFO: stdout: "e2e-test-crd-publish-openapi-1577-crd.crd-publish-openapi-test-empty.example.com \"test-cr\" deleted\n" + STEP: kubectl explain works to explain CR without validation schema 03/07/23 02:42:15.859 + Mar 7 02:42:15.859: INFO: 
Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1093879902 --namespace=crd-publish-openapi-9177 explain e2e-test-crd-publish-openapi-1577-crds' + Mar 7 02:42:16.706: INFO: stderr: "" + Mar 7 02:42:16.706: INFO: stdout: "KIND: e2e-test-crd-publish-openapi-1577-crd\nVERSION: crd-publish-openapi-test-empty.example.com/v1\n\nDESCRIPTION:\n \n" + [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + test/e2e/framework/framework.go:187 + Mar 7 02:42:20.258: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + STEP: Destroying namespace "crd-publish-openapi-9177" for this suite. 03/07/23 02:42:20.266 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-node] InitContainer [NodeConformance] + should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] + test/e2e/common/node/init_container.go:457 +[BeforeEach] [sig-node] InitContainer [NodeConformance] + test/e2e/framework/framework.go:186 +STEP: Creating a kubernetes client 03/07/23 02:42:20.272 +Mar 7 02:42:20.272: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 +STEP: Building a namespace api object, basename init-container 03/07/23 02:42:20.273 +STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 02:42:20.291 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 02:42:20.293 +[BeforeEach] [sig-node] InitContainer [NodeConformance] + test/e2e/common/node/init_container.go:164 +[It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] + test/e2e/common/node/init_container.go:457 +STEP: creating the pod 03/07/23 02:42:20.295 +Mar 7 02:42:20.295: INFO: PodSpec: initContainers in spec.initContainers +[AfterEach] [sig-node] InitContainer [NodeConformance] + test/e2e/framework/framework.go:187 +Mar 7 02:42:24.274: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "init-container-6634" for this suite. 
03/07/23 02:42:24.278 +{"msg":"PASSED [sig-node] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]","completed":60,"skipped":918,"failed":0} +------------------------------ +• [4.011 seconds] +[sig-node] InitContainer [NodeConformance] +test/e2e/common/node/framework.go:23 + should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] + test/e2e/common/node/init_container.go:457 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-node] InitContainer [NodeConformance] + test/e2e/framework/framework.go:186 + STEP: Creating a kubernetes client 03/07/23 02:42:20.272 + Mar 7 02:42:20.272: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 + STEP: Building a namespace api object, basename init-container 03/07/23 02:42:20.273 + STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 02:42:20.291 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 02:42:20.293 + [BeforeEach] [sig-node] InitContainer [NodeConformance] + test/e2e/common/node/init_container.go:164 + [It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] + test/e2e/common/node/init_container.go:457 + STEP: creating the pod 03/07/23 02:42:20.295 + Mar 7 02:42:20.295: INFO: PodSpec: initContainers in spec.initContainers + [AfterEach] [sig-node] InitContainer [NodeConformance] + test/e2e/framework/framework.go:187 + Mar 7 02:42:24.274: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + STEP: Destroying namespace "init-container-6634" for this suite. 03/07/23 02:42:24.278 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSS +------------------------------ +[sig-storage] EmptyDir wrapper volumes + should not cause race condition when used for configmaps [Serial] [Conformance] + test/e2e/storage/empty_dir_wrapper.go:189 +[BeforeEach] [sig-storage] EmptyDir wrapper volumes + test/e2e/framework/framework.go:186 +STEP: Creating a kubernetes client 03/07/23 02:42:24.286 +Mar 7 02:42:24.286: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 +STEP: Building a namespace api object, basename emptydir-wrapper 03/07/23 02:42:24.287 +STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 02:42:24.303 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 02:42:24.305 +[It] should not cause race condition when used for configmaps [Serial] [Conformance] + test/e2e/storage/empty_dir_wrapper.go:189 +STEP: Creating 50 configmaps 03/07/23 02:42:24.307 +STEP: Creating RC which spawns configmap-volume pods 03/07/23 02:42:24.542 +Mar 7 02:42:24.653: INFO: Pod name wrapped-volume-race-22079cf1-47d1-4829-a8fd-d9eed3391efa: Found 5 pods out of 5 +STEP: Ensuring each pod is running 03/07/23 02:42:24.653 +Mar 7 02:42:24.653: INFO: Waiting up to 5m0s for pod "wrapped-volume-race-22079cf1-47d1-4829-a8fd-d9eed3391efa-6md9s" in namespace "emptydir-wrapper-8744" to be "running" +Mar 7 02:42:24.693: INFO: Pod "wrapped-volume-race-22079cf1-47d1-4829-a8fd-d9eed3391efa-6md9s": Phase="Pending", Reason="", readiness=false. Elapsed: 39.191031ms +Mar 7 02:42:26.696: INFO: Pod "wrapped-volume-race-22079cf1-47d1-4829-a8fd-d9eed3391efa-6md9s": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.042682961s +Mar 7 02:42:28.696: INFO: Pod "wrapped-volume-race-22079cf1-47d1-4829-a8fd-d9eed3391efa-6md9s": Phase="Pending", Reason="", readiness=false. Elapsed: 4.042648152s +Mar 7 02:42:30.699: INFO: Pod "wrapped-volume-race-22079cf1-47d1-4829-a8fd-d9eed3391efa-6md9s": Phase="Pending", Reason="", readiness=false. Elapsed: 6.045291386s +Mar 7 02:42:32.697: INFO: Pod "wrapped-volume-race-22079cf1-47d1-4829-a8fd-d9eed3391efa-6md9s": Phase="Pending", Reason="", readiness=false. Elapsed: 8.043111425s +Mar 7 02:42:34.697: INFO: Pod "wrapped-volume-race-22079cf1-47d1-4829-a8fd-d9eed3391efa-6md9s": Phase="Pending", Reason="", readiness=false. Elapsed: 10.043356178s +Mar 7 02:42:36.696: INFO: Pod "wrapped-volume-race-22079cf1-47d1-4829-a8fd-d9eed3391efa-6md9s": Phase="Pending", Reason="", readiness=false. Elapsed: 12.042780747s +Mar 7 02:42:38.697: INFO: Pod "wrapped-volume-race-22079cf1-47d1-4829-a8fd-d9eed3391efa-6md9s": Phase="Pending", Reason="", readiness=false. Elapsed: 14.043792427s +Mar 7 02:42:40.713: INFO: Pod "wrapped-volume-race-22079cf1-47d1-4829-a8fd-d9eed3391efa-6md9s": Phase="Running", Reason="", readiness=true. Elapsed: 16.059619121s +Mar 7 02:42:40.713: INFO: Pod "wrapped-volume-race-22079cf1-47d1-4829-a8fd-d9eed3391efa-6md9s" satisfied condition "running" +Mar 7 02:42:40.713: INFO: Waiting up to 5m0s for pod "wrapped-volume-race-22079cf1-47d1-4829-a8fd-d9eed3391efa-bc4ch" in namespace "emptydir-wrapper-8744" to be "running" +Mar 7 02:42:40.718: INFO: Pod "wrapped-volume-race-22079cf1-47d1-4829-a8fd-d9eed3391efa-bc4ch": Phase="Running", Reason="", readiness=true. Elapsed: 4.99188ms +Mar 7 02:42:40.718: INFO: Pod "wrapped-volume-race-22079cf1-47d1-4829-a8fd-d9eed3391efa-bc4ch" satisfied condition "running" +Mar 7 02:42:40.718: INFO: Waiting up to 5m0s for pod "wrapped-volume-race-22079cf1-47d1-4829-a8fd-d9eed3391efa-fmm9d" in namespace "emptydir-wrapper-8744" to be "running" +Mar 7 02:42:40.725: INFO: Pod "wrapped-volume-race-22079cf1-47d1-4829-a8fd-d9eed3391efa-fmm9d": Phase="Running", Reason="", readiness=true. Elapsed: 7.183475ms +Mar 7 02:42:40.725: INFO: Pod "wrapped-volume-race-22079cf1-47d1-4829-a8fd-d9eed3391efa-fmm9d" satisfied condition "running" +Mar 7 02:42:40.725: INFO: Waiting up to 5m0s for pod "wrapped-volume-race-22079cf1-47d1-4829-a8fd-d9eed3391efa-rccq2" in namespace "emptydir-wrapper-8744" to be "running" +Mar 7 02:42:40.736: INFO: Pod "wrapped-volume-race-22079cf1-47d1-4829-a8fd-d9eed3391efa-rccq2": Phase="Running", Reason="", readiness=true. Elapsed: 10.811533ms +Mar 7 02:42:40.736: INFO: Pod "wrapped-volume-race-22079cf1-47d1-4829-a8fd-d9eed3391efa-rccq2" satisfied condition "running" +Mar 7 02:42:40.736: INFO: Waiting up to 5m0s for pod "wrapped-volume-race-22079cf1-47d1-4829-a8fd-d9eed3391efa-vb5gj" in namespace "emptydir-wrapper-8744" to be "running" +Mar 7 02:42:40.742: INFO: Pod "wrapped-volume-race-22079cf1-47d1-4829-a8fd-d9eed3391efa-vb5gj": Phase="Running", Reason="", readiness=true. 
Elapsed: 5.616812ms +Mar 7 02:42:40.742: INFO: Pod "wrapped-volume-race-22079cf1-47d1-4829-a8fd-d9eed3391efa-vb5gj" satisfied condition "running" +STEP: deleting ReplicationController wrapped-volume-race-22079cf1-47d1-4829-a8fd-d9eed3391efa in namespace emptydir-wrapper-8744, will wait for the garbage collector to delete the pods 03/07/23 02:42:40.742 +Mar 7 02:42:40.808: INFO: Deleting ReplicationController wrapped-volume-race-22079cf1-47d1-4829-a8fd-d9eed3391efa took: 9.817334ms +Mar 7 02:42:41.108: INFO: Terminating ReplicationController wrapped-volume-race-22079cf1-47d1-4829-a8fd-d9eed3391efa pods took: 300.516119ms +STEP: Creating RC which spawns configmap-volume pods 03/07/23 02:42:43.913 +Mar 7 02:42:43.924: INFO: Pod name wrapped-volume-race-b58b6a98-863f-46b7-9119-dd7703f9d572: Found 0 pods out of 5 +Mar 7 02:42:48.930: INFO: Pod name wrapped-volume-race-b58b6a98-863f-46b7-9119-dd7703f9d572: Found 5 pods out of 5 +STEP: Ensuring each pod is running 03/07/23 02:42:48.93 +Mar 7 02:42:48.930: INFO: Waiting up to 5m0s for pod "wrapped-volume-race-b58b6a98-863f-46b7-9119-dd7703f9d572-2gcf4" in namespace "emptydir-wrapper-8744" to be "running" +Mar 7 02:42:48.932: INFO: Pod "wrapped-volume-race-b58b6a98-863f-46b7-9119-dd7703f9d572-2gcf4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.596198ms +Mar 7 02:42:50.936: INFO: Pod "wrapped-volume-race-b58b6a98-863f-46b7-9119-dd7703f9d572-2gcf4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00658658s +Mar 7 02:42:52.935: INFO: Pod "wrapped-volume-race-b58b6a98-863f-46b7-9119-dd7703f9d572-2gcf4": Phase="Pending", Reason="", readiness=false. Elapsed: 4.005545001s +Mar 7 02:42:54.937: INFO: Pod "wrapped-volume-race-b58b6a98-863f-46b7-9119-dd7703f9d572-2gcf4": Phase="Pending", Reason="", readiness=false. Elapsed: 6.006978966s +Mar 7 02:42:56.937: INFO: Pod "wrapped-volume-race-b58b6a98-863f-46b7-9119-dd7703f9d572-2gcf4": Phase="Pending", Reason="", readiness=false. Elapsed: 8.007624489s +Mar 7 02:42:58.936: INFO: Pod "wrapped-volume-race-b58b6a98-863f-46b7-9119-dd7703f9d572-2gcf4": Phase="Running", Reason="", readiness=true. Elapsed: 10.006557991s +Mar 7 02:42:58.936: INFO: Pod "wrapped-volume-race-b58b6a98-863f-46b7-9119-dd7703f9d572-2gcf4" satisfied condition "running" +Mar 7 02:42:58.936: INFO: Waiting up to 5m0s for pod "wrapped-volume-race-b58b6a98-863f-46b7-9119-dd7703f9d572-bk7tm" in namespace "emptydir-wrapper-8744" to be "running" +Mar 7 02:42:58.939: INFO: Pod "wrapped-volume-race-b58b6a98-863f-46b7-9119-dd7703f9d572-bk7tm": Phase="Running", Reason="", readiness=true. Elapsed: 2.379043ms +Mar 7 02:42:58.939: INFO: Pod "wrapped-volume-race-b58b6a98-863f-46b7-9119-dd7703f9d572-bk7tm" satisfied condition "running" +Mar 7 02:42:58.939: INFO: Waiting up to 5m0s for pod "wrapped-volume-race-b58b6a98-863f-46b7-9119-dd7703f9d572-fph9m" in namespace "emptydir-wrapper-8744" to be "running" +Mar 7 02:42:58.942: INFO: Pod "wrapped-volume-race-b58b6a98-863f-46b7-9119-dd7703f9d572-fph9m": Phase="Running", Reason="", readiness=true. Elapsed: 2.827462ms +Mar 7 02:42:58.942: INFO: Pod "wrapped-volume-race-b58b6a98-863f-46b7-9119-dd7703f9d572-fph9m" satisfied condition "running" +Mar 7 02:42:58.942: INFO: Waiting up to 5m0s for pod "wrapped-volume-race-b58b6a98-863f-46b7-9119-dd7703f9d572-j2l2q" in namespace "emptydir-wrapper-8744" to be "running" +Mar 7 02:42:58.944: INFO: Pod "wrapped-volume-race-b58b6a98-863f-46b7-9119-dd7703f9d572-j2l2q": Phase="Running", Reason="", readiness=true. 
Elapsed: 2.570484ms +Mar 7 02:42:58.944: INFO: Pod "wrapped-volume-race-b58b6a98-863f-46b7-9119-dd7703f9d572-j2l2q" satisfied condition "running" +Mar 7 02:42:58.944: INFO: Waiting up to 5m0s for pod "wrapped-volume-race-b58b6a98-863f-46b7-9119-dd7703f9d572-t2bn7" in namespace "emptydir-wrapper-8744" to be "running" +Mar 7 02:42:58.946: INFO: Pod "wrapped-volume-race-b58b6a98-863f-46b7-9119-dd7703f9d572-t2bn7": Phase="Running", Reason="", readiness=true. Elapsed: 2.315835ms +Mar 7 02:42:58.946: INFO: Pod "wrapped-volume-race-b58b6a98-863f-46b7-9119-dd7703f9d572-t2bn7" satisfied condition "running" +STEP: deleting ReplicationController wrapped-volume-race-b58b6a98-863f-46b7-9119-dd7703f9d572 in namespace emptydir-wrapper-8744, will wait for the garbage collector to delete the pods 03/07/23 02:42:58.946 +Mar 7 02:42:59.006: INFO: Deleting ReplicationController wrapped-volume-race-b58b6a98-863f-46b7-9119-dd7703f9d572 took: 5.917642ms +Mar 7 02:42:59.106: INFO: Terminating ReplicationController wrapped-volume-race-b58b6a98-863f-46b7-9119-dd7703f9d572 pods took: 100.278807ms +STEP: Creating RC which spawns configmap-volume pods 03/07/23 02:43:01.811 +Mar 7 02:43:01.825: INFO: Pod name wrapped-volume-race-01985919-879b-4266-afad-a31555577577: Found 0 pods out of 5 +Mar 7 02:43:06.831: INFO: Pod name wrapped-volume-race-01985919-879b-4266-afad-a31555577577: Found 5 pods out of 5 +STEP: Ensuring each pod is running 03/07/23 02:43:06.831 +Mar 7 02:43:06.831: INFO: Waiting up to 5m0s for pod "wrapped-volume-race-01985919-879b-4266-afad-a31555577577-4fxmq" in namespace "emptydir-wrapper-8744" to be "running" +Mar 7 02:43:06.834: INFO: Pod "wrapped-volume-race-01985919-879b-4266-afad-a31555577577-4fxmq": Phase="Pending", Reason="", readiness=false. Elapsed: 2.866609ms +Mar 7 02:43:08.838: INFO: Pod "wrapped-volume-race-01985919-879b-4266-afad-a31555577577-4fxmq": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006649669s +Mar 7 02:43:10.837: INFO: Pod "wrapped-volume-race-01985919-879b-4266-afad-a31555577577-4fxmq": Phase="Pending", Reason="", readiness=false. Elapsed: 4.006195661s +Mar 7 02:43:12.839: INFO: Pod "wrapped-volume-race-01985919-879b-4266-afad-a31555577577-4fxmq": Phase="Pending", Reason="", readiness=false. Elapsed: 6.008191925s +Mar 7 02:43:14.838: INFO: Pod "wrapped-volume-race-01985919-879b-4266-afad-a31555577577-4fxmq": Phase="Pending", Reason="", readiness=false. Elapsed: 8.006684359s +Mar 7 02:43:16.839: INFO: Pod "wrapped-volume-race-01985919-879b-4266-afad-a31555577577-4fxmq": Phase="Running", Reason="", readiness=true. Elapsed: 10.007970779s +Mar 7 02:43:16.839: INFO: Pod "wrapped-volume-race-01985919-879b-4266-afad-a31555577577-4fxmq" satisfied condition "running" +Mar 7 02:43:16.839: INFO: Waiting up to 5m0s for pod "wrapped-volume-race-01985919-879b-4266-afad-a31555577577-cvf2m" in namespace "emptydir-wrapper-8744" to be "running" +Mar 7 02:43:16.841: INFO: Pod "wrapped-volume-race-01985919-879b-4266-afad-a31555577577-cvf2m": Phase="Running", Reason="", readiness=true. Elapsed: 2.203974ms +Mar 7 02:43:16.841: INFO: Pod "wrapped-volume-race-01985919-879b-4266-afad-a31555577577-cvf2m" satisfied condition "running" +Mar 7 02:43:16.841: INFO: Waiting up to 5m0s for pod "wrapped-volume-race-01985919-879b-4266-afad-a31555577577-m7s7c" in namespace "emptydir-wrapper-8744" to be "running" +Mar 7 02:43:16.844: INFO: Pod "wrapped-volume-race-01985919-879b-4266-afad-a31555577577-m7s7c": Phase="Running", Reason="", readiness=true. 
Elapsed: 2.353192ms +Mar 7 02:43:16.844: INFO: Pod "wrapped-volume-race-01985919-879b-4266-afad-a31555577577-m7s7c" satisfied condition "running" +Mar 7 02:43:16.844: INFO: Waiting up to 5m0s for pod "wrapped-volume-race-01985919-879b-4266-afad-a31555577577-xt9tr" in namespace "emptydir-wrapper-8744" to be "running" +Mar 7 02:43:16.846: INFO: Pod "wrapped-volume-race-01985919-879b-4266-afad-a31555577577-xt9tr": Phase="Running", Reason="", readiness=true. Elapsed: 2.375504ms +Mar 7 02:43:16.846: INFO: Pod "wrapped-volume-race-01985919-879b-4266-afad-a31555577577-xt9tr" satisfied condition "running" +Mar 7 02:43:16.846: INFO: Waiting up to 5m0s for pod "wrapped-volume-race-01985919-879b-4266-afad-a31555577577-zzbql" in namespace "emptydir-wrapper-8744" to be "running" +Mar 7 02:43:16.848: INFO: Pod "wrapped-volume-race-01985919-879b-4266-afad-a31555577577-zzbql": Phase="Running", Reason="", readiness=true. Elapsed: 2.080135ms +Mar 7 02:43:16.848: INFO: Pod "wrapped-volume-race-01985919-879b-4266-afad-a31555577577-zzbql" satisfied condition "running" +STEP: deleting ReplicationController wrapped-volume-race-01985919-879b-4266-afad-a31555577577 in namespace emptydir-wrapper-8744, will wait for the garbage collector to delete the pods 03/07/23 02:43:16.848 +Mar 7 02:43:16.907: INFO: Deleting ReplicationController wrapped-volume-race-01985919-879b-4266-afad-a31555577577 took: 4.967748ms +Mar 7 02:43:17.008: INFO: Terminating ReplicationController wrapped-volume-race-01985919-879b-4266-afad-a31555577577 pods took: 101.102804ms +STEP: Cleaning up the configMaps 03/07/23 02:43:19.908 +[AfterEach] [sig-storage] EmptyDir wrapper volumes + test/e2e/framework/framework.go:187 +Mar 7 02:43:20.101: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "emptydir-wrapper-8744" for this suite. 
03/07/23 02:43:20.103 +{"msg":"PASSED [sig-storage] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Conformance]","completed":61,"skipped":924,"failed":0} +------------------------------ +• [SLOW TEST] [55.821 seconds] +[sig-storage] EmptyDir wrapper volumes +test/e2e/storage/utils/framework.go:23 + should not cause race condition when used for configmaps [Serial] [Conformance] + test/e2e/storage/empty_dir_wrapper.go:189 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-storage] EmptyDir wrapper volumes + test/e2e/framework/framework.go:186 + STEP: Creating a kubernetes client 03/07/23 02:42:24.286 + Mar 7 02:42:24.286: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 + STEP: Building a namespace api object, basename emptydir-wrapper 03/07/23 02:42:24.287 + STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 02:42:24.303 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 02:42:24.305 + [It] should not cause race condition when used for configmaps [Serial] [Conformance] + test/e2e/storage/empty_dir_wrapper.go:189 + STEP: Creating 50 configmaps 03/07/23 02:42:24.307 + STEP: Creating RC which spawns configmap-volume pods 03/07/23 02:42:24.542 + Mar 7 02:42:24.653: INFO: Pod name wrapped-volume-race-22079cf1-47d1-4829-a8fd-d9eed3391efa: Found 5 pods out of 5 + STEP: Ensuring each pod is running 03/07/23 02:42:24.653 + Mar 7 02:42:24.653: INFO: Waiting up to 5m0s for pod "wrapped-volume-race-22079cf1-47d1-4829-a8fd-d9eed3391efa-6md9s" in namespace "emptydir-wrapper-8744" to be "running" + Mar 7 02:42:24.693: INFO: Pod "wrapped-volume-race-22079cf1-47d1-4829-a8fd-d9eed3391efa-6md9s": Phase="Pending", Reason="", readiness=false. Elapsed: 39.191031ms + Mar 7 02:42:26.696: INFO: Pod "wrapped-volume-race-22079cf1-47d1-4829-a8fd-d9eed3391efa-6md9s": Phase="Pending", Reason="", readiness=false. Elapsed: 2.042682961s + Mar 7 02:42:28.696: INFO: Pod "wrapped-volume-race-22079cf1-47d1-4829-a8fd-d9eed3391efa-6md9s": Phase="Pending", Reason="", readiness=false. Elapsed: 4.042648152s + Mar 7 02:42:30.699: INFO: Pod "wrapped-volume-race-22079cf1-47d1-4829-a8fd-d9eed3391efa-6md9s": Phase="Pending", Reason="", readiness=false. Elapsed: 6.045291386s + Mar 7 02:42:32.697: INFO: Pod "wrapped-volume-race-22079cf1-47d1-4829-a8fd-d9eed3391efa-6md9s": Phase="Pending", Reason="", readiness=false. Elapsed: 8.043111425s + Mar 7 02:42:34.697: INFO: Pod "wrapped-volume-race-22079cf1-47d1-4829-a8fd-d9eed3391efa-6md9s": Phase="Pending", Reason="", readiness=false. Elapsed: 10.043356178s + Mar 7 02:42:36.696: INFO: Pod "wrapped-volume-race-22079cf1-47d1-4829-a8fd-d9eed3391efa-6md9s": Phase="Pending", Reason="", readiness=false. Elapsed: 12.042780747s + Mar 7 02:42:38.697: INFO: Pod "wrapped-volume-race-22079cf1-47d1-4829-a8fd-d9eed3391efa-6md9s": Phase="Pending", Reason="", readiness=false. Elapsed: 14.043792427s + Mar 7 02:42:40.713: INFO: Pod "wrapped-volume-race-22079cf1-47d1-4829-a8fd-d9eed3391efa-6md9s": Phase="Running", Reason="", readiness=true. Elapsed: 16.059619121s + Mar 7 02:42:40.713: INFO: Pod "wrapped-volume-race-22079cf1-47d1-4829-a8fd-d9eed3391efa-6md9s" satisfied condition "running" + Mar 7 02:42:40.713: INFO: Waiting up to 5m0s for pod "wrapped-volume-race-22079cf1-47d1-4829-a8fd-d9eed3391efa-bc4ch" in namespace "emptydir-wrapper-8744" to be "running" + Mar 7 02:42:40.718: INFO: Pod "wrapped-volume-race-22079cf1-47d1-4829-a8fd-d9eed3391efa-bc4ch": Phase="Running", Reason="", readiness=true. 
Elapsed: 4.99188ms + Mar 7 02:42:40.718: INFO: Pod "wrapped-volume-race-22079cf1-47d1-4829-a8fd-d9eed3391efa-bc4ch" satisfied condition "running" + Mar 7 02:42:40.718: INFO: Waiting up to 5m0s for pod "wrapped-volume-race-22079cf1-47d1-4829-a8fd-d9eed3391efa-fmm9d" in namespace "emptydir-wrapper-8744" to be "running" + Mar 7 02:42:40.725: INFO: Pod "wrapped-volume-race-22079cf1-47d1-4829-a8fd-d9eed3391efa-fmm9d": Phase="Running", Reason="", readiness=true. Elapsed: 7.183475ms + Mar 7 02:42:40.725: INFO: Pod "wrapped-volume-race-22079cf1-47d1-4829-a8fd-d9eed3391efa-fmm9d" satisfied condition "running" + Mar 7 02:42:40.725: INFO: Waiting up to 5m0s for pod "wrapped-volume-race-22079cf1-47d1-4829-a8fd-d9eed3391efa-rccq2" in namespace "emptydir-wrapper-8744" to be "running" + Mar 7 02:42:40.736: INFO: Pod "wrapped-volume-race-22079cf1-47d1-4829-a8fd-d9eed3391efa-rccq2": Phase="Running", Reason="", readiness=true. Elapsed: 10.811533ms + Mar 7 02:42:40.736: INFO: Pod "wrapped-volume-race-22079cf1-47d1-4829-a8fd-d9eed3391efa-rccq2" satisfied condition "running" + Mar 7 02:42:40.736: INFO: Waiting up to 5m0s for pod "wrapped-volume-race-22079cf1-47d1-4829-a8fd-d9eed3391efa-vb5gj" in namespace "emptydir-wrapper-8744" to be "running" + Mar 7 02:42:40.742: INFO: Pod "wrapped-volume-race-22079cf1-47d1-4829-a8fd-d9eed3391efa-vb5gj": Phase="Running", Reason="", readiness=true. Elapsed: 5.616812ms + Mar 7 02:42:40.742: INFO: Pod "wrapped-volume-race-22079cf1-47d1-4829-a8fd-d9eed3391efa-vb5gj" satisfied condition "running" + STEP: deleting ReplicationController wrapped-volume-race-22079cf1-47d1-4829-a8fd-d9eed3391efa in namespace emptydir-wrapper-8744, will wait for the garbage collector to delete the pods 03/07/23 02:42:40.742 + Mar 7 02:42:40.808: INFO: Deleting ReplicationController wrapped-volume-race-22079cf1-47d1-4829-a8fd-d9eed3391efa took: 9.817334ms + Mar 7 02:42:41.108: INFO: Terminating ReplicationController wrapped-volume-race-22079cf1-47d1-4829-a8fd-d9eed3391efa pods took: 300.516119ms + STEP: Creating RC which spawns configmap-volume pods 03/07/23 02:42:43.913 + Mar 7 02:42:43.924: INFO: Pod name wrapped-volume-race-b58b6a98-863f-46b7-9119-dd7703f9d572: Found 0 pods out of 5 + Mar 7 02:42:48.930: INFO: Pod name wrapped-volume-race-b58b6a98-863f-46b7-9119-dd7703f9d572: Found 5 pods out of 5 + STEP: Ensuring each pod is running 03/07/23 02:42:48.93 + Mar 7 02:42:48.930: INFO: Waiting up to 5m0s for pod "wrapped-volume-race-b58b6a98-863f-46b7-9119-dd7703f9d572-2gcf4" in namespace "emptydir-wrapper-8744" to be "running" + Mar 7 02:42:48.932: INFO: Pod "wrapped-volume-race-b58b6a98-863f-46b7-9119-dd7703f9d572-2gcf4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.596198ms + Mar 7 02:42:50.936: INFO: Pod "wrapped-volume-race-b58b6a98-863f-46b7-9119-dd7703f9d572-2gcf4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00658658s + Mar 7 02:42:52.935: INFO: Pod "wrapped-volume-race-b58b6a98-863f-46b7-9119-dd7703f9d572-2gcf4": Phase="Pending", Reason="", readiness=false. Elapsed: 4.005545001s + Mar 7 02:42:54.937: INFO: Pod "wrapped-volume-race-b58b6a98-863f-46b7-9119-dd7703f9d572-2gcf4": Phase="Pending", Reason="", readiness=false. Elapsed: 6.006978966s + Mar 7 02:42:56.937: INFO: Pod "wrapped-volume-race-b58b6a98-863f-46b7-9119-dd7703f9d572-2gcf4": Phase="Pending", Reason="", readiness=false. Elapsed: 8.007624489s + Mar 7 02:42:58.936: INFO: Pod "wrapped-volume-race-b58b6a98-863f-46b7-9119-dd7703f9d572-2gcf4": Phase="Running", Reason="", readiness=true. 
Elapsed: 10.006557991s + Mar 7 02:42:58.936: INFO: Pod "wrapped-volume-race-b58b6a98-863f-46b7-9119-dd7703f9d572-2gcf4" satisfied condition "running" + Mar 7 02:42:58.936: INFO: Waiting up to 5m0s for pod "wrapped-volume-race-b58b6a98-863f-46b7-9119-dd7703f9d572-bk7tm" in namespace "emptydir-wrapper-8744" to be "running" + Mar 7 02:42:58.939: INFO: Pod "wrapped-volume-race-b58b6a98-863f-46b7-9119-dd7703f9d572-bk7tm": Phase="Running", Reason="", readiness=true. Elapsed: 2.379043ms + Mar 7 02:42:58.939: INFO: Pod "wrapped-volume-race-b58b6a98-863f-46b7-9119-dd7703f9d572-bk7tm" satisfied condition "running" + Mar 7 02:42:58.939: INFO: Waiting up to 5m0s for pod "wrapped-volume-race-b58b6a98-863f-46b7-9119-dd7703f9d572-fph9m" in namespace "emptydir-wrapper-8744" to be "running" + Mar 7 02:42:58.942: INFO: Pod "wrapped-volume-race-b58b6a98-863f-46b7-9119-dd7703f9d572-fph9m": Phase="Running", Reason="", readiness=true. Elapsed: 2.827462ms + Mar 7 02:42:58.942: INFO: Pod "wrapped-volume-race-b58b6a98-863f-46b7-9119-dd7703f9d572-fph9m" satisfied condition "running" + Mar 7 02:42:58.942: INFO: Waiting up to 5m0s for pod "wrapped-volume-race-b58b6a98-863f-46b7-9119-dd7703f9d572-j2l2q" in namespace "emptydir-wrapper-8744" to be "running" + Mar 7 02:42:58.944: INFO: Pod "wrapped-volume-race-b58b6a98-863f-46b7-9119-dd7703f9d572-j2l2q": Phase="Running", Reason="", readiness=true. Elapsed: 2.570484ms + Mar 7 02:42:58.944: INFO: Pod "wrapped-volume-race-b58b6a98-863f-46b7-9119-dd7703f9d572-j2l2q" satisfied condition "running" + Mar 7 02:42:58.944: INFO: Waiting up to 5m0s for pod "wrapped-volume-race-b58b6a98-863f-46b7-9119-dd7703f9d572-t2bn7" in namespace "emptydir-wrapper-8744" to be "running" + Mar 7 02:42:58.946: INFO: Pod "wrapped-volume-race-b58b6a98-863f-46b7-9119-dd7703f9d572-t2bn7": Phase="Running", Reason="", readiness=true. Elapsed: 2.315835ms + Mar 7 02:42:58.946: INFO: Pod "wrapped-volume-race-b58b6a98-863f-46b7-9119-dd7703f9d572-t2bn7" satisfied condition "running" + STEP: deleting ReplicationController wrapped-volume-race-b58b6a98-863f-46b7-9119-dd7703f9d572 in namespace emptydir-wrapper-8744, will wait for the garbage collector to delete the pods 03/07/23 02:42:58.946 + Mar 7 02:42:59.006: INFO: Deleting ReplicationController wrapped-volume-race-b58b6a98-863f-46b7-9119-dd7703f9d572 took: 5.917642ms + Mar 7 02:42:59.106: INFO: Terminating ReplicationController wrapped-volume-race-b58b6a98-863f-46b7-9119-dd7703f9d572 pods took: 100.278807ms + STEP: Creating RC which spawns configmap-volume pods 03/07/23 02:43:01.811 + Mar 7 02:43:01.825: INFO: Pod name wrapped-volume-race-01985919-879b-4266-afad-a31555577577: Found 0 pods out of 5 + Mar 7 02:43:06.831: INFO: Pod name wrapped-volume-race-01985919-879b-4266-afad-a31555577577: Found 5 pods out of 5 + STEP: Ensuring each pod is running 03/07/23 02:43:06.831 + Mar 7 02:43:06.831: INFO: Waiting up to 5m0s for pod "wrapped-volume-race-01985919-879b-4266-afad-a31555577577-4fxmq" in namespace "emptydir-wrapper-8744" to be "running" + Mar 7 02:43:06.834: INFO: Pod "wrapped-volume-race-01985919-879b-4266-afad-a31555577577-4fxmq": Phase="Pending", Reason="", readiness=false. Elapsed: 2.866609ms + Mar 7 02:43:08.838: INFO: Pod "wrapped-volume-race-01985919-879b-4266-afad-a31555577577-4fxmq": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006649669s + Mar 7 02:43:10.837: INFO: Pod "wrapped-volume-race-01985919-879b-4266-afad-a31555577577-4fxmq": Phase="Pending", Reason="", readiness=false. 
Elapsed: 4.006195661s + Mar 7 02:43:12.839: INFO: Pod "wrapped-volume-race-01985919-879b-4266-afad-a31555577577-4fxmq": Phase="Pending", Reason="", readiness=false. Elapsed: 6.008191925s + Mar 7 02:43:14.838: INFO: Pod "wrapped-volume-race-01985919-879b-4266-afad-a31555577577-4fxmq": Phase="Pending", Reason="", readiness=false. Elapsed: 8.006684359s + Mar 7 02:43:16.839: INFO: Pod "wrapped-volume-race-01985919-879b-4266-afad-a31555577577-4fxmq": Phase="Running", Reason="", readiness=true. Elapsed: 10.007970779s + Mar 7 02:43:16.839: INFO: Pod "wrapped-volume-race-01985919-879b-4266-afad-a31555577577-4fxmq" satisfied condition "running" + Mar 7 02:43:16.839: INFO: Waiting up to 5m0s for pod "wrapped-volume-race-01985919-879b-4266-afad-a31555577577-cvf2m" in namespace "emptydir-wrapper-8744" to be "running" + Mar 7 02:43:16.841: INFO: Pod "wrapped-volume-race-01985919-879b-4266-afad-a31555577577-cvf2m": Phase="Running", Reason="", readiness=true. Elapsed: 2.203974ms + Mar 7 02:43:16.841: INFO: Pod "wrapped-volume-race-01985919-879b-4266-afad-a31555577577-cvf2m" satisfied condition "running" + Mar 7 02:43:16.841: INFO: Waiting up to 5m0s for pod "wrapped-volume-race-01985919-879b-4266-afad-a31555577577-m7s7c" in namespace "emptydir-wrapper-8744" to be "running" + Mar 7 02:43:16.844: INFO: Pod "wrapped-volume-race-01985919-879b-4266-afad-a31555577577-m7s7c": Phase="Running", Reason="", readiness=true. Elapsed: 2.353192ms + Mar 7 02:43:16.844: INFO: Pod "wrapped-volume-race-01985919-879b-4266-afad-a31555577577-m7s7c" satisfied condition "running" + Mar 7 02:43:16.844: INFO: Waiting up to 5m0s for pod "wrapped-volume-race-01985919-879b-4266-afad-a31555577577-xt9tr" in namespace "emptydir-wrapper-8744" to be "running" + Mar 7 02:43:16.846: INFO: Pod "wrapped-volume-race-01985919-879b-4266-afad-a31555577577-xt9tr": Phase="Running", Reason="", readiness=true. Elapsed: 2.375504ms + Mar 7 02:43:16.846: INFO: Pod "wrapped-volume-race-01985919-879b-4266-afad-a31555577577-xt9tr" satisfied condition "running" + Mar 7 02:43:16.846: INFO: Waiting up to 5m0s for pod "wrapped-volume-race-01985919-879b-4266-afad-a31555577577-zzbql" in namespace "emptydir-wrapper-8744" to be "running" + Mar 7 02:43:16.848: INFO: Pod "wrapped-volume-race-01985919-879b-4266-afad-a31555577577-zzbql": Phase="Running", Reason="", readiness=true. Elapsed: 2.080135ms + Mar 7 02:43:16.848: INFO: Pod "wrapped-volume-race-01985919-879b-4266-afad-a31555577577-zzbql" satisfied condition "running" + STEP: deleting ReplicationController wrapped-volume-race-01985919-879b-4266-afad-a31555577577 in namespace emptydir-wrapper-8744, will wait for the garbage collector to delete the pods 03/07/23 02:43:16.848 + Mar 7 02:43:16.907: INFO: Deleting ReplicationController wrapped-volume-race-01985919-879b-4266-afad-a31555577577 took: 4.967748ms + Mar 7 02:43:17.008: INFO: Terminating ReplicationController wrapped-volume-race-01985919-879b-4266-afad-a31555577577 pods took: 101.102804ms + STEP: Cleaning up the configMaps 03/07/23 02:43:19.908 + [AfterEach] [sig-storage] EmptyDir wrapper volumes + test/e2e/framework/framework.go:187 + Mar 7 02:43:20.101: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + STEP: Destroying namespace "emptydir-wrapper-8744" for this suite. 
03/07/23 02:43:20.103 + << End Captured GinkgoWriter Output +------------------------------ +[sig-apps] CronJob + should not schedule jobs when suspended [Slow] [Conformance] + test/e2e/apps/cronjob.go:96 +[BeforeEach] [sig-apps] CronJob + test/e2e/framework/framework.go:186 +STEP: Creating a kubernetes client 03/07/23 02:43:20.108 +Mar 7 02:43:20.108: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 +STEP: Building a namespace api object, basename cronjob 03/07/23 02:43:20.109 +STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 02:43:20.123 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 02:43:20.124 +[It] should not schedule jobs when suspended [Slow] [Conformance] + test/e2e/apps/cronjob.go:96 +STEP: Creating a suspended cronjob 03/07/23 02:43:20.126 +STEP: Ensuring no jobs are scheduled 03/07/23 02:43:20.13 +STEP: Ensuring no job exists by listing jobs explicitly 03/07/23 02:48:20.135 +STEP: Removing cronjob 03/07/23 02:48:20.137 +[AfterEach] [sig-apps] CronJob + test/e2e/framework/framework.go:187 +Mar 7 02:48:20.142: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "cronjob-5064" for this suite. 03/07/23 02:48:20.145 +{"msg":"PASSED [sig-apps] CronJob should not schedule jobs when suspended [Slow] [Conformance]","completed":62,"skipped":924,"failed":0} +------------------------------ +• [SLOW TEST] [300.041 seconds] +[sig-apps] CronJob +test/e2e/apps/framework.go:23 + should not schedule jobs when suspended [Slow] [Conformance] + test/e2e/apps/cronjob.go:96 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-apps] CronJob + test/e2e/framework/framework.go:186 + STEP: Creating a kubernetes client 03/07/23 02:43:20.108 + Mar 7 02:43:20.108: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 + STEP: Building a namespace api object, basename cronjob 03/07/23 02:43:20.109 + STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 02:43:20.123 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 02:43:20.124 + [It] should not schedule jobs when suspended [Slow] [Conformance] + test/e2e/apps/cronjob.go:96 + STEP: Creating a suspended cronjob 03/07/23 02:43:20.126 + STEP: Ensuring no jobs are scheduled 03/07/23 02:43:20.13 + STEP: Ensuring no job exists by listing jobs explicitly 03/07/23 02:48:20.135 + STEP: Removing cronjob 03/07/23 02:48:20.137 + [AfterEach] [sig-apps] CronJob + test/e2e/framework/framework.go:187 + Mar 7 02:48:20.142: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + STEP: Destroying namespace "cronjob-5064" for this suite. 
03/07/23 02:48:20.145 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-network] Services + should be able to change the type from ExternalName to ClusterIP [Conformance] + test/e2e/network/service.go:1404 +[BeforeEach] [sig-network] Services + test/e2e/framework/framework.go:186 +STEP: Creating a kubernetes client 03/07/23 02:48:20.151 +Mar 7 02:48:20.151: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 +STEP: Building a namespace api object, basename services 03/07/23 02:48:20.151 +STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 02:48:20.194 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 02:48:20.197 +[BeforeEach] [sig-network] Services + test/e2e/network/service.go:758 +[It] should be able to change the type from ExternalName to ClusterIP [Conformance] + test/e2e/network/service.go:1404 +STEP: creating a service externalname-service with the type=ExternalName in namespace services-9216 03/07/23 02:48:20.199 +STEP: changing the ExternalName service to type=ClusterIP 03/07/23 02:48:20.201 +STEP: creating replication controller externalname-service in namespace services-9216 03/07/23 02:48:20.228 +I0307 02:48:20.237687 22 runners.go:193] Created replication controller with name: externalname-service, namespace: services-9216, replica count: 2 +I0307 02:48:23.287962 22 runners.go:193] externalname-service Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady +Mar 7 02:48:23.288: INFO: Creating new exec pod +Mar 7 02:48:23.292: INFO: Waiting up to 5m0s for pod "execpodlmxqn" in namespace "services-9216" to be "running" +Mar 7 02:48:23.296: INFO: Pod "execpodlmxqn": Phase="Pending", Reason="", readiness=false. Elapsed: 4.091446ms +Mar 7 02:48:25.300: INFO: Pod "execpodlmxqn": Phase="Running", Reason="", readiness=true. 
Elapsed: 2.00731539s +Mar 7 02:48:25.300: INFO: Pod "execpodlmxqn" satisfied condition "running" +Mar 7 02:48:26.300: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1093879902 --namespace=services-9216 exec execpodlmxqn -- /bin/sh -x -c echo hostName | nc -v -t -w 2 externalname-service 80' +Mar 7 02:48:26.539: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 externalname-service 80\nConnection to externalname-service 80 port [tcp/http] succeeded!\n" +Mar 7 02:48:26.539: INFO: stdout: "" +Mar 7 02:48:27.540: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1093879902 --namespace=services-9216 exec execpodlmxqn -- /bin/sh -x -c echo hostName | nc -v -t -w 2 externalname-service 80' +Mar 7 02:48:27.721: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 externalname-service 80\nConnection to externalname-service 80 port [tcp/http] succeeded!\n" +Mar 7 02:48:27.721: INFO: stdout: "externalname-service-frksk" +Mar 7 02:48:27.721: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1093879902 --namespace=services-9216 exec execpodlmxqn -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.99.30.60 80' +Mar 7 02:48:27.899: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 10.99.30.60 80\nConnection to 10.99.30.60 80 port [tcp/http] succeeded!\n" +Mar 7 02:48:27.899: INFO: stdout: "" +Mar 7 02:48:28.899: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1093879902 --namespace=services-9216 exec execpodlmxqn -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.99.30.60 80' +Mar 7 02:48:29.099: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 10.99.30.60 80\nConnection to 10.99.30.60 80 port [tcp/http] succeeded!\n" +Mar 7 02:48:29.099: INFO: stdout: "" +Mar 7 02:48:29.899: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1093879902 --namespace=services-9216 exec execpodlmxqn -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.99.30.60 80' +Mar 7 02:48:30.103: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 10.99.30.60 80\nConnection to 10.99.30.60 80 port [tcp/http] succeeded!\n" +Mar 7 02:48:30.103: INFO: stdout: "externalname-service-jrlh5" +Mar 7 02:48:30.103: INFO: Cleaning up the ExternalName to ClusterIP test service +[AfterEach] [sig-network] Services + test/e2e/framework/framework.go:187 +Mar 7 02:48:30.127: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "services-9216" for this suite. 
03/07/23 02:48:30.131 +[AfterEach] [sig-network] Services + test/e2e/network/service.go:762 +{"msg":"PASSED [sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","completed":63,"skipped":957,"failed":0} +------------------------------ +• [SLOW TEST] [9.987 seconds] +[sig-network] Services +test/e2e/network/common/framework.go:23 + should be able to change the type from ExternalName to ClusterIP [Conformance] + test/e2e/network/service.go:1404 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-network] Services + test/e2e/framework/framework.go:186 + STEP: Creating a kubernetes client 03/07/23 02:48:20.151 + Mar 7 02:48:20.151: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 + STEP: Building a namespace api object, basename services 03/07/23 02:48:20.151 + STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 02:48:20.194 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 02:48:20.197 + [BeforeEach] [sig-network] Services + test/e2e/network/service.go:758 + [It] should be able to change the type from ExternalName to ClusterIP [Conformance] + test/e2e/network/service.go:1404 + STEP: creating a service externalname-service with the type=ExternalName in namespace services-9216 03/07/23 02:48:20.199 + STEP: changing the ExternalName service to type=ClusterIP 03/07/23 02:48:20.201 + STEP: creating replication controller externalname-service in namespace services-9216 03/07/23 02:48:20.228 + I0307 02:48:20.237687 22 runners.go:193] Created replication controller with name: externalname-service, namespace: services-9216, replica count: 2 + I0307 02:48:23.287962 22 runners.go:193] externalname-service Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady + Mar 7 02:48:23.288: INFO: Creating new exec pod + Mar 7 02:48:23.292: INFO: Waiting up to 5m0s for pod "execpodlmxqn" in namespace "services-9216" to be "running" + Mar 7 02:48:23.296: INFO: Pod "execpodlmxqn": Phase="Pending", Reason="", readiness=false. Elapsed: 4.091446ms + Mar 7 02:48:25.300: INFO: Pod "execpodlmxqn": Phase="Running", Reason="", readiness=true. 
Elapsed: 2.00731539s + Mar 7 02:48:25.300: INFO: Pod "execpodlmxqn" satisfied condition "running" + Mar 7 02:48:26.300: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1093879902 --namespace=services-9216 exec execpodlmxqn -- /bin/sh -x -c echo hostName | nc -v -t -w 2 externalname-service 80' + Mar 7 02:48:26.539: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 externalname-service 80\nConnection to externalname-service 80 port [tcp/http] succeeded!\n" + Mar 7 02:48:26.539: INFO: stdout: "" + Mar 7 02:48:27.540: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1093879902 --namespace=services-9216 exec execpodlmxqn -- /bin/sh -x -c echo hostName | nc -v -t -w 2 externalname-service 80' + Mar 7 02:48:27.721: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 externalname-service 80\nConnection to externalname-service 80 port [tcp/http] succeeded!\n" + Mar 7 02:48:27.721: INFO: stdout: "externalname-service-frksk" + Mar 7 02:48:27.721: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1093879902 --namespace=services-9216 exec execpodlmxqn -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.99.30.60 80' + Mar 7 02:48:27.899: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 10.99.30.60 80\nConnection to 10.99.30.60 80 port [tcp/http] succeeded!\n" + Mar 7 02:48:27.899: INFO: stdout: "" + Mar 7 02:48:28.899: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1093879902 --namespace=services-9216 exec execpodlmxqn -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.99.30.60 80' + Mar 7 02:48:29.099: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 10.99.30.60 80\nConnection to 10.99.30.60 80 port [tcp/http] succeeded!\n" + Mar 7 02:48:29.099: INFO: stdout: "" + Mar 7 02:48:29.899: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1093879902 --namespace=services-9216 exec execpodlmxqn -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.99.30.60 80' + Mar 7 02:48:30.103: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 10.99.30.60 80\nConnection to 10.99.30.60 80 port [tcp/http] succeeded!\n" + Mar 7 02:48:30.103: INFO: stdout: "externalname-service-jrlh5" + Mar 7 02:48:30.103: INFO: Cleaning up the ExternalName to ClusterIP test service + [AfterEach] [sig-network] Services + test/e2e/framework/framework.go:187 + Mar 7 02:48:30.127: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + STEP: Destroying namespace "services-9216" for this suite. 
03/07/23 02:48:30.131 + [AfterEach] [sig-network] Services + test/e2e/network/service.go:762 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSSSSSSS +------------------------------ +[sig-node] Probing container + should have monotonically increasing restart count [NodeConformance] [Conformance] + test/e2e/common/node/container_probe.go:195 +[BeforeEach] [sig-node] Probing container + test/e2e/framework/framework.go:186 +STEP: Creating a kubernetes client 03/07/23 02:48:30.138 +Mar 7 02:48:30.138: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 +STEP: Building a namespace api object, basename container-probe 03/07/23 02:48:30.139 +STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 02:48:30.157 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 02:48:30.16 +[BeforeEach] [sig-node] Probing container + test/e2e/common/node/container_probe.go:59 +[It] should have monotonically increasing restart count [NodeConformance] [Conformance] + test/e2e/common/node/container_probe.go:195 +STEP: Creating pod liveness-dbee10bf-78fc-46b6-914b-ef724acca49a in namespace container-probe-602 03/07/23 02:48:30.162 +Mar 7 02:48:30.183: INFO: Waiting up to 5m0s for pod "liveness-dbee10bf-78fc-46b6-914b-ef724acca49a" in namespace "container-probe-602" to be "not pending" +Mar 7 02:48:30.198: INFO: Pod "liveness-dbee10bf-78fc-46b6-914b-ef724acca49a": Phase="Pending", Reason="", readiness=false. Elapsed: 14.923132ms +Mar 7 02:48:32.201: INFO: Pod "liveness-dbee10bf-78fc-46b6-914b-ef724acca49a": Phase="Running", Reason="", readiness=true. Elapsed: 2.018246269s +Mar 7 02:48:32.201: INFO: Pod "liveness-dbee10bf-78fc-46b6-914b-ef724acca49a" satisfied condition "not pending" +Mar 7 02:48:32.201: INFO: Started pod liveness-dbee10bf-78fc-46b6-914b-ef724acca49a in namespace container-probe-602 +STEP: checking the pod's current state and verifying that restartCount is present 03/07/23 02:48:32.201 +Mar 7 02:48:32.203: INFO: Initial restart count of pod liveness-dbee10bf-78fc-46b6-914b-ef724acca49a is 0 +Mar 7 02:48:52.247: INFO: Restart count of pod container-probe-602/liveness-dbee10bf-78fc-46b6-914b-ef724acca49a is now 1 (20.043872951s elapsed) +Mar 7 02:49:12.283: INFO: Restart count of pod container-probe-602/liveness-dbee10bf-78fc-46b6-914b-ef724acca49a is now 2 (40.080132238s elapsed) +Mar 7 02:49:32.320: INFO: Restart count of pod container-probe-602/liveness-dbee10bf-78fc-46b6-914b-ef724acca49a is now 3 (1m0.117218641s elapsed) +Mar 7 02:49:52.358: INFO: Restart count of pod container-probe-602/liveness-dbee10bf-78fc-46b6-914b-ef724acca49a is now 4 (1m20.154645549s elapsed) +Mar 7 02:50:58.473: INFO: Restart count of pod container-probe-602/liveness-dbee10bf-78fc-46b6-914b-ef724acca49a is now 5 (2m26.270519917s elapsed) +STEP: deleting the pod 03/07/23 02:50:58.473 +[AfterEach] [sig-node] Probing container + test/e2e/framework/framework.go:187 +Mar 7 02:50:58.487: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "container-probe-602" for this suite. 
03/07/23 02:50:58.49 +{"msg":"PASSED [sig-node] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]","completed":64,"skipped":973,"failed":0} +------------------------------ +• [SLOW TEST] [148.357 seconds] +[sig-node] Probing container +test/e2e/common/node/framework.go:23 + should have monotonically increasing restart count [NodeConformance] [Conformance] + test/e2e/common/node/container_probe.go:195 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-node] Probing container + test/e2e/framework/framework.go:186 + STEP: Creating a kubernetes client 03/07/23 02:48:30.138 + Mar 7 02:48:30.138: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 + STEP: Building a namespace api object, basename container-probe 03/07/23 02:48:30.139 + STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 02:48:30.157 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 02:48:30.16 + [BeforeEach] [sig-node] Probing container + test/e2e/common/node/container_probe.go:59 + [It] should have monotonically increasing restart count [NodeConformance] [Conformance] + test/e2e/common/node/container_probe.go:195 + STEP: Creating pod liveness-dbee10bf-78fc-46b6-914b-ef724acca49a in namespace container-probe-602 03/07/23 02:48:30.162 + Mar 7 02:48:30.183: INFO: Waiting up to 5m0s for pod "liveness-dbee10bf-78fc-46b6-914b-ef724acca49a" in namespace "container-probe-602" to be "not pending" + Mar 7 02:48:30.198: INFO: Pod "liveness-dbee10bf-78fc-46b6-914b-ef724acca49a": Phase="Pending", Reason="", readiness=false. Elapsed: 14.923132ms + Mar 7 02:48:32.201: INFO: Pod "liveness-dbee10bf-78fc-46b6-914b-ef724acca49a": Phase="Running", Reason="", readiness=true. Elapsed: 2.018246269s + Mar 7 02:48:32.201: INFO: Pod "liveness-dbee10bf-78fc-46b6-914b-ef724acca49a" satisfied condition "not pending" + Mar 7 02:48:32.201: INFO: Started pod liveness-dbee10bf-78fc-46b6-914b-ef724acca49a in namespace container-probe-602 + STEP: checking the pod's current state and verifying that restartCount is present 03/07/23 02:48:32.201 + Mar 7 02:48:32.203: INFO: Initial restart count of pod liveness-dbee10bf-78fc-46b6-914b-ef724acca49a is 0 + Mar 7 02:48:52.247: INFO: Restart count of pod container-probe-602/liveness-dbee10bf-78fc-46b6-914b-ef724acca49a is now 1 (20.043872951s elapsed) + Mar 7 02:49:12.283: INFO: Restart count of pod container-probe-602/liveness-dbee10bf-78fc-46b6-914b-ef724acca49a is now 2 (40.080132238s elapsed) + Mar 7 02:49:32.320: INFO: Restart count of pod container-probe-602/liveness-dbee10bf-78fc-46b6-914b-ef724acca49a is now 3 (1m0.117218641s elapsed) + Mar 7 02:49:52.358: INFO: Restart count of pod container-probe-602/liveness-dbee10bf-78fc-46b6-914b-ef724acca49a is now 4 (1m20.154645549s elapsed) + Mar 7 02:50:58.473: INFO: Restart count of pod container-probe-602/liveness-dbee10bf-78fc-46b6-914b-ef724acca49a is now 5 (2m26.270519917s elapsed) + STEP: deleting the pod 03/07/23 02:50:58.473 + [AfterEach] [sig-node] Probing container + test/e2e/framework/framework.go:187 + Mar 7 02:50:58.487: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + STEP: Destroying namespace "container-probe-602" for this suite. 
03/07/23 02:50:58.49 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSS +------------------------------ +[sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] + Should recreate evicted statefulset [Conformance] + test/e2e/apps/statefulset.go:737 +[BeforeEach] [sig-apps] StatefulSet + test/e2e/framework/framework.go:186 +STEP: Creating a kubernetes client 03/07/23 02:50:58.496 +Mar 7 02:50:58.496: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 +STEP: Building a namespace api object, basename statefulset 03/07/23 02:50:58.496 +STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 02:50:58.516 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 02:50:58.519 +[BeforeEach] [sig-apps] StatefulSet + test/e2e/apps/statefulset.go:96 +[BeforeEach] Basic StatefulSet functionality [StatefulSetBasic] + test/e2e/apps/statefulset.go:111 +STEP: Creating service test in namespace statefulset-5489 03/07/23 02:50:58.521 +[It] Should recreate evicted statefulset [Conformance] + test/e2e/apps/statefulset.go:737 +STEP: Looking for a node to schedule stateful set and pod 03/07/23 02:50:58.524 +STEP: Creating pod with conflicting port in namespace statefulset-5489 03/07/23 02:50:58.529 +STEP: Waiting until pod test-pod will start running in namespace statefulset-5489 03/07/23 02:50:58.543 +Mar 7 02:50:58.543: INFO: Waiting up to 5m0s for pod "test-pod" in namespace "statefulset-5489" to be "running" +Mar 7 02:50:58.547: INFO: Pod "test-pod": Phase="Pending", Reason="", readiness=false. Elapsed: 3.858034ms +Mar 7 02:51:00.552: INFO: Pod "test-pod": Phase="Running", Reason="", readiness=true. Elapsed: 2.008887319s +Mar 7 02:51:00.552: INFO: Pod "test-pod" satisfied condition "running" +STEP: Creating statefulset with conflicting port in namespace statefulset-5489 03/07/23 02:51:00.552 +STEP: Waiting until stateful pod ss-0 will be recreated and deleted at least once in namespace statefulset-5489 03/07/23 02:51:00.559 +Mar 7 02:51:00.578: INFO: Observed stateful pod in namespace: statefulset-5489, name: ss-0, uid: 5fc8b8dd-e8ce-4183-b349-485d2bac70f9, status phase: Pending. Waiting for statefulset controller to delete. +Mar 7 02:51:00.592: INFO: Observed stateful pod in namespace: statefulset-5489, name: ss-0, uid: 5fc8b8dd-e8ce-4183-b349-485d2bac70f9, status phase: Failed. Waiting for statefulset controller to delete. +Mar 7 02:51:00.598: INFO: Observed stateful pod in namespace: statefulset-5489, name: ss-0, uid: 5fc8b8dd-e8ce-4183-b349-485d2bac70f9, status phase: Failed. Waiting for statefulset controller to delete. +Mar 7 02:51:00.600: INFO: Observed delete event for stateful pod ss-0 in namespace statefulset-5489 +STEP: Removing pod with conflicting port in namespace statefulset-5489 03/07/23 02:51:00.6 +STEP: Waiting when stateful pod ss-0 will be recreated in namespace statefulset-5489 and will be in running state 03/07/23 02:51:00.615 +[AfterEach] Basic StatefulSet functionality [StatefulSetBasic] + test/e2e/apps/statefulset.go:122 +Mar 7 02:51:02.623: INFO: Deleting all statefulset in ns statefulset-5489 +Mar 7 02:51:02.625: INFO: Scaling statefulset ss to 0 +Mar 7 02:51:12.640: INFO: Waiting for statefulset status.replicas updated to 0 +Mar 7 02:51:12.642: INFO: Deleting statefulset ss +[AfterEach] [sig-apps] StatefulSet + test/e2e/framework/framework.go:187 +Mar 7 02:51:12.656: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "statefulset-5489" for this suite. 
03/07/23 02:51:12.66 +{"msg":"PASSED [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]","completed":65,"skipped":979,"failed":0} +------------------------------ +• [SLOW TEST] [14.169 seconds] +[sig-apps] StatefulSet +test/e2e/apps/framework.go:23 + Basic StatefulSet functionality [StatefulSetBasic] + test/e2e/apps/statefulset.go:101 + Should recreate evicted statefulset [Conformance] + test/e2e/apps/statefulset.go:737 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-apps] StatefulSet + test/e2e/framework/framework.go:186 + STEP: Creating a kubernetes client 03/07/23 02:50:58.496 + Mar 7 02:50:58.496: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 + STEP: Building a namespace api object, basename statefulset 03/07/23 02:50:58.496 + STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 02:50:58.516 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 02:50:58.519 + [BeforeEach] [sig-apps] StatefulSet + test/e2e/apps/statefulset.go:96 + [BeforeEach] Basic StatefulSet functionality [StatefulSetBasic] + test/e2e/apps/statefulset.go:111 + STEP: Creating service test in namespace statefulset-5489 03/07/23 02:50:58.521 + [It] Should recreate evicted statefulset [Conformance] + test/e2e/apps/statefulset.go:737 + STEP: Looking for a node to schedule stateful set and pod 03/07/23 02:50:58.524 + STEP: Creating pod with conflicting port in namespace statefulset-5489 03/07/23 02:50:58.529 + STEP: Waiting until pod test-pod will start running in namespace statefulset-5489 03/07/23 02:50:58.543 + Mar 7 02:50:58.543: INFO: Waiting up to 5m0s for pod "test-pod" in namespace "statefulset-5489" to be "running" + Mar 7 02:50:58.547: INFO: Pod "test-pod": Phase="Pending", Reason="", readiness=false. Elapsed: 3.858034ms + Mar 7 02:51:00.552: INFO: Pod "test-pod": Phase="Running", Reason="", readiness=true. Elapsed: 2.008887319s + Mar 7 02:51:00.552: INFO: Pod "test-pod" satisfied condition "running" + STEP: Creating statefulset with conflicting port in namespace statefulset-5489 03/07/23 02:51:00.552 + STEP: Waiting until stateful pod ss-0 will be recreated and deleted at least once in namespace statefulset-5489 03/07/23 02:51:00.559 + Mar 7 02:51:00.578: INFO: Observed stateful pod in namespace: statefulset-5489, name: ss-0, uid: 5fc8b8dd-e8ce-4183-b349-485d2bac70f9, status phase: Pending. Waiting for statefulset controller to delete. + Mar 7 02:51:00.592: INFO: Observed stateful pod in namespace: statefulset-5489, name: ss-0, uid: 5fc8b8dd-e8ce-4183-b349-485d2bac70f9, status phase: Failed. Waiting for statefulset controller to delete. + Mar 7 02:51:00.598: INFO: Observed stateful pod in namespace: statefulset-5489, name: ss-0, uid: 5fc8b8dd-e8ce-4183-b349-485d2bac70f9, status phase: Failed. Waiting for statefulset controller to delete. 
+ Mar 7 02:51:00.600: INFO: Observed delete event for stateful pod ss-0 in namespace statefulset-5489 + STEP: Removing pod with conflicting port in namespace statefulset-5489 03/07/23 02:51:00.6 + STEP: Waiting when stateful pod ss-0 will be recreated in namespace statefulset-5489 and will be in running state 03/07/23 02:51:00.615 + [AfterEach] Basic StatefulSet functionality [StatefulSetBasic] + test/e2e/apps/statefulset.go:122 + Mar 7 02:51:02.623: INFO: Deleting all statefulset in ns statefulset-5489 + Mar 7 02:51:02.625: INFO: Scaling statefulset ss to 0 + Mar 7 02:51:12.640: INFO: Waiting for statefulset status.replicas updated to 0 + Mar 7 02:51:12.642: INFO: Deleting statefulset ss + [AfterEach] [sig-apps] StatefulSet + test/e2e/framework/framework.go:187 + Mar 7 02:51:12.656: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + STEP: Destroying namespace "statefulset-5489" for this suite. 03/07/23 02:51:12.66 + << End Captured GinkgoWriter Output +------------------------------ +SSSSS +------------------------------ +[sig-instrumentation] Events API + should ensure that an event can be fetched, patched, deleted, and listed [Conformance] + test/e2e/instrumentation/events.go:98 +[BeforeEach] [sig-instrumentation] Events API + test/e2e/framework/framework.go:186 +STEP: Creating a kubernetes client 03/07/23 02:51:12.666 +Mar 7 02:51:12.666: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 +STEP: Building a namespace api object, basename events 03/07/23 02:51:12.667 +STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 02:51:12.688 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 02:51:12.69 +[BeforeEach] [sig-instrumentation] Events API + test/e2e/instrumentation/events.go:84 +[It] should ensure that an event can be fetched, patched, deleted, and listed [Conformance] + test/e2e/instrumentation/events.go:98 +STEP: creating a test event 03/07/23 02:51:12.692 +STEP: listing events in all namespaces 03/07/23 02:51:12.698 +STEP: listing events in test namespace 03/07/23 02:51:12.715 +STEP: listing events with field selection filtering on source 03/07/23 02:51:12.725 +STEP: listing events with field selection filtering on reportingController 03/07/23 02:51:12.727 +STEP: getting the test event 03/07/23 02:51:12.73 +STEP: patching the test event 03/07/23 02:51:12.732 +STEP: getting the test event 03/07/23 02:51:12.758 +STEP: updating the test event 03/07/23 02:51:12.773 +STEP: getting the test event 03/07/23 02:51:12.778 +STEP: deleting the test event 03/07/23 02:51:12.782 +STEP: listing events in all namespaces 03/07/23 02:51:12.788 +STEP: listing events in test namespace 03/07/23 02:51:12.799 +[AfterEach] [sig-instrumentation] Events API + test/e2e/framework/framework.go:187 +Mar 7 02:51:12.802: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "events-7044" for this suite. 
03/07/23 02:51:12.804 +{"msg":"PASSED [sig-instrumentation] Events API should ensure that an event can be fetched, patched, deleted, and listed [Conformance]","completed":66,"skipped":984,"failed":0} +------------------------------ +• [0.143 seconds] +[sig-instrumentation] Events API +test/e2e/instrumentation/common/framework.go:23 + should ensure that an event can be fetched, patched, deleted, and listed [Conformance] + test/e2e/instrumentation/events.go:98 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-instrumentation] Events API + test/e2e/framework/framework.go:186 + STEP: Creating a kubernetes client 03/07/23 02:51:12.666 + Mar 7 02:51:12.666: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 + STEP: Building a namespace api object, basename events 03/07/23 02:51:12.667 + STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 02:51:12.688 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 02:51:12.69 + [BeforeEach] [sig-instrumentation] Events API + test/e2e/instrumentation/events.go:84 + [It] should ensure that an event can be fetched, patched, deleted, and listed [Conformance] + test/e2e/instrumentation/events.go:98 + STEP: creating a test event 03/07/23 02:51:12.692 + STEP: listing events in all namespaces 03/07/23 02:51:12.698 + STEP: listing events in test namespace 03/07/23 02:51:12.715 + STEP: listing events with field selection filtering on source 03/07/23 02:51:12.725 + STEP: listing events with field selection filtering on reportingController 03/07/23 02:51:12.727 + STEP: getting the test event 03/07/23 02:51:12.73 + STEP: patching the test event 03/07/23 02:51:12.732 + STEP: getting the test event 03/07/23 02:51:12.758 + STEP: updating the test event 03/07/23 02:51:12.773 + STEP: getting the test event 03/07/23 02:51:12.778 + STEP: deleting the test event 03/07/23 02:51:12.782 + STEP: listing events in all namespaces 03/07/23 02:51:12.788 + STEP: listing events in test namespace 03/07/23 02:51:12.799 + [AfterEach] [sig-instrumentation] Events API + test/e2e/framework/framework.go:187 + Mar 7 02:51:12.802: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + STEP: Destroying namespace "events-7044" for this suite. 
03/07/23 02:51:12.804 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-storage] ConfigMap + should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] + test/e2e/common/storage/configmap_volume.go:108 +[BeforeEach] [sig-storage] ConfigMap + test/e2e/framework/framework.go:186 +STEP: Creating a kubernetes client 03/07/23 02:51:12.813 +Mar 7 02:51:12.813: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 +STEP: Building a namespace api object, basename configmap 03/07/23 02:51:12.814 +STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 02:51:12.824 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 02:51:12.826 +[It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] + test/e2e/common/storage/configmap_volume.go:108 +STEP: Creating configMap with name configmap-test-volume-map-e4929c62-c683-4de3-8c4d-01c330078a95 03/07/23 02:51:12.828 +STEP: Creating a pod to test consume configMaps 03/07/23 02:51:12.831 +Mar 7 02:51:12.839: INFO: Waiting up to 5m0s for pod "pod-configmaps-983fad7a-eefe-4310-a84e-09ed5d284d5e" in namespace "configmap-4865" to be "Succeeded or Failed" +Mar 7 02:51:12.843: INFO: Pod "pod-configmaps-983fad7a-eefe-4310-a84e-09ed5d284d5e": Phase="Pending", Reason="", readiness=false. Elapsed: 3.661533ms +Mar 7 02:51:14.846: INFO: Pod "pod-configmaps-983fad7a-eefe-4310-a84e-09ed5d284d5e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006815396s +Mar 7 02:51:16.847: INFO: Pod "pod-configmaps-983fad7a-eefe-4310-a84e-09ed5d284d5e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.008277806s +STEP: Saw pod success 03/07/23 02:51:16.847 +Mar 7 02:51:16.848: INFO: Pod "pod-configmaps-983fad7a-eefe-4310-a84e-09ed5d284d5e" satisfied condition "Succeeded or Failed" +Mar 7 02:51:16.850: INFO: Trying to get logs from node node-2 pod pod-configmaps-983fad7a-eefe-4310-a84e-09ed5d284d5e container agnhost-container: +STEP: delete the pod 03/07/23 02:51:16.863 +Mar 7 02:51:16.871: INFO: Waiting for pod pod-configmaps-983fad7a-eefe-4310-a84e-09ed5d284d5e to disappear +Mar 7 02:51:16.874: INFO: Pod pod-configmaps-983fad7a-eefe-4310-a84e-09ed5d284d5e no longer exists +[AfterEach] [sig-storage] ConfigMap + test/e2e/framework/framework.go:187 +Mar 7 02:51:16.874: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "configmap-4865" for this suite. 
03/07/23 02:51:16.877 +{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]","completed":67,"skipped":1133,"failed":0} +------------------------------ +• [4.068 seconds] +[sig-storage] ConfigMap +test/e2e/common/storage/framework.go:23 + should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] + test/e2e/common/storage/configmap_volume.go:108 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-storage] ConfigMap + test/e2e/framework/framework.go:186 + STEP: Creating a kubernetes client 03/07/23 02:51:12.813 + Mar 7 02:51:12.813: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 + STEP: Building a namespace api object, basename configmap 03/07/23 02:51:12.814 + STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 02:51:12.824 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 02:51:12.826 + [It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] + test/e2e/common/storage/configmap_volume.go:108 + STEP: Creating configMap with name configmap-test-volume-map-e4929c62-c683-4de3-8c4d-01c330078a95 03/07/23 02:51:12.828 + STEP: Creating a pod to test consume configMaps 03/07/23 02:51:12.831 + Mar 7 02:51:12.839: INFO: Waiting up to 5m0s for pod "pod-configmaps-983fad7a-eefe-4310-a84e-09ed5d284d5e" in namespace "configmap-4865" to be "Succeeded or Failed" + Mar 7 02:51:12.843: INFO: Pod "pod-configmaps-983fad7a-eefe-4310-a84e-09ed5d284d5e": Phase="Pending", Reason="", readiness=false. Elapsed: 3.661533ms + Mar 7 02:51:14.846: INFO: Pod "pod-configmaps-983fad7a-eefe-4310-a84e-09ed5d284d5e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006815396s + Mar 7 02:51:16.847: INFO: Pod "pod-configmaps-983fad7a-eefe-4310-a84e-09ed5d284d5e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.008277806s + STEP: Saw pod success 03/07/23 02:51:16.847 + Mar 7 02:51:16.848: INFO: Pod "pod-configmaps-983fad7a-eefe-4310-a84e-09ed5d284d5e" satisfied condition "Succeeded or Failed" + Mar 7 02:51:16.850: INFO: Trying to get logs from node node-2 pod pod-configmaps-983fad7a-eefe-4310-a84e-09ed5d284d5e container agnhost-container: + STEP: delete the pod 03/07/23 02:51:16.863 + Mar 7 02:51:16.871: INFO: Waiting for pod pod-configmaps-983fad7a-eefe-4310-a84e-09ed5d284d5e to disappear + Mar 7 02:51:16.874: INFO: Pod pod-configmaps-983fad7a-eefe-4310-a84e-09ed5d284d5e no longer exists + [AfterEach] [sig-storage] ConfigMap + test/e2e/framework/framework.go:187 + Mar 7 02:51:16.874: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + STEP: Destroying namespace "configmap-4865" for this suite. 
03/07/23 02:51:16.877 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSS +------------------------------ +[sig-network] DNS + should support configurable pod DNS nameservers [Conformance] + test/e2e/network/dns.go:411 +[BeforeEach] [sig-network] DNS + test/e2e/framework/framework.go:186 +STEP: Creating a kubernetes client 03/07/23 02:51:16.882 +Mar 7 02:51:16.882: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 +STEP: Building a namespace api object, basename dns 03/07/23 02:51:16.883 +STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 02:51:16.896 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 02:51:16.898 +[It] should support configurable pod DNS nameservers [Conformance] + test/e2e/network/dns.go:411 +STEP: Creating a pod with dnsPolicy=None and customized dnsConfig... 03/07/23 02:51:16.9 +Mar 7 02:51:16.907: INFO: Created pod &Pod{ObjectMeta:{test-dns-nameservers dns-9435 6311b566-505b-469a-be80-d1f1d2bc1a68 42919 0 2023-03-07 02:51:16 +0000 UTC map[] map[] [] [] [{e2e.test Update v1 2023-03-07 02:51:16 +0000 UTC FieldsV1 {"f:spec":{"f:containers":{"k:{\"name\":\"agnhost-container\"}":{".":{},"f:args":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsConfig":{".":{},"f:nameservers":{},"f:searches":{}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} }]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-f49f6,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:agnhost-container,Image:registry.k8s.io/e2e-test-images/agnhost:2.40,Command:[],Args:[pause],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-f49f6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPr
ivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:None,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:&PodDNSConfig{Nameservers:[1.1.1.1],Searches:[resolv.conf.local],Options:[]PodDNSConfigOption{},},ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,HostUsers:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} +Mar 7 02:51:16.907: INFO: Waiting up to 5m0s for pod "test-dns-nameservers" in namespace "dns-9435" to be "running and ready" +Mar 7 02:51:16.910: INFO: Pod "test-dns-nameservers": Phase="Pending", Reason="", readiness=false. Elapsed: 2.485117ms +Mar 7 02:51:16.910: INFO: The phase of Pod test-dns-nameservers is Pending, waiting for it to be Running (with Ready = true) +Mar 7 02:51:18.913: INFO: Pod "test-dns-nameservers": Phase="Running", Reason="", readiness=true. Elapsed: 2.006086985s +Mar 7 02:51:18.913: INFO: The phase of Pod test-dns-nameservers is Running (Ready = true) +Mar 7 02:51:18.913: INFO: Pod "test-dns-nameservers" satisfied condition "running and ready" +STEP: Verifying customized DNS suffix list is configured on pod... 03/07/23 02:51:18.913 +Mar 7 02:51:18.913: INFO: ExecWithOptions {Command:[/agnhost dns-suffix] Namespace:dns-9435 PodName:test-dns-nameservers ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} +Mar 7 02:51:18.913: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 +Mar 7 02:51:18.914: INFO: ExecWithOptions: Clientset creation +Mar 7 02:51:18.914: INFO: ExecWithOptions: execute(POST https://10.96.0.1:443/api/v1/namespaces/dns-9435/pods/test-dns-nameservers/exec?command=%2Fagnhost&command=dns-suffix&container=agnhost-container&container=agnhost-container&stderr=true&stdout=true) +STEP: Verifying customized DNS server is configured on pod... 
03/07/23 02:51:18.981 +Mar 7 02:51:18.981: INFO: ExecWithOptions {Command:[/agnhost dns-server-list] Namespace:dns-9435 PodName:test-dns-nameservers ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} +Mar 7 02:51:18.982: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 +Mar 7 02:51:18.982: INFO: ExecWithOptions: Clientset creation +Mar 7 02:51:18.982: INFO: ExecWithOptions: execute(POST https://10.96.0.1:443/api/v1/namespaces/dns-9435/pods/test-dns-nameservers/exec?command=%2Fagnhost&command=dns-server-list&container=agnhost-container&container=agnhost-container&stderr=true&stdout=true) +Mar 7 02:51:19.051: INFO: Deleting pod test-dns-nameservers... +[AfterEach] [sig-network] DNS + test/e2e/framework/framework.go:187 +Mar 7 02:51:19.072: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "dns-9435" for this suite. 03/07/23 02:51:19.075 +{"msg":"PASSED [sig-network] DNS should support configurable pod DNS nameservers [Conformance]","completed":68,"skipped":1140,"failed":0} +------------------------------ +• [2.198 seconds] +[sig-network] DNS +test/e2e/network/common/framework.go:23 + should support configurable pod DNS nameservers [Conformance] + test/e2e/network/dns.go:411 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-network] DNS + test/e2e/framework/framework.go:186 + STEP: Creating a kubernetes client 03/07/23 02:51:16.882 + Mar 7 02:51:16.882: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 + STEP: Building a namespace api object, basename dns 03/07/23 02:51:16.883 + STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 02:51:16.896 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 02:51:16.898 + [It] should support configurable pod DNS nameservers [Conformance] + test/e2e/network/dns.go:411 + STEP: Creating a pod with dnsPolicy=None and customized dnsConfig... 
03/07/23 02:51:16.9 + Mar 7 02:51:16.907: INFO: Created pod &Pod{ObjectMeta:{test-dns-nameservers dns-9435 6311b566-505b-469a-be80-d1f1d2bc1a68 42919 0 2023-03-07 02:51:16 +0000 UTC map[] map[] [] [] [{e2e.test Update v1 2023-03-07 02:51:16 +0000 UTC FieldsV1 {"f:spec":{"f:containers":{"k:{\"name\":\"agnhost-container\"}":{".":{},"f:args":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsConfig":{".":{},"f:nameservers":{},"f:searches":{}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} }]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-f49f6,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:agnhost-container,Image:registry.k8s.io/e2e-test-images/agnhost:2.40,Command:[],Args:[pause],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-f49f6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:None,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tole
rations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:&PodDNSConfig{Nameservers:[1.1.1.1],Searches:[resolv.conf.local],Options:[]PodDNSConfigOption{},},ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,HostUsers:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} + Mar 7 02:51:16.907: INFO: Waiting up to 5m0s for pod "test-dns-nameservers" in namespace "dns-9435" to be "running and ready" + Mar 7 02:51:16.910: INFO: Pod "test-dns-nameservers": Phase="Pending", Reason="", readiness=false. Elapsed: 2.485117ms + Mar 7 02:51:16.910: INFO: The phase of Pod test-dns-nameservers is Pending, waiting for it to be Running (with Ready = true) + Mar 7 02:51:18.913: INFO: Pod "test-dns-nameservers": Phase="Running", Reason="", readiness=true. Elapsed: 2.006086985s + Mar 7 02:51:18.913: INFO: The phase of Pod test-dns-nameservers is Running (Ready = true) + Mar 7 02:51:18.913: INFO: Pod "test-dns-nameservers" satisfied condition "running and ready" + STEP: Verifying customized DNS suffix list is configured on pod... 03/07/23 02:51:18.913 + Mar 7 02:51:18.913: INFO: ExecWithOptions {Command:[/agnhost dns-suffix] Namespace:dns-9435 PodName:test-dns-nameservers ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} + Mar 7 02:51:18.913: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 + Mar 7 02:51:18.914: INFO: ExecWithOptions: Clientset creation + Mar 7 02:51:18.914: INFO: ExecWithOptions: execute(POST https://10.96.0.1:443/api/v1/namespaces/dns-9435/pods/test-dns-nameservers/exec?command=%2Fagnhost&command=dns-suffix&container=agnhost-container&container=agnhost-container&stderr=true&stdout=true) + STEP: Verifying customized DNS server is configured on pod... 03/07/23 02:51:18.981 + Mar 7 02:51:18.981: INFO: ExecWithOptions {Command:[/agnhost dns-server-list] Namespace:dns-9435 PodName:test-dns-nameservers ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} + Mar 7 02:51:18.982: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 + Mar 7 02:51:18.982: INFO: ExecWithOptions: Clientset creation + Mar 7 02:51:18.982: INFO: ExecWithOptions: execute(POST https://10.96.0.1:443/api/v1/namespaces/dns-9435/pods/test-dns-nameservers/exec?command=%2Fagnhost&command=dns-server-list&container=agnhost-container&container=agnhost-container&stderr=true&stdout=true) + Mar 7 02:51:19.051: INFO: Deleting pod test-dns-nameservers... + [AfterEach] [sig-network] DNS + test/e2e/framework/framework.go:187 + Mar 7 02:51:19.072: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + STEP: Destroying namespace "dns-9435" for this suite. 
03/07/23 02:51:19.075 + << End Captured GinkgoWriter Output +------------------------------ +[sig-storage] Projected secret + should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/common/storage/projected_secret.go:87 +[BeforeEach] [sig-storage] Projected secret + test/e2e/framework/framework.go:186 +STEP: Creating a kubernetes client 03/07/23 02:51:19.081 +Mar 7 02:51:19.081: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 +STEP: Building a namespace api object, basename projected 03/07/23 02:51:19.082 +STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 02:51:19.099 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 02:51:19.101 +[It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/common/storage/projected_secret.go:87 +STEP: Creating projection with secret that has name projected-secret-test-map-7814a807-6ff1-4cb1-928e-f8d719fe4887 03/07/23 02:51:19.103 +STEP: Creating a pod to test consume secrets 03/07/23 02:51:19.107 +Mar 7 02:51:19.121: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-667b40f3-acea-48f5-82b7-155b92d676d2" in namespace "projected-1879" to be "Succeeded or Failed" +Mar 7 02:51:19.134: INFO: Pod "pod-projected-secrets-667b40f3-acea-48f5-82b7-155b92d676d2": Phase="Pending", Reason="", readiness=false. Elapsed: 13.232516ms +Mar 7 02:51:21.138: INFO: Pod "pod-projected-secrets-667b40f3-acea-48f5-82b7-155b92d676d2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017161071s +Mar 7 02:51:23.138: INFO: Pod "pod-projected-secrets-667b40f3-acea-48f5-82b7-155b92d676d2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.017036876s +STEP: Saw pod success 03/07/23 02:51:23.138 +Mar 7 02:51:23.138: INFO: Pod "pod-projected-secrets-667b40f3-acea-48f5-82b7-155b92d676d2" satisfied condition "Succeeded or Failed" +Mar 7 02:51:23.141: INFO: Trying to get logs from node node-2 pod pod-projected-secrets-667b40f3-acea-48f5-82b7-155b92d676d2 container projected-secret-volume-test: +STEP: delete the pod 03/07/23 02:51:23.146 +Mar 7 02:51:23.176: INFO: Waiting for pod pod-projected-secrets-667b40f3-acea-48f5-82b7-155b92d676d2 to disappear +Mar 7 02:51:23.178: INFO: Pod pod-projected-secrets-667b40f3-acea-48f5-82b7-155b92d676d2 no longer exists +[AfterEach] [sig-storage] Projected secret + test/e2e/framework/framework.go:187 +Mar 7 02:51:23.178: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "projected-1879" for this suite. 
03/07/23 02:51:23.181 +{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","completed":69,"skipped":1140,"failed":0} +------------------------------ +• [4.105 seconds] +[sig-storage] Projected secret +test/e2e/common/storage/framework.go:23 + should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/common/storage/projected_secret.go:87 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-storage] Projected secret + test/e2e/framework/framework.go:186 + STEP: Creating a kubernetes client 03/07/23 02:51:19.081 + Mar 7 02:51:19.081: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 + STEP: Building a namespace api object, basename projected 03/07/23 02:51:19.082 + STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 02:51:19.099 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 02:51:19.101 + [It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/common/storage/projected_secret.go:87 + STEP: Creating projection with secret that has name projected-secret-test-map-7814a807-6ff1-4cb1-928e-f8d719fe4887 03/07/23 02:51:19.103 + STEP: Creating a pod to test consume secrets 03/07/23 02:51:19.107 + Mar 7 02:51:19.121: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-667b40f3-acea-48f5-82b7-155b92d676d2" in namespace "projected-1879" to be "Succeeded or Failed" + Mar 7 02:51:19.134: INFO: Pod "pod-projected-secrets-667b40f3-acea-48f5-82b7-155b92d676d2": Phase="Pending", Reason="", readiness=false. Elapsed: 13.232516ms + Mar 7 02:51:21.138: INFO: Pod "pod-projected-secrets-667b40f3-acea-48f5-82b7-155b92d676d2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017161071s + Mar 7 02:51:23.138: INFO: Pod "pod-projected-secrets-667b40f3-acea-48f5-82b7-155b92d676d2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.017036876s + STEP: Saw pod success 03/07/23 02:51:23.138 + Mar 7 02:51:23.138: INFO: Pod "pod-projected-secrets-667b40f3-acea-48f5-82b7-155b92d676d2" satisfied condition "Succeeded or Failed" + Mar 7 02:51:23.141: INFO: Trying to get logs from node node-2 pod pod-projected-secrets-667b40f3-acea-48f5-82b7-155b92d676d2 container projected-secret-volume-test: + STEP: delete the pod 03/07/23 02:51:23.146 + Mar 7 02:51:23.176: INFO: Waiting for pod pod-projected-secrets-667b40f3-acea-48f5-82b7-155b92d676d2 to disappear + Mar 7 02:51:23.178: INFO: Pod pod-projected-secrets-667b40f3-acea-48f5-82b7-155b92d676d2 no longer exists + [AfterEach] [sig-storage] Projected secret + test/e2e/framework/framework.go:187 + Mar 7 02:51:23.178: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + STEP: Destroying namespace "projected-1879" for this suite. 
03/07/23 02:51:23.181 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSS +------------------------------ +[sig-storage] EmptyDir volumes + should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/common/storage/empty_dir.go:116 +[BeforeEach] [sig-storage] EmptyDir volumes + test/e2e/framework/framework.go:186 +STEP: Creating a kubernetes client 03/07/23 02:51:23.187 +Mar 7 02:51:23.187: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 +STEP: Building a namespace api object, basename emptydir 03/07/23 02:51:23.188 +STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 02:51:23.2 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 02:51:23.203 +[It] should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/common/storage/empty_dir.go:116 +STEP: Creating a pod to test emptydir 0777 on tmpfs 03/07/23 02:51:23.205 +Mar 7 02:51:23.213: INFO: Waiting up to 5m0s for pod "pod-0a7dd6d7-06c5-4529-a32a-294a17f2048b" in namespace "emptydir-4650" to be "Succeeded or Failed" +Mar 7 02:51:23.216: INFO: Pod "pod-0a7dd6d7-06c5-4529-a32a-294a17f2048b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.782746ms +Mar 7 02:51:25.219: INFO: Pod "pod-0a7dd6d7-06c5-4529-a32a-294a17f2048b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006119722s +Mar 7 02:51:27.220: INFO: Pod "pod-0a7dd6d7-06c5-4529-a32a-294a17f2048b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.006644989s +STEP: Saw pod success 03/07/23 02:51:27.22 +Mar 7 02:51:27.220: INFO: Pod "pod-0a7dd6d7-06c5-4529-a32a-294a17f2048b" satisfied condition "Succeeded or Failed" +Mar 7 02:51:27.222: INFO: Trying to get logs from node node-2 pod pod-0a7dd6d7-06c5-4529-a32a-294a17f2048b container test-container: +STEP: delete the pod 03/07/23 02:51:27.227 +Mar 7 02:51:27.238: INFO: Waiting for pod pod-0a7dd6d7-06c5-4529-a32a-294a17f2048b to disappear +Mar 7 02:51:27.240: INFO: Pod pod-0a7dd6d7-06c5-4529-a32a-294a17f2048b no longer exists +[AfterEach] [sig-storage] EmptyDir volumes + test/e2e/framework/framework.go:187 +Mar 7 02:51:27.240: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "emptydir-4650" for this suite. 
03/07/23 02:51:27.243 +{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","completed":70,"skipped":1151,"failed":0} +------------------------------ +• [4.061 seconds] +[sig-storage] EmptyDir volumes +test/e2e/common/storage/framework.go:23 + should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/common/storage/empty_dir.go:116 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-storage] EmptyDir volumes + test/e2e/framework/framework.go:186 + STEP: Creating a kubernetes client 03/07/23 02:51:23.187 + Mar 7 02:51:23.187: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 + STEP: Building a namespace api object, basename emptydir 03/07/23 02:51:23.188 + STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 02:51:23.2 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 02:51:23.203 + [It] should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/common/storage/empty_dir.go:116 + STEP: Creating a pod to test emptydir 0777 on tmpfs 03/07/23 02:51:23.205 + Mar 7 02:51:23.213: INFO: Waiting up to 5m0s for pod "pod-0a7dd6d7-06c5-4529-a32a-294a17f2048b" in namespace "emptydir-4650" to be "Succeeded or Failed" + Mar 7 02:51:23.216: INFO: Pod "pod-0a7dd6d7-06c5-4529-a32a-294a17f2048b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.782746ms + Mar 7 02:51:25.219: INFO: Pod "pod-0a7dd6d7-06c5-4529-a32a-294a17f2048b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006119722s + Mar 7 02:51:27.220: INFO: Pod "pod-0a7dd6d7-06c5-4529-a32a-294a17f2048b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.006644989s + STEP: Saw pod success 03/07/23 02:51:27.22 + Mar 7 02:51:27.220: INFO: Pod "pod-0a7dd6d7-06c5-4529-a32a-294a17f2048b" satisfied condition "Succeeded or Failed" + Mar 7 02:51:27.222: INFO: Trying to get logs from node node-2 pod pod-0a7dd6d7-06c5-4529-a32a-294a17f2048b container test-container: + STEP: delete the pod 03/07/23 02:51:27.227 + Mar 7 02:51:27.238: INFO: Waiting for pod pod-0a7dd6d7-06c5-4529-a32a-294a17f2048b to disappear + Mar 7 02:51:27.240: INFO: Pod pod-0a7dd6d7-06c5-4529-a32a-294a17f2048b no longer exists + [AfterEach] [sig-storage] EmptyDir volumes + test/e2e/framework/framework.go:187 + Mar 7 02:51:27.240: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + STEP: Destroying namespace "emptydir-4650" for this suite. 
03/07/23 02:51:27.243 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSSSSSSS +------------------------------ +[sig-storage] Projected downwardAPI + should provide container's memory limit [NodeConformance] [Conformance] + test/e2e/common/storage/projected_downwardapi.go:206 +[BeforeEach] [sig-storage] Projected downwardAPI + test/e2e/framework/framework.go:186 +STEP: Creating a kubernetes client 03/07/23 02:51:27.25 +Mar 7 02:51:27.250: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 +STEP: Building a namespace api object, basename projected 03/07/23 02:51:27.251 +STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 02:51:27.262 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 02:51:27.264 +[BeforeEach] [sig-storage] Projected downwardAPI + test/e2e/common/storage/projected_downwardapi.go:43 +[It] should provide container's memory limit [NodeConformance] [Conformance] + test/e2e/common/storage/projected_downwardapi.go:206 +STEP: Creating a pod to test downward API volume plugin 03/07/23 02:51:27.266 +Mar 7 02:51:27.272: INFO: Waiting up to 5m0s for pod "downwardapi-volume-1307578e-de66-4100-98a1-91722ea46ee9" in namespace "projected-5946" to be "Succeeded or Failed" +Mar 7 02:51:27.274: INFO: Pod "downwardapi-volume-1307578e-de66-4100-98a1-91722ea46ee9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.437352ms +Mar 7 02:51:29.279: INFO: Pod "downwardapi-volume-1307578e-de66-4100-98a1-91722ea46ee9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006655021s +Mar 7 02:51:31.279: INFO: Pod "downwardapi-volume-1307578e-de66-4100-98a1-91722ea46ee9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.006666366s +STEP: Saw pod success 03/07/23 02:51:31.279 +Mar 7 02:51:31.279: INFO: Pod "downwardapi-volume-1307578e-de66-4100-98a1-91722ea46ee9" satisfied condition "Succeeded or Failed" +Mar 7 02:51:31.281: INFO: Trying to get logs from node node-2 pod downwardapi-volume-1307578e-de66-4100-98a1-91722ea46ee9 container client-container: +STEP: delete the pod 03/07/23 02:51:31.286 +Mar 7 02:51:31.323: INFO: Waiting for pod downwardapi-volume-1307578e-de66-4100-98a1-91722ea46ee9 to disappear +Mar 7 02:51:31.325: INFO: Pod downwardapi-volume-1307578e-de66-4100-98a1-91722ea46ee9 no longer exists +[AfterEach] [sig-storage] Projected downwardAPI + test/e2e/framework/framework.go:187 +Mar 7 02:51:31.325: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "projected-5946" for this suite. 
03/07/23 02:51:31.328 +{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance]","completed":71,"skipped":1167,"failed":0} +------------------------------ +• [4.083 seconds] +[sig-storage] Projected downwardAPI +test/e2e/common/storage/framework.go:23 + should provide container's memory limit [NodeConformance] [Conformance] + test/e2e/common/storage/projected_downwardapi.go:206 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-storage] Projected downwardAPI + test/e2e/framework/framework.go:186 + STEP: Creating a kubernetes client 03/07/23 02:51:27.25 + Mar 7 02:51:27.250: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 + STEP: Building a namespace api object, basename projected 03/07/23 02:51:27.251 + STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 02:51:27.262 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 02:51:27.264 + [BeforeEach] [sig-storage] Projected downwardAPI + test/e2e/common/storage/projected_downwardapi.go:43 + [It] should provide container's memory limit [NodeConformance] [Conformance] + test/e2e/common/storage/projected_downwardapi.go:206 + STEP: Creating a pod to test downward API volume plugin 03/07/23 02:51:27.266 + Mar 7 02:51:27.272: INFO: Waiting up to 5m0s for pod "downwardapi-volume-1307578e-de66-4100-98a1-91722ea46ee9" in namespace "projected-5946" to be "Succeeded or Failed" + Mar 7 02:51:27.274: INFO: Pod "downwardapi-volume-1307578e-de66-4100-98a1-91722ea46ee9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.437352ms + Mar 7 02:51:29.279: INFO: Pod "downwardapi-volume-1307578e-de66-4100-98a1-91722ea46ee9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006655021s + Mar 7 02:51:31.279: INFO: Pod "downwardapi-volume-1307578e-de66-4100-98a1-91722ea46ee9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.006666366s + STEP: Saw pod success 03/07/23 02:51:31.279 + Mar 7 02:51:31.279: INFO: Pod "downwardapi-volume-1307578e-de66-4100-98a1-91722ea46ee9" satisfied condition "Succeeded or Failed" + Mar 7 02:51:31.281: INFO: Trying to get logs from node node-2 pod downwardapi-volume-1307578e-de66-4100-98a1-91722ea46ee9 container client-container: + STEP: delete the pod 03/07/23 02:51:31.286 + Mar 7 02:51:31.323: INFO: Waiting for pod downwardapi-volume-1307578e-de66-4100-98a1-91722ea46ee9 to disappear + Mar 7 02:51:31.325: INFO: Pod downwardapi-volume-1307578e-de66-4100-98a1-91722ea46ee9 no longer exists + [AfterEach] [sig-storage] Projected downwardAPI + test/e2e/framework/framework.go:187 + Mar 7 02:51:31.325: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + STEP: Destroying namespace "projected-5946" for this suite. 
03/07/23 02:51:31.328 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-network] EndpointSlice + should create Endpoints and EndpointSlices for Pods matching a Service [Conformance] + test/e2e/network/endpointslice.go:204 +[BeforeEach] [sig-network] EndpointSlice + test/e2e/framework/framework.go:186 +STEP: Creating a kubernetes client 03/07/23 02:51:31.335 +Mar 7 02:51:31.335: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 +STEP: Building a namespace api object, basename endpointslice 03/07/23 02:51:31.336 +STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 02:51:31.348 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 02:51:31.35 +[BeforeEach] [sig-network] EndpointSlice + test/e2e/network/endpointslice.go:51 +[It] should create Endpoints and EndpointSlices for Pods matching a Service [Conformance] + test/e2e/network/endpointslice.go:204 +STEP: referencing a single matching pod 03/07/23 02:51:36.439 +STEP: referencing matching pods with named port 03/07/23 02:51:41.445 +STEP: creating empty Endpoints and EndpointSlices for no matching Pods 03/07/23 02:51:46.451 +STEP: recreating EndpointSlices after they've been deleted 03/07/23 02:51:51.459 +Mar 7 02:51:51.477: INFO: EndpointSlice for Service endpointslice-8236/example-named-port not found +[AfterEach] [sig-network] EndpointSlice + test/e2e/framework/framework.go:187 +Mar 7 02:52:01.485: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "endpointslice-8236" for this suite. 03/07/23 02:52:01.488 +{"msg":"PASSED [sig-network] EndpointSlice should create Endpoints and EndpointSlices for Pods matching a Service [Conformance]","completed":72,"skipped":1201,"failed":0} +------------------------------ +• [SLOW TEST] [30.158 seconds] +[sig-network] EndpointSlice +test/e2e/network/common/framework.go:23 + should create Endpoints and EndpointSlices for Pods matching a Service [Conformance] + test/e2e/network/endpointslice.go:204 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-network] EndpointSlice + test/e2e/framework/framework.go:186 + STEP: Creating a kubernetes client 03/07/23 02:51:31.335 + Mar 7 02:51:31.335: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 + STEP: Building a namespace api object, basename endpointslice 03/07/23 02:51:31.336 + STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 02:51:31.348 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 02:51:31.35 + [BeforeEach] [sig-network] EndpointSlice + test/e2e/network/endpointslice.go:51 + [It] should create Endpoints and EndpointSlices for Pods matching a Service [Conformance] + test/e2e/network/endpointslice.go:204 + STEP: referencing a single matching pod 03/07/23 02:51:36.439 + STEP: referencing matching pods with named port 03/07/23 02:51:41.445 + STEP: creating empty Endpoints and EndpointSlices for no matching Pods 03/07/23 02:51:46.451 + STEP: recreating EndpointSlices after they've been deleted 03/07/23 02:51:51.459 + Mar 7 02:51:51.477: INFO: EndpointSlice for Service endpointslice-8236/example-named-port not found + [AfterEach] [sig-network] EndpointSlice + test/e2e/framework/framework.go:187 + Mar 7 02:52:01.485: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + STEP: Destroying namespace "endpointslice-8236" for this suite. 
03/07/23 02:52:01.488 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-api-machinery] ResourceQuota + should verify ResourceQuota with best effort scope. [Conformance] + test/e2e/apimachinery/resource_quota.go:793 +[BeforeEach] [sig-api-machinery] ResourceQuota + test/e2e/framework/framework.go:186 +STEP: Creating a kubernetes client 03/07/23 02:52:01.495 +Mar 7 02:52:01.495: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 +STEP: Building a namespace api object, basename resourcequota 03/07/23 02:52:01.496 +STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 02:52:01.523 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 02:52:01.526 +[It] should verify ResourceQuota with best effort scope. [Conformance] + test/e2e/apimachinery/resource_quota.go:793 +STEP: Creating a ResourceQuota with best effort scope 03/07/23 02:52:01.528 +STEP: Ensuring ResourceQuota status is calculated 03/07/23 02:52:01.532 +STEP: Creating a ResourceQuota with not best effort scope 03/07/23 02:52:03.537 +STEP: Ensuring ResourceQuota status is calculated 03/07/23 02:52:03.578 +STEP: Creating a best-effort pod 03/07/23 02:52:05.582 +STEP: Ensuring resource quota with best effort scope captures the pod usage 03/07/23 02:52:05.66 +STEP: Ensuring resource quota with not best effort ignored the pod usage 03/07/23 02:52:07.664 +STEP: Deleting the pod 03/07/23 02:52:09.667 +STEP: Ensuring resource quota status released the pod usage 03/07/23 02:52:09.726 +STEP: Creating a not best-effort pod 03/07/23 02:52:11.728 +STEP: Ensuring resource quota with not best effort scope captures the pod usage 03/07/23 02:52:11.736 +STEP: Ensuring resource quota with best effort scope ignored the pod usage 03/07/23 02:52:13.739 +STEP: Deleting the pod 03/07/23 02:52:15.742 +STEP: Ensuring resource quota status released the pod usage 03/07/23 02:52:15.751 +[AfterEach] [sig-api-machinery] ResourceQuota + test/e2e/framework/framework.go:187 +Mar 7 02:52:17.754: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "resourcequota-5669" for this suite. 03/07/23 02:52:17.758 +{"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with best effort scope. [Conformance]","completed":73,"skipped":1240,"failed":0} +------------------------------ +• [SLOW TEST] [16.292 seconds] +[sig-api-machinery] ResourceQuota +test/e2e/apimachinery/framework.go:23 + should verify ResourceQuota with best effort scope. [Conformance] + test/e2e/apimachinery/resource_quota.go:793 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-api-machinery] ResourceQuota + test/e2e/framework/framework.go:186 + STEP: Creating a kubernetes client 03/07/23 02:52:01.495 + Mar 7 02:52:01.495: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 + STEP: Building a namespace api object, basename resourcequota 03/07/23 02:52:01.496 + STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 02:52:01.523 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 02:52:01.526 + [It] should verify ResourceQuota with best effort scope. 
[Conformance] + test/e2e/apimachinery/resource_quota.go:793 + STEP: Creating a ResourceQuota with best effort scope 03/07/23 02:52:01.528 + STEP: Ensuring ResourceQuota status is calculated 03/07/23 02:52:01.532 + STEP: Creating a ResourceQuota with not best effort scope 03/07/23 02:52:03.537 + STEP: Ensuring ResourceQuota status is calculated 03/07/23 02:52:03.578 + STEP: Creating a best-effort pod 03/07/23 02:52:05.582 + STEP: Ensuring resource quota with best effort scope captures the pod usage 03/07/23 02:52:05.66 + STEP: Ensuring resource quota with not best effort ignored the pod usage 03/07/23 02:52:07.664 + STEP: Deleting the pod 03/07/23 02:52:09.667 + STEP: Ensuring resource quota status released the pod usage 03/07/23 02:52:09.726 + STEP: Creating a not best-effort pod 03/07/23 02:52:11.728 + STEP: Ensuring resource quota with not best effort scope captures the pod usage 03/07/23 02:52:11.736 + STEP: Ensuring resource quota with best effort scope ignored the pod usage 03/07/23 02:52:13.739 + STEP: Deleting the pod 03/07/23 02:52:15.742 + STEP: Ensuring resource quota status released the pod usage 03/07/23 02:52:15.751 + [AfterEach] [sig-api-machinery] ResourceQuota + test/e2e/framework/framework.go:187 + Mar 7 02:52:17.754: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + STEP: Destroying namespace "resourcequota-5669" for this suite. 03/07/23 02:52:17.758 + << End Captured GinkgoWriter Output +------------------------------ +SS +------------------------------ +[sig-node] Probing container + should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance] + test/e2e/common/node/container_probe.go:180 +[BeforeEach] [sig-node] Probing container + test/e2e/framework/framework.go:186 +STEP: Creating a kubernetes client 03/07/23 02:52:17.788 +Mar 7 02:52:17.788: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 +STEP: Building a namespace api object, basename container-probe 03/07/23 02:52:17.789 +STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 02:52:17.801 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 02:52:17.802 +[BeforeEach] [sig-node] Probing container + test/e2e/common/node/container_probe.go:59 +[It] should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance] + test/e2e/common/node/container_probe.go:180 +STEP: Creating pod liveness-8fc4d03a-7394-499c-b7ad-a17fe9178013 in namespace container-probe-5971 03/07/23 02:52:17.804 +Mar 7 02:52:17.810: INFO: Waiting up to 5m0s for pod "liveness-8fc4d03a-7394-499c-b7ad-a17fe9178013" in namespace "container-probe-5971" to be "not pending" +Mar 7 02:52:17.813: INFO: Pod "liveness-8fc4d03a-7394-499c-b7ad-a17fe9178013": Phase="Pending", Reason="", readiness=false. Elapsed: 2.554994ms +Mar 7 02:52:19.822: INFO: Pod "liveness-8fc4d03a-7394-499c-b7ad-a17fe9178013": Phase="Running", Reason="", readiness=true. 
Elapsed: 2.012137541s +Mar 7 02:52:19.822: INFO: Pod "liveness-8fc4d03a-7394-499c-b7ad-a17fe9178013" satisfied condition "not pending" +Mar 7 02:52:19.822: INFO: Started pod liveness-8fc4d03a-7394-499c-b7ad-a17fe9178013 in namespace container-probe-5971 +STEP: checking the pod's current state and verifying that restartCount is present 03/07/23 02:52:19.822 +Mar 7 02:52:19.826: INFO: Initial restart count of pod liveness-8fc4d03a-7394-499c-b7ad-a17fe9178013 is 0 +STEP: deleting the pod 03/07/23 02:56:20.339 +[AfterEach] [sig-node] Probing container + test/e2e/framework/framework.go:187 +Mar 7 02:56:20.349: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "container-probe-5971" for this suite. 03/07/23 02:56:20.354 +{"msg":"PASSED [sig-node] Probing container should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance]","completed":74,"skipped":1242,"failed":0} +------------------------------ +• [SLOW TEST] [242.572 seconds] +[sig-node] Probing container +test/e2e/common/node/framework.go:23 + should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance] + test/e2e/common/node/container_probe.go:180 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-node] Probing container + test/e2e/framework/framework.go:186 + STEP: Creating a kubernetes client 03/07/23 02:52:17.788 + Mar 7 02:52:17.788: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 + STEP: Building a namespace api object, basename container-probe 03/07/23 02:52:17.789 + STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 02:52:17.801 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 02:52:17.802 + [BeforeEach] [sig-node] Probing container + test/e2e/common/node/container_probe.go:59 + [It] should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance] + test/e2e/common/node/container_probe.go:180 + STEP: Creating pod liveness-8fc4d03a-7394-499c-b7ad-a17fe9178013 in namespace container-probe-5971 03/07/23 02:52:17.804 + Mar 7 02:52:17.810: INFO: Waiting up to 5m0s for pod "liveness-8fc4d03a-7394-499c-b7ad-a17fe9178013" in namespace "container-probe-5971" to be "not pending" + Mar 7 02:52:17.813: INFO: Pod "liveness-8fc4d03a-7394-499c-b7ad-a17fe9178013": Phase="Pending", Reason="", readiness=false. Elapsed: 2.554994ms + Mar 7 02:52:19.822: INFO: Pod "liveness-8fc4d03a-7394-499c-b7ad-a17fe9178013": Phase="Running", Reason="", readiness=true. Elapsed: 2.012137541s + Mar 7 02:52:19.822: INFO: Pod "liveness-8fc4d03a-7394-499c-b7ad-a17fe9178013" satisfied condition "not pending" + Mar 7 02:52:19.822: INFO: Started pod liveness-8fc4d03a-7394-499c-b7ad-a17fe9178013 in namespace container-probe-5971 + STEP: checking the pod's current state and verifying that restartCount is present 03/07/23 02:52:19.822 + Mar 7 02:52:19.826: INFO: Initial restart count of pod liveness-8fc4d03a-7394-499c-b7ad-a17fe9178013 is 0 + STEP: deleting the pod 03/07/23 02:56:20.339 + [AfterEach] [sig-node] Probing container + test/e2e/framework/framework.go:187 + Mar 7 02:56:20.349: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + STEP: Destroying namespace "container-probe-5971" for this suite. 
03/07/23 02:56:20.354 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-node] Sysctls [LinuxOnly] [NodeConformance] + should support sysctls [MinimumKubeletVersion:1.21] [Conformance] + test/e2e/common/node/sysctl.go:77 +[BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeConformance] + test/e2e/common/node/sysctl.go:37 +[BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeConformance] + test/e2e/framework/framework.go:186 +STEP: Creating a kubernetes client 03/07/23 02:56:20.362 +Mar 7 02:56:20.362: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 +STEP: Building a namespace api object, basename sysctl 03/07/23 02:56:20.364 +STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 02:56:20.383 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 02:56:20.387 +[BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeConformance] + test/e2e/common/node/sysctl.go:67 +[It] should support sysctls [MinimumKubeletVersion:1.21] [Conformance] + test/e2e/common/node/sysctl.go:77 +STEP: Creating a pod with the kernel.shm_rmid_forced sysctl 03/07/23 02:56:20.389 +STEP: Watching for error events or started pod 03/07/23 02:56:20.399 +STEP: Waiting for pod completion 03/07/23 02:56:22.406 +Mar 7 02:56:22.406: INFO: Waiting up to 3m0s for pod "sysctl-f8eded0e-073d-479f-a680-672c8be8bc87" in namespace "sysctl-8745" to be "completed" +Mar 7 02:56:22.409: INFO: Pod "sysctl-f8eded0e-073d-479f-a680-672c8be8bc87": Phase="Pending", Reason="", readiness=false. Elapsed: 2.581425ms +Mar 7 02:56:24.412: INFO: Pod "sysctl-f8eded0e-073d-479f-a680-672c8be8bc87": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.005858902s +Mar 7 02:56:24.412: INFO: Pod "sysctl-f8eded0e-073d-479f-a680-672c8be8bc87" satisfied condition "completed" +STEP: Checking that the pod succeeded 03/07/23 02:56:24.415 +STEP: Getting logs from the pod 03/07/23 02:56:24.415 +STEP: Checking that the sysctl is actually updated 03/07/23 02:56:24.428 +[AfterEach] [sig-node] Sysctls [LinuxOnly] [NodeConformance] + test/e2e/framework/framework.go:187 +Mar 7 02:56:24.428: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "sysctl-8745" for this suite. 
03/07/23 02:56:24.432 +{"msg":"PASSED [sig-node] Sysctls [LinuxOnly] [NodeConformance] should support sysctls [MinimumKubeletVersion:1.21] [Conformance]","completed":75,"skipped":1270,"failed":0} +------------------------------ +• [4.075 seconds] +[sig-node] Sysctls [LinuxOnly] [NodeConformance] +test/e2e/common/node/framework.go:23 + should support sysctls [MinimumKubeletVersion:1.21] [Conformance] + test/e2e/common/node/sysctl.go:77 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeConformance] + test/e2e/common/node/sysctl.go:37 + [BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeConformance] + test/e2e/framework/framework.go:186 + STEP: Creating a kubernetes client 03/07/23 02:56:20.362 + Mar 7 02:56:20.362: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 + STEP: Building a namespace api object, basename sysctl 03/07/23 02:56:20.364 + STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 02:56:20.383 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 02:56:20.387 + [BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeConformance] + test/e2e/common/node/sysctl.go:67 + [It] should support sysctls [MinimumKubeletVersion:1.21] [Conformance] + test/e2e/common/node/sysctl.go:77 + STEP: Creating a pod with the kernel.shm_rmid_forced sysctl 03/07/23 02:56:20.389 + STEP: Watching for error events or started pod 03/07/23 02:56:20.399 + STEP: Waiting for pod completion 03/07/23 02:56:22.406 + Mar 7 02:56:22.406: INFO: Waiting up to 3m0s for pod "sysctl-f8eded0e-073d-479f-a680-672c8be8bc87" in namespace "sysctl-8745" to be "completed" + Mar 7 02:56:22.409: INFO: Pod "sysctl-f8eded0e-073d-479f-a680-672c8be8bc87": Phase="Pending", Reason="", readiness=false. Elapsed: 2.581425ms + Mar 7 02:56:24.412: INFO: Pod "sysctl-f8eded0e-073d-479f-a680-672c8be8bc87": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.005858902s + Mar 7 02:56:24.412: INFO: Pod "sysctl-f8eded0e-073d-479f-a680-672c8be8bc87" satisfied condition "completed" + STEP: Checking that the pod succeeded 03/07/23 02:56:24.415 + STEP: Getting logs from the pod 03/07/23 02:56:24.415 + STEP: Checking that the sysctl is actually updated 03/07/23 02:56:24.428 + [AfterEach] [sig-node] Sysctls [LinuxOnly] [NodeConformance] + test/e2e/framework/framework.go:187 + Mar 7 02:56:24.428: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + STEP: Destroying namespace "sysctl-8745" for this suite. 
03/07/23 02:56:24.432 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSS +------------------------------ +[sig-api-machinery] Namespaces [Serial] + should apply changes to a namespace status [Conformance] + test/e2e/apimachinery/namespace.go:298 +[BeforeEach] [sig-api-machinery] Namespaces [Serial] + test/e2e/framework/framework.go:186 +STEP: Creating a kubernetes client 03/07/23 02:56:24.438 +Mar 7 02:56:24.438: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 +STEP: Building a namespace api object, basename namespaces 03/07/23 02:56:24.439 +STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 02:56:24.451 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 02:56:24.453 +[It] should apply changes to a namespace status [Conformance] + test/e2e/apimachinery/namespace.go:298 +STEP: Read namespace status 03/07/23 02:56:24.455 +Mar 7 02:56:24.458: INFO: Status: v1.NamespaceStatus{Phase:"Active", Conditions:[]v1.NamespaceCondition(nil)} +STEP: Patch namespace status 03/07/23 02:56:24.458 +Mar 7 02:56:24.462: INFO: Status.Condition: v1.NamespaceCondition{Type:"StatusPatch", Status:"True", LastTransitionTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Reason:"E2E", Message:"Patched by an e2e test"} +STEP: Update namespace status 03/07/23 02:56:24.462 +Mar 7 02:56:24.471: INFO: Status.Condition: v1.NamespaceCondition{Type:"StatusUpdate", Status:"True", LastTransitionTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Reason:"E2E", Message:"Updated by an e2e test"} +[AfterEach] [sig-api-machinery] Namespaces [Serial] + test/e2e/framework/framework.go:187 +Mar 7 02:56:24.471: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "namespaces-1225" for this suite. 
03/07/23 02:56:24.475 +{"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should apply changes to a namespace status [Conformance]","completed":76,"skipped":1277,"failed":0} +------------------------------ +• [0.041 seconds] +[sig-api-machinery] Namespaces [Serial] +test/e2e/apimachinery/framework.go:23 + should apply changes to a namespace status [Conformance] + test/e2e/apimachinery/namespace.go:298 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-api-machinery] Namespaces [Serial] + test/e2e/framework/framework.go:186 + STEP: Creating a kubernetes client 03/07/23 02:56:24.438 + Mar 7 02:56:24.438: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 + STEP: Building a namespace api object, basename namespaces 03/07/23 02:56:24.439 + STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 02:56:24.451 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 02:56:24.453 + [It] should apply changes to a namespace status [Conformance] + test/e2e/apimachinery/namespace.go:298 + STEP: Read namespace status 03/07/23 02:56:24.455 + Mar 7 02:56:24.458: INFO: Status: v1.NamespaceStatus{Phase:"Active", Conditions:[]v1.NamespaceCondition(nil)} + STEP: Patch namespace status 03/07/23 02:56:24.458 + Mar 7 02:56:24.462: INFO: Status.Condition: v1.NamespaceCondition{Type:"StatusPatch", Status:"True", LastTransitionTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Reason:"E2E", Message:"Patched by an e2e test"} + STEP: Update namespace status 03/07/23 02:56:24.462 + Mar 7 02:56:24.471: INFO: Status.Condition: v1.NamespaceCondition{Type:"StatusUpdate", Status:"True", LastTransitionTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Reason:"E2E", Message:"Updated by an e2e test"} + [AfterEach] [sig-api-machinery] Namespaces [Serial] + test/e2e/framework/framework.go:187 + Mar 7 02:56:24.471: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + STEP: Destroying namespace "namespaces-1225" for this suite. 
03/07/23 02:56:24.475 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-apps] Job + should manage the lifecycle of a job [Conformance] + test/e2e/apps/job.go:531 +[BeforeEach] [sig-apps] Job + test/e2e/framework/framework.go:186 +STEP: Creating a kubernetes client 03/07/23 02:56:24.48 +Mar 7 02:56:24.480: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 +STEP: Building a namespace api object, basename job 03/07/23 02:56:24.481 +STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 02:56:24.492 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 02:56:24.493 +[It] should manage the lifecycle of a job [Conformance] + test/e2e/apps/job.go:531 +STEP: Creating a suspended job 03/07/23 02:56:24.497 +STEP: Patching the Job 03/07/23 02:56:24.501 +STEP: Watching for Job to be patched 03/07/23 02:56:24.512 +Mar 7 02:56:24.514: INFO: Event ADDED observed for Job e2e-g4p8l in namespace job-6299 with labels: map[e2e-job-label:e2e-g4p8l] and annotations: map[batch.kubernetes.io/job-tracking:] +Mar 7 02:56:24.514: INFO: Event MODIFIED observed for Job e2e-g4p8l in namespace job-6299 with labels: map[e2e-job-label:e2e-g4p8l] and annotations: map[batch.kubernetes.io/job-tracking:] +Mar 7 02:56:24.514: INFO: Event MODIFIED found for Job e2e-g4p8l in namespace job-6299 with labels: map[e2e-g4p8l:patched e2e-job-label:e2e-g4p8l] and annotations: map[batch.kubernetes.io/job-tracking:] +STEP: Updating the job 03/07/23 02:56:24.514 +STEP: Watching for Job to be updated 03/07/23 02:56:24.52 +Mar 7 02:56:24.521: INFO: Event MODIFIED found for Job e2e-g4p8l in namespace job-6299 with labels: map[e2e-g4p8l:patched e2e-job-label:e2e-g4p8l] and annotations: map[batch.kubernetes.io/job-tracking: updated:true] +Mar 7 02:56:24.521: INFO: Found Job annotations: map[string]string{"batch.kubernetes.io/job-tracking":"", "updated":"true"} +STEP: Listing all Jobs with LabelSelector 03/07/23 02:56:24.521 +Mar 7 02:56:24.524: INFO: Job: e2e-g4p8l as labels: map[e2e-g4p8l:patched e2e-job-label:e2e-g4p8l] +STEP: Waiting for job to complete 03/07/23 02:56:24.524 +STEP: Delete a job collection with a labelselector 03/07/23 02:56:32.528 +STEP: Watching for Job to be deleted 03/07/23 02:56:32.534 +Mar 7 02:56:32.536: INFO: Event MODIFIED observed for Job e2e-g4p8l in namespace job-6299 with labels: map[e2e-g4p8l:patched e2e-job-label:e2e-g4p8l] and annotations: map[batch.kubernetes.io/job-tracking: updated:true] +Mar 7 02:56:32.536: INFO: Event MODIFIED observed for Job e2e-g4p8l in namespace job-6299 with labels: map[e2e-g4p8l:patched e2e-job-label:e2e-g4p8l] and annotations: map[batch.kubernetes.io/job-tracking: updated:true] +Mar 7 02:56:32.536: INFO: Event MODIFIED observed for Job e2e-g4p8l in namespace job-6299 with labels: map[e2e-g4p8l:patched e2e-job-label:e2e-g4p8l] and annotations: map[batch.kubernetes.io/job-tracking: updated:true] +Mar 7 02:56:32.536: INFO: Event MODIFIED observed for Job e2e-g4p8l in namespace job-6299 with labels: map[e2e-g4p8l:patched e2e-job-label:e2e-g4p8l] and annotations: map[batch.kubernetes.io/job-tracking: updated:true] +Mar 7 02:56:32.536: INFO: Event MODIFIED observed for Job e2e-g4p8l in namespace job-6299 with labels: map[e2e-g4p8l:patched e2e-job-label:e2e-g4p8l] and annotations: map[batch.kubernetes.io/job-tracking: updated:true] +Mar 7 02:56:32.536: INFO: Event DELETED found for Job e2e-g4p8l in namespace job-6299 with labels: map[e2e-g4p8l:patched 
e2e-job-label:e2e-g4p8l] and annotations: map[batch.kubernetes.io/job-tracking: updated:true] +STEP: Relist jobs to confirm deletion 03/07/23 02:56:32.536 +[AfterEach] [sig-apps] Job + test/e2e/framework/framework.go:187 +Mar 7 02:56:32.542: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "job-6299" for this suite. 03/07/23 02:56:32.545 +{"msg":"PASSED [sig-apps] Job should manage the lifecycle of a job [Conformance]","completed":77,"skipped":1301,"failed":0} +------------------------------ +• [SLOW TEST] [8.070 seconds] +[sig-apps] Job +test/e2e/apps/framework.go:23 + should manage the lifecycle of a job [Conformance] + test/e2e/apps/job.go:531 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-apps] Job + test/e2e/framework/framework.go:186 + STEP: Creating a kubernetes client 03/07/23 02:56:24.48 + Mar 7 02:56:24.480: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 + STEP: Building a namespace api object, basename job 03/07/23 02:56:24.481 + STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 02:56:24.492 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 02:56:24.493 + [It] should manage the lifecycle of a job [Conformance] + test/e2e/apps/job.go:531 + STEP: Creating a suspended job 03/07/23 02:56:24.497 + STEP: Patching the Job 03/07/23 02:56:24.501 + STEP: Watching for Job to be patched 03/07/23 02:56:24.512 + Mar 7 02:56:24.514: INFO: Event ADDED observed for Job e2e-g4p8l in namespace job-6299 with labels: map[e2e-job-label:e2e-g4p8l] and annotations: map[batch.kubernetes.io/job-tracking:] + Mar 7 02:56:24.514: INFO: Event MODIFIED observed for Job e2e-g4p8l in namespace job-6299 with labels: map[e2e-job-label:e2e-g4p8l] and annotations: map[batch.kubernetes.io/job-tracking:] + Mar 7 02:56:24.514: INFO: Event MODIFIED found for Job e2e-g4p8l in namespace job-6299 with labels: map[e2e-g4p8l:patched e2e-job-label:e2e-g4p8l] and annotations: map[batch.kubernetes.io/job-tracking:] + STEP: Updating the job 03/07/23 02:56:24.514 + STEP: Watching for Job to be updated 03/07/23 02:56:24.52 + Mar 7 02:56:24.521: INFO: Event MODIFIED found for Job e2e-g4p8l in namespace job-6299 with labels: map[e2e-g4p8l:patched e2e-job-label:e2e-g4p8l] and annotations: map[batch.kubernetes.io/job-tracking: updated:true] + Mar 7 02:56:24.521: INFO: Found Job annotations: map[string]string{"batch.kubernetes.io/job-tracking":"", "updated":"true"} + STEP: Listing all Jobs with LabelSelector 03/07/23 02:56:24.521 + Mar 7 02:56:24.524: INFO: Job: e2e-g4p8l as labels: map[e2e-g4p8l:patched e2e-job-label:e2e-g4p8l] + STEP: Waiting for job to complete 03/07/23 02:56:24.524 + STEP: Delete a job collection with a labelselector 03/07/23 02:56:32.528 + STEP: Watching for Job to be deleted 03/07/23 02:56:32.534 + Mar 7 02:56:32.536: INFO: Event MODIFIED observed for Job e2e-g4p8l in namespace job-6299 with labels: map[e2e-g4p8l:patched e2e-job-label:e2e-g4p8l] and annotations: map[batch.kubernetes.io/job-tracking: updated:true] + Mar 7 02:56:32.536: INFO: Event MODIFIED observed for Job e2e-g4p8l in namespace job-6299 with labels: map[e2e-g4p8l:patched e2e-job-label:e2e-g4p8l] and annotations: map[batch.kubernetes.io/job-tracking: updated:true] + Mar 7 02:56:32.536: INFO: Event MODIFIED observed for Job e2e-g4p8l in namespace job-6299 with labels: map[e2e-g4p8l:patched e2e-job-label:e2e-g4p8l] and annotations: map[batch.kubernetes.io/job-tracking: updated:true] + Mar 7 02:56:32.536: INFO: Event MODIFIED observed 
for Job e2e-g4p8l in namespace job-6299 with labels: map[e2e-g4p8l:patched e2e-job-label:e2e-g4p8l] and annotations: map[batch.kubernetes.io/job-tracking: updated:true] + Mar 7 02:56:32.536: INFO: Event MODIFIED observed for Job e2e-g4p8l in namespace job-6299 with labels: map[e2e-g4p8l:patched e2e-job-label:e2e-g4p8l] and annotations: map[batch.kubernetes.io/job-tracking: updated:true] + Mar 7 02:56:32.536: INFO: Event DELETED found for Job e2e-g4p8l in namespace job-6299 with labels: map[e2e-g4p8l:patched e2e-job-label:e2e-g4p8l] and annotations: map[batch.kubernetes.io/job-tracking: updated:true] + STEP: Relist jobs to confirm deletion 03/07/23 02:56:32.536 + [AfterEach] [sig-apps] Job + test/e2e/framework/framework.go:187 + Mar 7 02:56:32.542: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + STEP: Destroying namespace "job-6299" for this suite. 03/07/23 02:56:32.545 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-storage] Downward API volume + should update labels on modification [NodeConformance] [Conformance] + test/e2e/common/storage/downwardapi_volume.go:129 +[BeforeEach] [sig-storage] Downward API volume + test/e2e/framework/framework.go:186 +STEP: Creating a kubernetes client 03/07/23 02:56:32.554 +Mar 7 02:56:32.554: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 +STEP: Building a namespace api object, basename downward-api 03/07/23 02:56:32.555 +STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 02:56:32.574 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 02:56:32.581 +[BeforeEach] [sig-storage] Downward API volume + test/e2e/common/storage/downwardapi_volume.go:43 +[It] should update labels on modification [NodeConformance] [Conformance] + test/e2e/common/storage/downwardapi_volume.go:129 +STEP: Creating the pod 03/07/23 02:56:32.583 +Mar 7 02:56:32.596: INFO: Waiting up to 5m0s for pod "labelsupdate4a696659-e340-4503-9194-274f48507997" in namespace "downward-api-5071" to be "running and ready" +Mar 7 02:56:32.604: INFO: Pod "labelsupdate4a696659-e340-4503-9194-274f48507997": Phase="Pending", Reason="", readiness=false. Elapsed: 7.260791ms +Mar 7 02:56:32.604: INFO: The phase of Pod labelsupdate4a696659-e340-4503-9194-274f48507997 is Pending, waiting for it to be Running (with Ready = true) +Mar 7 02:56:34.607: INFO: Pod "labelsupdate4a696659-e340-4503-9194-274f48507997": Phase="Running", Reason="", readiness=true. Elapsed: 2.010700913s +Mar 7 02:56:34.607: INFO: The phase of Pod labelsupdate4a696659-e340-4503-9194-274f48507997 is Running (Ready = true) +Mar 7 02:56:34.607: INFO: Pod "labelsupdate4a696659-e340-4503-9194-274f48507997" satisfied condition "running and ready" +Mar 7 02:56:35.131: INFO: Successfully updated pod "labelsupdate4a696659-e340-4503-9194-274f48507997" +[AfterEach] [sig-storage] Downward API volume + test/e2e/framework/framework.go:187 +Mar 7 02:56:37.143: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "downward-api-5071" for this suite. 
03/07/23 02:56:37.147 +{"msg":"PASSED [sig-storage] Downward API volume should update labels on modification [NodeConformance] [Conformance]","completed":78,"skipped":1333,"failed":0} +------------------------------ +• [4.598 seconds] +[sig-storage] Downward API volume +test/e2e/common/storage/framework.go:23 + should update labels on modification [NodeConformance] [Conformance] + test/e2e/common/storage/downwardapi_volume.go:129 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-storage] Downward API volume + test/e2e/framework/framework.go:186 + STEP: Creating a kubernetes client 03/07/23 02:56:32.554 + Mar 7 02:56:32.554: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 + STEP: Building a namespace api object, basename downward-api 03/07/23 02:56:32.555 + STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 02:56:32.574 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 02:56:32.581 + [BeforeEach] [sig-storage] Downward API volume + test/e2e/common/storage/downwardapi_volume.go:43 + [It] should update labels on modification [NodeConformance] [Conformance] + test/e2e/common/storage/downwardapi_volume.go:129 + STEP: Creating the pod 03/07/23 02:56:32.583 + Mar 7 02:56:32.596: INFO: Waiting up to 5m0s for pod "labelsupdate4a696659-e340-4503-9194-274f48507997" in namespace "downward-api-5071" to be "running and ready" + Mar 7 02:56:32.604: INFO: Pod "labelsupdate4a696659-e340-4503-9194-274f48507997": Phase="Pending", Reason="", readiness=false. Elapsed: 7.260791ms + Mar 7 02:56:32.604: INFO: The phase of Pod labelsupdate4a696659-e340-4503-9194-274f48507997 is Pending, waiting for it to be Running (with Ready = true) + Mar 7 02:56:34.607: INFO: Pod "labelsupdate4a696659-e340-4503-9194-274f48507997": Phase="Running", Reason="", readiness=true. Elapsed: 2.010700913s + Mar 7 02:56:34.607: INFO: The phase of Pod labelsupdate4a696659-e340-4503-9194-274f48507997 is Running (Ready = true) + Mar 7 02:56:34.607: INFO: Pod "labelsupdate4a696659-e340-4503-9194-274f48507997" satisfied condition "running and ready" + Mar 7 02:56:35.131: INFO: Successfully updated pod "labelsupdate4a696659-e340-4503-9194-274f48507997" + [AfterEach] [sig-storage] Downward API volume + test/e2e/framework/framework.go:187 + Mar 7 02:56:37.143: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + STEP: Destroying namespace "downward-api-5071" for this suite. 
03/07/23 02:56:37.147 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-node] NoExecuteTaintManager Multiple Pods [Serial] + evicts pods with minTolerationSeconds [Disruptive] [Conformance] + test/e2e/node/taints.go:420 +[BeforeEach] [sig-node] NoExecuteTaintManager Multiple Pods [Serial] + test/e2e/framework/framework.go:186 +STEP: Creating a kubernetes client 03/07/23 02:56:37.154 +Mar 7 02:56:37.155: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 +STEP: Building a namespace api object, basename taint-multiple-pods 03/07/23 02:56:37.155 +STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 02:56:37.167 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 02:56:37.169 +[BeforeEach] [sig-node] NoExecuteTaintManager Multiple Pods [Serial] + test/e2e/node/taints.go:348 +Mar 7 02:56:37.171: INFO: Waiting up to 1m0s for all nodes to be ready +Mar 7 02:57:37.209: INFO: Waiting for terminating namespaces to be deleted... +[It] evicts pods with minTolerationSeconds [Disruptive] [Conformance] + test/e2e/node/taints.go:420 +Mar 7 02:57:37.213: INFO: Starting informer... +STEP: Starting pods... 03/07/23 02:57:37.213 +Mar 7 02:57:37.426: INFO: Pod1 is running on node-2. Tainting Node +Mar 7 02:57:37.634: INFO: Waiting up to 5m0s for pod "taint-eviction-b1" in namespace "taint-multiple-pods-7166" to be "running" +Mar 7 02:57:37.636: INFO: Pod "taint-eviction-b1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.576534ms +Mar 7 02:57:39.640: INFO: Pod "taint-eviction-b1": Phase="Running", Reason="", readiness=true. Elapsed: 2.006320186s +Mar 7 02:57:39.640: INFO: Pod "taint-eviction-b1" satisfied condition "running" +Mar 7 02:57:39.640: INFO: Waiting up to 5m0s for pod "taint-eviction-b2" in namespace "taint-multiple-pods-7166" to be "running" +Mar 7 02:57:39.642: INFO: Pod "taint-eviction-b2": Phase="Running", Reason="", readiness=true. Elapsed: 2.0716ms +Mar 7 02:57:39.642: INFO: Pod "taint-eviction-b2" satisfied condition "running" +Mar 7 02:57:39.642: INFO: Pod2 is running on node-2. Tainting Node +STEP: Trying to apply a taint on the Node 03/07/23 02:57:39.642 +STEP: verifying the node has the taint kubernetes.io/e2e-evict-taint-key=evictTaintVal:NoExecute 03/07/23 02:57:39.672 +STEP: Waiting for Pod1 and Pod2 to be deleted 03/07/23 02:57:39.678 +Mar 7 02:57:45.547: INFO: Noticed Pod "taint-eviction-b1" gets evicted. +Mar 7 02:58:05.573: INFO: Noticed Pod "taint-eviction-b2" gets evicted. +STEP: verifying the node doesn't have the taint kubernetes.io/e2e-evict-taint-key=evictTaintVal:NoExecute 03/07/23 02:58:05.584 +[AfterEach] [sig-node] NoExecuteTaintManager Multiple Pods [Serial] + test/e2e/framework/framework.go:187 +Mar 7 02:58:05.587: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "taint-multiple-pods-7166" for this suite. 
03/07/23 02:58:05.592 +{"msg":"PASSED [sig-node] NoExecuteTaintManager Multiple Pods [Serial] evicts pods with minTolerationSeconds [Disruptive] [Conformance]","completed":79,"skipped":1411,"failed":0} +------------------------------ +• [SLOW TEST] [88.454 seconds] +[sig-node] NoExecuteTaintManager Multiple Pods [Serial] +test/e2e/node/framework.go:23 + evicts pods with minTolerationSeconds [Disruptive] [Conformance] + test/e2e/node/taints.go:420 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-node] NoExecuteTaintManager Multiple Pods [Serial] + test/e2e/framework/framework.go:186 + STEP: Creating a kubernetes client 03/07/23 02:56:37.154 + Mar 7 02:56:37.155: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 + STEP: Building a namespace api object, basename taint-multiple-pods 03/07/23 02:56:37.155 + STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 02:56:37.167 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 02:56:37.169 + [BeforeEach] [sig-node] NoExecuteTaintManager Multiple Pods [Serial] + test/e2e/node/taints.go:348 + Mar 7 02:56:37.171: INFO: Waiting up to 1m0s for all nodes to be ready + Mar 7 02:57:37.209: INFO: Waiting for terminating namespaces to be deleted... + [It] evicts pods with minTolerationSeconds [Disruptive] [Conformance] + test/e2e/node/taints.go:420 + Mar 7 02:57:37.213: INFO: Starting informer... + STEP: Starting pods... 03/07/23 02:57:37.213 + Mar 7 02:57:37.426: INFO: Pod1 is running on node-2. Tainting Node + Mar 7 02:57:37.634: INFO: Waiting up to 5m0s for pod "taint-eviction-b1" in namespace "taint-multiple-pods-7166" to be "running" + Mar 7 02:57:37.636: INFO: Pod "taint-eviction-b1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.576534ms + Mar 7 02:57:39.640: INFO: Pod "taint-eviction-b1": Phase="Running", Reason="", readiness=true. Elapsed: 2.006320186s + Mar 7 02:57:39.640: INFO: Pod "taint-eviction-b1" satisfied condition "running" + Mar 7 02:57:39.640: INFO: Waiting up to 5m0s for pod "taint-eviction-b2" in namespace "taint-multiple-pods-7166" to be "running" + Mar 7 02:57:39.642: INFO: Pod "taint-eviction-b2": Phase="Running", Reason="", readiness=true. Elapsed: 2.0716ms + Mar 7 02:57:39.642: INFO: Pod "taint-eviction-b2" satisfied condition "running" + Mar 7 02:57:39.642: INFO: Pod2 is running on node-2. Tainting Node + STEP: Trying to apply a taint on the Node 03/07/23 02:57:39.642 + STEP: verifying the node has the taint kubernetes.io/e2e-evict-taint-key=evictTaintVal:NoExecute 03/07/23 02:57:39.672 + STEP: Waiting for Pod1 and Pod2 to be deleted 03/07/23 02:57:39.678 + Mar 7 02:57:45.547: INFO: Noticed Pod "taint-eviction-b1" gets evicted. + Mar 7 02:58:05.573: INFO: Noticed Pod "taint-eviction-b2" gets evicted. + STEP: verifying the node doesn't have the taint kubernetes.io/e2e-evict-taint-key=evictTaintVal:NoExecute 03/07/23 02:58:05.584 + [AfterEach] [sig-node] NoExecuteTaintManager Multiple Pods [Serial] + test/e2e/framework/framework.go:187 + Mar 7 02:58:05.587: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + STEP: Destroying namespace "taint-multiple-pods-7166" for this suite. 
03/07/23 02:58:05.592 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-node] Pods + should support retrieving logs from the container over websockets [NodeConformance] [Conformance] + test/e2e/common/node/pods.go:617 +[BeforeEach] [sig-node] Pods + test/e2e/framework/framework.go:186 +STEP: Creating a kubernetes client 03/07/23 02:58:05.611 +Mar 7 02:58:05.611: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 +STEP: Building a namespace api object, basename pods 03/07/23 02:58:05.612 +STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 02:58:05.64 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 02:58:05.646 +[BeforeEach] [sig-node] Pods + test/e2e/common/node/pods.go:193 +[It] should support retrieving logs from the container over websockets [NodeConformance] [Conformance] + test/e2e/common/node/pods.go:617 +Mar 7 02:58:05.648: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 +STEP: creating the pod 03/07/23 02:58:05.649 +STEP: submitting the pod to kubernetes 03/07/23 02:58:05.649 +Mar 7 02:58:05.668: INFO: Waiting up to 5m0s for pod "pod-logs-websocket-252402bc-b2e1-46cf-976b-5bc570c8c019" in namespace "pods-2498" to be "running and ready" +Mar 7 02:58:05.686: INFO: Pod "pod-logs-websocket-252402bc-b2e1-46cf-976b-5bc570c8c019": Phase="Pending", Reason="", readiness=false. Elapsed: 17.858129ms +Mar 7 02:58:05.686: INFO: The phase of Pod pod-logs-websocket-252402bc-b2e1-46cf-976b-5bc570c8c019 is Pending, waiting for it to be Running (with Ready = true) +Mar 7 02:58:07.689: INFO: Pod "pod-logs-websocket-252402bc-b2e1-46cf-976b-5bc570c8c019": Phase="Pending", Reason="", readiness=false. Elapsed: 2.02123722s +Mar 7 02:58:07.689: INFO: The phase of Pod pod-logs-websocket-252402bc-b2e1-46cf-976b-5bc570c8c019 is Pending, waiting for it to be Running (with Ready = true) +Mar 7 02:58:09.689: INFO: Pod "pod-logs-websocket-252402bc-b2e1-46cf-976b-5bc570c8c019": Phase="Pending", Reason="", readiness=false. Elapsed: 4.021436008s +Mar 7 02:58:09.689: INFO: The phase of Pod pod-logs-websocket-252402bc-b2e1-46cf-976b-5bc570c8c019 is Pending, waiting for it to be Running (with Ready = true) +Mar 7 02:58:11.689: INFO: Pod "pod-logs-websocket-252402bc-b2e1-46cf-976b-5bc570c8c019": Phase="Running", Reason="", readiness=true. Elapsed: 6.021286135s +Mar 7 02:58:11.689: INFO: The phase of Pod pod-logs-websocket-252402bc-b2e1-46cf-976b-5bc570c8c019 is Running (Ready = true) +Mar 7 02:58:11.689: INFO: Pod "pod-logs-websocket-252402bc-b2e1-46cf-976b-5bc570c8c019" satisfied condition "running and ready" +[AfterEach] [sig-node] Pods + test/e2e/framework/framework.go:187 +Mar 7 02:58:11.711: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "pods-2498" for this suite. 
03/07/23 02:58:11.715 +{"msg":"PASSED [sig-node] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance]","completed":80,"skipped":1430,"failed":0} +------------------------------ +• [SLOW TEST] [6.109 seconds] +[sig-node] Pods +test/e2e/common/node/framework.go:23 + should support retrieving logs from the container over websockets [NodeConformance] [Conformance] + test/e2e/common/node/pods.go:617 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-node] Pods + test/e2e/framework/framework.go:186 + STEP: Creating a kubernetes client 03/07/23 02:58:05.611 + Mar 7 02:58:05.611: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 + STEP: Building a namespace api object, basename pods 03/07/23 02:58:05.612 + STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 02:58:05.64 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 02:58:05.646 + [BeforeEach] [sig-node] Pods + test/e2e/common/node/pods.go:193 + [It] should support retrieving logs from the container over websockets [NodeConformance] [Conformance] + test/e2e/common/node/pods.go:617 + Mar 7 02:58:05.648: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 + STEP: creating the pod 03/07/23 02:58:05.649 + STEP: submitting the pod to kubernetes 03/07/23 02:58:05.649 + Mar 7 02:58:05.668: INFO: Waiting up to 5m0s for pod "pod-logs-websocket-252402bc-b2e1-46cf-976b-5bc570c8c019" in namespace "pods-2498" to be "running and ready" + Mar 7 02:58:05.686: INFO: Pod "pod-logs-websocket-252402bc-b2e1-46cf-976b-5bc570c8c019": Phase="Pending", Reason="", readiness=false. Elapsed: 17.858129ms + Mar 7 02:58:05.686: INFO: The phase of Pod pod-logs-websocket-252402bc-b2e1-46cf-976b-5bc570c8c019 is Pending, waiting for it to be Running (with Ready = true) + Mar 7 02:58:07.689: INFO: Pod "pod-logs-websocket-252402bc-b2e1-46cf-976b-5bc570c8c019": Phase="Pending", Reason="", readiness=false. Elapsed: 2.02123722s + Mar 7 02:58:07.689: INFO: The phase of Pod pod-logs-websocket-252402bc-b2e1-46cf-976b-5bc570c8c019 is Pending, waiting for it to be Running (with Ready = true) + Mar 7 02:58:09.689: INFO: Pod "pod-logs-websocket-252402bc-b2e1-46cf-976b-5bc570c8c019": Phase="Pending", Reason="", readiness=false. Elapsed: 4.021436008s + Mar 7 02:58:09.689: INFO: The phase of Pod pod-logs-websocket-252402bc-b2e1-46cf-976b-5bc570c8c019 is Pending, waiting for it to be Running (with Ready = true) + Mar 7 02:58:11.689: INFO: Pod "pod-logs-websocket-252402bc-b2e1-46cf-976b-5bc570c8c019": Phase="Running", Reason="", readiness=true. Elapsed: 6.021286135s + Mar 7 02:58:11.689: INFO: The phase of Pod pod-logs-websocket-252402bc-b2e1-46cf-976b-5bc570c8c019 is Running (Ready = true) + Mar 7 02:58:11.689: INFO: Pod "pod-logs-websocket-252402bc-b2e1-46cf-976b-5bc570c8c019" satisfied condition "running and ready" + [AfterEach] [sig-node] Pods + test/e2e/framework/framework.go:187 + Mar 7 02:58:11.711: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + STEP: Destroying namespace "pods-2498" for this suite. 
03/07/23 02:58:11.715 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] + should perform canary updates and phased rolling updates of template modifications [Conformance] + test/e2e/apps/statefulset.go:315 +[BeforeEach] [sig-apps] StatefulSet + test/e2e/framework/framework.go:186 +STEP: Creating a kubernetes client 03/07/23 02:58:11.721 +Mar 7 02:58:11.721: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 +STEP: Building a namespace api object, basename statefulset 03/07/23 02:58:11.722 +STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 02:58:11.736 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 02:58:11.738 +[BeforeEach] [sig-apps] StatefulSet + test/e2e/apps/statefulset.go:96 +[BeforeEach] Basic StatefulSet functionality [StatefulSetBasic] + test/e2e/apps/statefulset.go:111 +STEP: Creating service test in namespace statefulset-6328 03/07/23 02:58:11.741 +[It] should perform canary updates and phased rolling updates of template modifications [Conformance] + test/e2e/apps/statefulset.go:315 +STEP: Creating a new StatefulSet 03/07/23 02:58:11.746 +Mar 7 02:58:11.755: INFO: Found 0 stateful pods, waiting for 3 +Mar 7 02:58:21.759: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true +Mar 7 02:58:21.759: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true +Mar 7 02:58:21.759: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true +STEP: Updating stateful set template: update image from registry.k8s.io/e2e-test-images/httpd:2.4.38-2 to registry.k8s.io/e2e-test-images/httpd:2.4.39-2 03/07/23 02:58:21.766 +Mar 7 02:58:21.783: INFO: Updating stateful set ss2 +STEP: Creating a new revision 03/07/23 02:58:21.783 +STEP: Not applying an update when the partition is greater than the number of replicas 03/07/23 02:58:31.8 +STEP: Performing a canary update 03/07/23 02:58:31.8 +Mar 7 02:58:31.819: INFO: Updating stateful set ss2 +Mar 7 02:58:31.825: INFO: Waiting for Pod statefulset-6328/ss2-2 to have revision ss2-5d8c6ff87d update revision ss2-6557876d87 +STEP: Restoring Pods to the correct revision when they are deleted 03/07/23 02:58:41.83 +Mar 7 02:58:41.871: INFO: Found 2 stateful pods, waiting for 3 +Mar 7 02:58:51.875: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true +Mar 7 02:58:51.875: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true +Mar 7 02:58:51.875: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true +STEP: Performing a phased rolling update 03/07/23 02:58:51.88 +Mar 7 02:58:51.897: INFO: Updating stateful set ss2 +Mar 7 02:58:51.901: INFO: Waiting for Pod statefulset-6328/ss2-1 to have revision ss2-5d8c6ff87d update revision ss2-6557876d87 +Mar 7 02:59:01.925: INFO: Updating stateful set ss2 +Mar 7 02:59:01.930: INFO: Waiting for StatefulSet statefulset-6328/ss2 to complete update +Mar 7 02:59:01.930: INFO: Waiting for Pod statefulset-6328/ss2-0 to have revision ss2-5d8c6ff87d update revision ss2-6557876d87 +[AfterEach] Basic StatefulSet functionality [StatefulSetBasic] + test/e2e/apps/statefulset.go:122 +Mar 7 02:59:11.936: INFO: Deleting all statefulset in ns statefulset-6328 +Mar 7 02:59:11.938: INFO: Scaling statefulset ss2 to 0 
+Mar 7 02:59:21.985: INFO: Waiting for statefulset status.replicas updated to 0 +Mar 7 02:59:21.987: INFO: Deleting statefulset ss2 +[AfterEach] [sig-apps] StatefulSet + test/e2e/framework/framework.go:187 +Mar 7 02:59:22.020: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "statefulset-6328" for this suite. 03/07/23 02:59:22.03 +{"msg":"PASSED [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance]","completed":81,"skipped":1470,"failed":0} +------------------------------ +• [SLOW TEST] [70.314 seconds] +[sig-apps] StatefulSet +test/e2e/apps/framework.go:23 + Basic StatefulSet functionality [StatefulSetBasic] + test/e2e/apps/statefulset.go:101 + should perform canary updates and phased rolling updates of template modifications [Conformance] + test/e2e/apps/statefulset.go:315 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-apps] StatefulSet + test/e2e/framework/framework.go:186 + STEP: Creating a kubernetes client 03/07/23 02:58:11.721 + Mar 7 02:58:11.721: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 + STEP: Building a namespace api object, basename statefulset 03/07/23 02:58:11.722 + STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 02:58:11.736 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 02:58:11.738 + [BeforeEach] [sig-apps] StatefulSet + test/e2e/apps/statefulset.go:96 + [BeforeEach] Basic StatefulSet functionality [StatefulSetBasic] + test/e2e/apps/statefulset.go:111 + STEP: Creating service test in namespace statefulset-6328 03/07/23 02:58:11.741 + [It] should perform canary updates and phased rolling updates of template modifications [Conformance] + test/e2e/apps/statefulset.go:315 + STEP: Creating a new StatefulSet 03/07/23 02:58:11.746 + Mar 7 02:58:11.755: INFO: Found 0 stateful pods, waiting for 3 + Mar 7 02:58:21.759: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true + Mar 7 02:58:21.759: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true + Mar 7 02:58:21.759: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true + STEP: Updating stateful set template: update image from registry.k8s.io/e2e-test-images/httpd:2.4.38-2 to registry.k8s.io/e2e-test-images/httpd:2.4.39-2 03/07/23 02:58:21.766 + Mar 7 02:58:21.783: INFO: Updating stateful set ss2 + STEP: Creating a new revision 03/07/23 02:58:21.783 + STEP: Not applying an update when the partition is greater than the number of replicas 03/07/23 02:58:31.8 + STEP: Performing a canary update 03/07/23 02:58:31.8 + Mar 7 02:58:31.819: INFO: Updating stateful set ss2 + Mar 7 02:58:31.825: INFO: Waiting for Pod statefulset-6328/ss2-2 to have revision ss2-5d8c6ff87d update revision ss2-6557876d87 + STEP: Restoring Pods to the correct revision when they are deleted 03/07/23 02:58:41.83 + Mar 7 02:58:41.871: INFO: Found 2 stateful pods, waiting for 3 + Mar 7 02:58:51.875: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true + Mar 7 02:58:51.875: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true + Mar 7 02:58:51.875: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true + STEP: Performing a phased rolling update 03/07/23 02:58:51.88 + Mar 7 02:58:51.897: INFO: Updating stateful set 
ss2 + Mar 7 02:58:51.901: INFO: Waiting for Pod statefulset-6328/ss2-1 to have revision ss2-5d8c6ff87d update revision ss2-6557876d87 + Mar 7 02:59:01.925: INFO: Updating stateful set ss2 + Mar 7 02:59:01.930: INFO: Waiting for StatefulSet statefulset-6328/ss2 to complete update + Mar 7 02:59:01.930: INFO: Waiting for Pod statefulset-6328/ss2-0 to have revision ss2-5d8c6ff87d update revision ss2-6557876d87 + [AfterEach] Basic StatefulSet functionality [StatefulSetBasic] + test/e2e/apps/statefulset.go:122 + Mar 7 02:59:11.936: INFO: Deleting all statefulset in ns statefulset-6328 + Mar 7 02:59:11.938: INFO: Scaling statefulset ss2 to 0 + Mar 7 02:59:21.985: INFO: Waiting for statefulset status.replicas updated to 0 + Mar 7 02:59:21.987: INFO: Deleting statefulset ss2 + [AfterEach] [sig-apps] StatefulSet + test/e2e/framework/framework.go:187 + Mar 7 02:59:22.020: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + STEP: Destroying namespace "statefulset-6328" for this suite. 03/07/23 02:59:22.03 + << End Captured GinkgoWriter Output +------------------------------ +SS +------------------------------ +[sig-network] DNS + should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance] + test/e2e/network/dns.go:193 +[BeforeEach] [sig-network] DNS + test/e2e/framework/framework.go:186 +STEP: Creating a kubernetes client 03/07/23 02:59:22.035 +Mar 7 02:59:22.035: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 +STEP: Building a namespace api object, basename dns 03/07/23 02:59:22.036 +STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 02:59:22.052 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 02:59:22.054 +[It] should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance] + test/e2e/network/dns.go:193 +STEP: Creating a test headless service 03/07/23 02:59:22.056 +STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-7439 A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-7439;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-7439 A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-7439;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-7439.svc A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-7439.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-7439.svc A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-7439.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-7439.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-7439.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-7439.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-7439.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-7439.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-7439.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-7439.svc SRV)" && test -n "$$check" && echo OK > 
/results/wheezy_tcp@_http._tcp.test-service-2.dns-7439.svc;check="$$(dig +notcp +noall +answer +search 55.241.108.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.108.241.55_udp@PTR;check="$$(dig +tcp +noall +answer +search 55.241.108.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.108.241.55_tcp@PTR;sleep 1; done + 03/07/23 02:59:22.077 +STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-7439 A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-7439;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-7439 A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-7439;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-7439.svc A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-7439.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-7439.svc A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-7439.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-7439.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-7439.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-7439.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-7439.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-7439.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-7439.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-7439.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-7439.svc;check="$$(dig +notcp +noall +answer +search 55.241.108.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.108.241.55_udp@PTR;check="$$(dig +tcp +noall +answer +search 55.241.108.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.108.241.55_tcp@PTR;sleep 1; done + 03/07/23 02:59:22.077 +STEP: creating a pod to probe DNS 03/07/23 02:59:22.077 +STEP: submitting the pod to kubernetes 03/07/23 02:59:22.077 +Mar 7 02:59:22.090: INFO: Waiting up to 15m0s for pod "dns-test-ad676870-0823-47ff-80d0-75f43491f391" in namespace "dns-7439" to be "running" +Mar 7 02:59:22.094: INFO: Pod "dns-test-ad676870-0823-47ff-80d0-75f43491f391": Phase="Pending", Reason="", readiness=false. Elapsed: 4.103363ms +Mar 7 02:59:24.097: INFO: Pod "dns-test-ad676870-0823-47ff-80d0-75f43491f391": Phase="Running", Reason="", readiness=true. 
Elapsed: 2.007306367s +Mar 7 02:59:24.097: INFO: Pod "dns-test-ad676870-0823-47ff-80d0-75f43491f391" satisfied condition "running" +STEP: retrieving the pod 03/07/23 02:59:24.097 +STEP: looking for the results for each expected name from probers 03/07/23 02:59:24.1 +Mar 7 02:59:24.103: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-7439/dns-test-ad676870-0823-47ff-80d0-75f43491f391: the server could not find the requested resource (get pods dns-test-ad676870-0823-47ff-80d0-75f43491f391) +Mar 7 02:59:24.105: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-7439/dns-test-ad676870-0823-47ff-80d0-75f43491f391: the server could not find the requested resource (get pods dns-test-ad676870-0823-47ff-80d0-75f43491f391) +Mar 7 02:59:24.109: INFO: Unable to read wheezy_udp@dns-test-service.dns-7439 from pod dns-7439/dns-test-ad676870-0823-47ff-80d0-75f43491f391: the server could not find the requested resource (get pods dns-test-ad676870-0823-47ff-80d0-75f43491f391) +Mar 7 02:59:24.111: INFO: Unable to read wheezy_tcp@dns-test-service.dns-7439 from pod dns-7439/dns-test-ad676870-0823-47ff-80d0-75f43491f391: the server could not find the requested resource (get pods dns-test-ad676870-0823-47ff-80d0-75f43491f391) +Mar 7 02:59:24.113: INFO: Unable to read wheezy_udp@dns-test-service.dns-7439.svc from pod dns-7439/dns-test-ad676870-0823-47ff-80d0-75f43491f391: the server could not find the requested resource (get pods dns-test-ad676870-0823-47ff-80d0-75f43491f391) +Mar 7 02:59:24.115: INFO: Unable to read wheezy_tcp@dns-test-service.dns-7439.svc from pod dns-7439/dns-test-ad676870-0823-47ff-80d0-75f43491f391: the server could not find the requested resource (get pods dns-test-ad676870-0823-47ff-80d0-75f43491f391) +Mar 7 02:59:24.117: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-7439.svc from pod dns-7439/dns-test-ad676870-0823-47ff-80d0-75f43491f391: the server could not find the requested resource (get pods dns-test-ad676870-0823-47ff-80d0-75f43491f391) +Mar 7 02:59:24.120: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-7439.svc from pod dns-7439/dns-test-ad676870-0823-47ff-80d0-75f43491f391: the server could not find the requested resource (get pods dns-test-ad676870-0823-47ff-80d0-75f43491f391) +Mar 7 02:59:24.130: INFO: Unable to read jessie_udp@dns-test-service from pod dns-7439/dns-test-ad676870-0823-47ff-80d0-75f43491f391: the server could not find the requested resource (get pods dns-test-ad676870-0823-47ff-80d0-75f43491f391) +Mar 7 02:59:24.132: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-7439/dns-test-ad676870-0823-47ff-80d0-75f43491f391: the server could not find the requested resource (get pods dns-test-ad676870-0823-47ff-80d0-75f43491f391) +Mar 7 02:59:24.134: INFO: Unable to read jessie_udp@dns-test-service.dns-7439 from pod dns-7439/dns-test-ad676870-0823-47ff-80d0-75f43491f391: the server could not find the requested resource (get pods dns-test-ad676870-0823-47ff-80d0-75f43491f391) +Mar 7 02:59:24.136: INFO: Unable to read jessie_tcp@dns-test-service.dns-7439 from pod dns-7439/dns-test-ad676870-0823-47ff-80d0-75f43491f391: the server could not find the requested resource (get pods dns-test-ad676870-0823-47ff-80d0-75f43491f391) +Mar 7 02:59:24.138: INFO: Unable to read jessie_udp@dns-test-service.dns-7439.svc from pod dns-7439/dns-test-ad676870-0823-47ff-80d0-75f43491f391: the server could not find the requested resource (get pods dns-test-ad676870-0823-47ff-80d0-75f43491f391) +Mar 7 02:59:24.140: INFO: 
Unable to read jessie_tcp@dns-test-service.dns-7439.svc from pod dns-7439/dns-test-ad676870-0823-47ff-80d0-75f43491f391: the server could not find the requested resource (get pods dns-test-ad676870-0823-47ff-80d0-75f43491f391) +Mar 7 02:59:24.143: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-7439.svc from pod dns-7439/dns-test-ad676870-0823-47ff-80d0-75f43491f391: the server could not find the requested resource (get pods dns-test-ad676870-0823-47ff-80d0-75f43491f391) +Mar 7 02:59:24.145: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-7439.svc from pod dns-7439/dns-test-ad676870-0823-47ff-80d0-75f43491f391: the server could not find the requested resource (get pods dns-test-ad676870-0823-47ff-80d0-75f43491f391) +Mar 7 02:59:24.154: INFO: Lookups using dns-7439/dns-test-ad676870-0823-47ff-80d0-75f43491f391 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-7439 wheezy_tcp@dns-test-service.dns-7439 wheezy_udp@dns-test-service.dns-7439.svc wheezy_tcp@dns-test-service.dns-7439.svc wheezy_udp@_http._tcp.dns-test-service.dns-7439.svc wheezy_tcp@_http._tcp.dns-test-service.dns-7439.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-7439 jessie_tcp@dns-test-service.dns-7439 jessie_udp@dns-test-service.dns-7439.svc jessie_tcp@dns-test-service.dns-7439.svc jessie_udp@_http._tcp.dns-test-service.dns-7439.svc jessie_tcp@_http._tcp.dns-test-service.dns-7439.svc] + +Mar 7 02:59:29.157: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-7439/dns-test-ad676870-0823-47ff-80d0-75f43491f391: the server could not find the requested resource (get pods dns-test-ad676870-0823-47ff-80d0-75f43491f391) +Mar 7 02:59:29.161: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-7439/dns-test-ad676870-0823-47ff-80d0-75f43491f391: the server could not find the requested resource (get pods dns-test-ad676870-0823-47ff-80d0-75f43491f391) +Mar 7 02:59:29.163: INFO: Unable to read wheezy_udp@dns-test-service.dns-7439 from pod dns-7439/dns-test-ad676870-0823-47ff-80d0-75f43491f391: the server could not find the requested resource (get pods dns-test-ad676870-0823-47ff-80d0-75f43491f391) +Mar 7 02:59:29.166: INFO: Unable to read wheezy_tcp@dns-test-service.dns-7439 from pod dns-7439/dns-test-ad676870-0823-47ff-80d0-75f43491f391: the server could not find the requested resource (get pods dns-test-ad676870-0823-47ff-80d0-75f43491f391) +Mar 7 02:59:29.170: INFO: Unable to read wheezy_udp@dns-test-service.dns-7439.svc from pod dns-7439/dns-test-ad676870-0823-47ff-80d0-75f43491f391: the server could not find the requested resource (get pods dns-test-ad676870-0823-47ff-80d0-75f43491f391) +Mar 7 02:59:29.172: INFO: Unable to read wheezy_tcp@dns-test-service.dns-7439.svc from pod dns-7439/dns-test-ad676870-0823-47ff-80d0-75f43491f391: the server could not find the requested resource (get pods dns-test-ad676870-0823-47ff-80d0-75f43491f391) +Mar 7 02:59:29.174: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-7439.svc from pod dns-7439/dns-test-ad676870-0823-47ff-80d0-75f43491f391: the server could not find the requested resource (get pods dns-test-ad676870-0823-47ff-80d0-75f43491f391) +Mar 7 02:59:29.179: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-7439.svc from pod dns-7439/dns-test-ad676870-0823-47ff-80d0-75f43491f391: the server could not find the requested resource (get pods dns-test-ad676870-0823-47ff-80d0-75f43491f391) +Mar 7 02:59:29.193: INFO: Unable 
to read jessie_udp@dns-test-service from pod dns-7439/dns-test-ad676870-0823-47ff-80d0-75f43491f391: the server could not find the requested resource (get pods dns-test-ad676870-0823-47ff-80d0-75f43491f391) +Mar 7 02:59:29.196: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-7439/dns-test-ad676870-0823-47ff-80d0-75f43491f391: the server could not find the requested resource (get pods dns-test-ad676870-0823-47ff-80d0-75f43491f391) +Mar 7 02:59:29.198: INFO: Unable to read jessie_udp@dns-test-service.dns-7439 from pod dns-7439/dns-test-ad676870-0823-47ff-80d0-75f43491f391: the server could not find the requested resource (get pods dns-test-ad676870-0823-47ff-80d0-75f43491f391) +Mar 7 02:59:29.208: INFO: Unable to read jessie_tcp@dns-test-service.dns-7439 from pod dns-7439/dns-test-ad676870-0823-47ff-80d0-75f43491f391: the server could not find the requested resource (get pods dns-test-ad676870-0823-47ff-80d0-75f43491f391) +Mar 7 02:59:29.211: INFO: Unable to read jessie_udp@dns-test-service.dns-7439.svc from pod dns-7439/dns-test-ad676870-0823-47ff-80d0-75f43491f391: the server could not find the requested resource (get pods dns-test-ad676870-0823-47ff-80d0-75f43491f391) +Mar 7 02:59:29.213: INFO: Unable to read jessie_tcp@dns-test-service.dns-7439.svc from pod dns-7439/dns-test-ad676870-0823-47ff-80d0-75f43491f391: the server could not find the requested resource (get pods dns-test-ad676870-0823-47ff-80d0-75f43491f391) +Mar 7 02:59:29.216: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-7439.svc from pod dns-7439/dns-test-ad676870-0823-47ff-80d0-75f43491f391: the server could not find the requested resource (get pods dns-test-ad676870-0823-47ff-80d0-75f43491f391) +Mar 7 02:59:29.218: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-7439.svc from pod dns-7439/dns-test-ad676870-0823-47ff-80d0-75f43491f391: the server could not find the requested resource (get pods dns-test-ad676870-0823-47ff-80d0-75f43491f391) +Mar 7 02:59:29.228: INFO: Lookups using dns-7439/dns-test-ad676870-0823-47ff-80d0-75f43491f391 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-7439 wheezy_tcp@dns-test-service.dns-7439 wheezy_udp@dns-test-service.dns-7439.svc wheezy_tcp@dns-test-service.dns-7439.svc wheezy_udp@_http._tcp.dns-test-service.dns-7439.svc wheezy_tcp@_http._tcp.dns-test-service.dns-7439.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-7439 jessie_tcp@dns-test-service.dns-7439 jessie_udp@dns-test-service.dns-7439.svc jessie_tcp@dns-test-service.dns-7439.svc jessie_udp@_http._tcp.dns-test-service.dns-7439.svc jessie_tcp@_http._tcp.dns-test-service.dns-7439.svc] + +Mar 7 02:59:34.159: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-7439/dns-test-ad676870-0823-47ff-80d0-75f43491f391: the server could not find the requested resource (get pods dns-test-ad676870-0823-47ff-80d0-75f43491f391) +Mar 7 02:59:34.162: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-7439/dns-test-ad676870-0823-47ff-80d0-75f43491f391: the server could not find the requested resource (get pods dns-test-ad676870-0823-47ff-80d0-75f43491f391) +Mar 7 02:59:34.166: INFO: Unable to read wheezy_udp@dns-test-service.dns-7439 from pod dns-7439/dns-test-ad676870-0823-47ff-80d0-75f43491f391: the server could not find the requested resource (get pods dns-test-ad676870-0823-47ff-80d0-75f43491f391) +Mar 7 02:59:34.170: INFO: Unable to read wheezy_tcp@dns-test-service.dns-7439 from pod 
dns-7439/dns-test-ad676870-0823-47ff-80d0-75f43491f391: the server could not find the requested resource (get pods dns-test-ad676870-0823-47ff-80d0-75f43491f391) +Mar 7 02:59:34.174: INFO: Unable to read wheezy_udp@dns-test-service.dns-7439.svc from pod dns-7439/dns-test-ad676870-0823-47ff-80d0-75f43491f391: the server could not find the requested resource (get pods dns-test-ad676870-0823-47ff-80d0-75f43491f391) +Mar 7 02:59:34.177: INFO: Unable to read wheezy_tcp@dns-test-service.dns-7439.svc from pod dns-7439/dns-test-ad676870-0823-47ff-80d0-75f43491f391: the server could not find the requested resource (get pods dns-test-ad676870-0823-47ff-80d0-75f43491f391) +Mar 7 02:59:34.180: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-7439.svc from pod dns-7439/dns-test-ad676870-0823-47ff-80d0-75f43491f391: the server could not find the requested resource (get pods dns-test-ad676870-0823-47ff-80d0-75f43491f391) +Mar 7 02:59:34.183: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-7439.svc from pod dns-7439/dns-test-ad676870-0823-47ff-80d0-75f43491f391: the server could not find the requested resource (get pods dns-test-ad676870-0823-47ff-80d0-75f43491f391) +Mar 7 02:59:34.200: INFO: Unable to read jessie_udp@dns-test-service from pod dns-7439/dns-test-ad676870-0823-47ff-80d0-75f43491f391: the server could not find the requested resource (get pods dns-test-ad676870-0823-47ff-80d0-75f43491f391) +Mar 7 02:59:34.203: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-7439/dns-test-ad676870-0823-47ff-80d0-75f43491f391: the server could not find the requested resource (get pods dns-test-ad676870-0823-47ff-80d0-75f43491f391) +Mar 7 02:59:34.205: INFO: Unable to read jessie_udp@dns-test-service.dns-7439 from pod dns-7439/dns-test-ad676870-0823-47ff-80d0-75f43491f391: the server could not find the requested resource (get pods dns-test-ad676870-0823-47ff-80d0-75f43491f391) +Mar 7 02:59:34.207: INFO: Unable to read jessie_tcp@dns-test-service.dns-7439 from pod dns-7439/dns-test-ad676870-0823-47ff-80d0-75f43491f391: the server could not find the requested resource (get pods dns-test-ad676870-0823-47ff-80d0-75f43491f391) +Mar 7 02:59:34.210: INFO: Unable to read jessie_udp@dns-test-service.dns-7439.svc from pod dns-7439/dns-test-ad676870-0823-47ff-80d0-75f43491f391: the server could not find the requested resource (get pods dns-test-ad676870-0823-47ff-80d0-75f43491f391) +Mar 7 02:59:34.212: INFO: Unable to read jessie_tcp@dns-test-service.dns-7439.svc from pod dns-7439/dns-test-ad676870-0823-47ff-80d0-75f43491f391: the server could not find the requested resource (get pods dns-test-ad676870-0823-47ff-80d0-75f43491f391) +Mar 7 02:59:34.215: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-7439.svc from pod dns-7439/dns-test-ad676870-0823-47ff-80d0-75f43491f391: the server could not find the requested resource (get pods dns-test-ad676870-0823-47ff-80d0-75f43491f391) +Mar 7 02:59:34.217: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-7439.svc from pod dns-7439/dns-test-ad676870-0823-47ff-80d0-75f43491f391: the server could not find the requested resource (get pods dns-test-ad676870-0823-47ff-80d0-75f43491f391) +Mar 7 02:59:34.227: INFO: Lookups using dns-7439/dns-test-ad676870-0823-47ff-80d0-75f43491f391 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-7439 wheezy_tcp@dns-test-service.dns-7439 wheezy_udp@dns-test-service.dns-7439.svc wheezy_tcp@dns-test-service.dns-7439.svc 
wheezy_udp@_http._tcp.dns-test-service.dns-7439.svc wheezy_tcp@_http._tcp.dns-test-service.dns-7439.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-7439 jessie_tcp@dns-test-service.dns-7439 jessie_udp@dns-test-service.dns-7439.svc jessie_tcp@dns-test-service.dns-7439.svc jessie_udp@_http._tcp.dns-test-service.dns-7439.svc jessie_tcp@_http._tcp.dns-test-service.dns-7439.svc] + +Mar 7 02:59:39.158: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-7439/dns-test-ad676870-0823-47ff-80d0-75f43491f391: the server could not find the requested resource (get pods dns-test-ad676870-0823-47ff-80d0-75f43491f391) +Mar 7 02:59:39.161: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-7439/dns-test-ad676870-0823-47ff-80d0-75f43491f391: the server could not find the requested resource (get pods dns-test-ad676870-0823-47ff-80d0-75f43491f391) +Mar 7 02:59:39.163: INFO: Unable to read wheezy_udp@dns-test-service.dns-7439 from pod dns-7439/dns-test-ad676870-0823-47ff-80d0-75f43491f391: the server could not find the requested resource (get pods dns-test-ad676870-0823-47ff-80d0-75f43491f391) +Mar 7 02:59:39.165: INFO: Unable to read wheezy_tcp@dns-test-service.dns-7439 from pod dns-7439/dns-test-ad676870-0823-47ff-80d0-75f43491f391: the server could not find the requested resource (get pods dns-test-ad676870-0823-47ff-80d0-75f43491f391) +Mar 7 02:59:39.168: INFO: Unable to read wheezy_udp@dns-test-service.dns-7439.svc from pod dns-7439/dns-test-ad676870-0823-47ff-80d0-75f43491f391: the server could not find the requested resource (get pods dns-test-ad676870-0823-47ff-80d0-75f43491f391) +Mar 7 02:59:39.173: INFO: Unable to read wheezy_tcp@dns-test-service.dns-7439.svc from pod dns-7439/dns-test-ad676870-0823-47ff-80d0-75f43491f391: the server could not find the requested resource (get pods dns-test-ad676870-0823-47ff-80d0-75f43491f391) +Mar 7 02:59:39.176: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-7439.svc from pod dns-7439/dns-test-ad676870-0823-47ff-80d0-75f43491f391: the server could not find the requested resource (get pods dns-test-ad676870-0823-47ff-80d0-75f43491f391) +Mar 7 02:59:39.178: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-7439.svc from pod dns-7439/dns-test-ad676870-0823-47ff-80d0-75f43491f391: the server could not find the requested resource (get pods dns-test-ad676870-0823-47ff-80d0-75f43491f391) +Mar 7 02:59:39.190: INFO: Unable to read jessie_udp@dns-test-service from pod dns-7439/dns-test-ad676870-0823-47ff-80d0-75f43491f391: the server could not find the requested resource (get pods dns-test-ad676870-0823-47ff-80d0-75f43491f391) +Mar 7 02:59:39.193: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-7439/dns-test-ad676870-0823-47ff-80d0-75f43491f391: the server could not find the requested resource (get pods dns-test-ad676870-0823-47ff-80d0-75f43491f391) +Mar 7 02:59:39.196: INFO: Unable to read jessie_udp@dns-test-service.dns-7439 from pod dns-7439/dns-test-ad676870-0823-47ff-80d0-75f43491f391: the server could not find the requested resource (get pods dns-test-ad676870-0823-47ff-80d0-75f43491f391) +Mar 7 02:59:39.198: INFO: Unable to read jessie_tcp@dns-test-service.dns-7439 from pod dns-7439/dns-test-ad676870-0823-47ff-80d0-75f43491f391: the server could not find the requested resource (get pods dns-test-ad676870-0823-47ff-80d0-75f43491f391) +Mar 7 02:59:39.201: INFO: Unable to read jessie_udp@dns-test-service.dns-7439.svc from pod 
dns-7439/dns-test-ad676870-0823-47ff-80d0-75f43491f391: the server could not find the requested resource (get pods dns-test-ad676870-0823-47ff-80d0-75f43491f391) +Mar 7 02:59:39.203: INFO: Unable to read jessie_tcp@dns-test-service.dns-7439.svc from pod dns-7439/dns-test-ad676870-0823-47ff-80d0-75f43491f391: the server could not find the requested resource (get pods dns-test-ad676870-0823-47ff-80d0-75f43491f391) +Mar 7 02:59:39.206: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-7439.svc from pod dns-7439/dns-test-ad676870-0823-47ff-80d0-75f43491f391: the server could not find the requested resource (get pods dns-test-ad676870-0823-47ff-80d0-75f43491f391) +Mar 7 02:59:39.208: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-7439.svc from pod dns-7439/dns-test-ad676870-0823-47ff-80d0-75f43491f391: the server could not find the requested resource (get pods dns-test-ad676870-0823-47ff-80d0-75f43491f391) +Mar 7 02:59:39.218: INFO: Lookups using dns-7439/dns-test-ad676870-0823-47ff-80d0-75f43491f391 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-7439 wheezy_tcp@dns-test-service.dns-7439 wheezy_udp@dns-test-service.dns-7439.svc wheezy_tcp@dns-test-service.dns-7439.svc wheezy_udp@_http._tcp.dns-test-service.dns-7439.svc wheezy_tcp@_http._tcp.dns-test-service.dns-7439.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-7439 jessie_tcp@dns-test-service.dns-7439 jessie_udp@dns-test-service.dns-7439.svc jessie_tcp@dns-test-service.dns-7439.svc jessie_udp@_http._tcp.dns-test-service.dns-7439.svc jessie_tcp@_http._tcp.dns-test-service.dns-7439.svc] + +Mar 7 02:59:44.158: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-7439/dns-test-ad676870-0823-47ff-80d0-75f43491f391: the server could not find the requested resource (get pods dns-test-ad676870-0823-47ff-80d0-75f43491f391) +Mar 7 02:59:44.161: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-7439/dns-test-ad676870-0823-47ff-80d0-75f43491f391: the server could not find the requested resource (get pods dns-test-ad676870-0823-47ff-80d0-75f43491f391) +Mar 7 02:59:44.163: INFO: Unable to read wheezy_udp@dns-test-service.dns-7439 from pod dns-7439/dns-test-ad676870-0823-47ff-80d0-75f43491f391: the server could not find the requested resource (get pods dns-test-ad676870-0823-47ff-80d0-75f43491f391) +Mar 7 02:59:44.166: INFO: Unable to read wheezy_tcp@dns-test-service.dns-7439 from pod dns-7439/dns-test-ad676870-0823-47ff-80d0-75f43491f391: the server could not find the requested resource (get pods dns-test-ad676870-0823-47ff-80d0-75f43491f391) +Mar 7 02:59:44.168: INFO: Unable to read wheezy_udp@dns-test-service.dns-7439.svc from pod dns-7439/dns-test-ad676870-0823-47ff-80d0-75f43491f391: the server could not find the requested resource (get pods dns-test-ad676870-0823-47ff-80d0-75f43491f391) +Mar 7 02:59:44.170: INFO: Unable to read wheezy_tcp@dns-test-service.dns-7439.svc from pod dns-7439/dns-test-ad676870-0823-47ff-80d0-75f43491f391: the server could not find the requested resource (get pods dns-test-ad676870-0823-47ff-80d0-75f43491f391) +Mar 7 02:59:44.173: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-7439.svc from pod dns-7439/dns-test-ad676870-0823-47ff-80d0-75f43491f391: the server could not find the requested resource (get pods dns-test-ad676870-0823-47ff-80d0-75f43491f391) +Mar 7 02:59:44.175: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-7439.svc from pod 
dns-7439/dns-test-ad676870-0823-47ff-80d0-75f43491f391: the server could not find the requested resource (get pods dns-test-ad676870-0823-47ff-80d0-75f43491f391) +Mar 7 02:59:44.186: INFO: Unable to read jessie_udp@dns-test-service from pod dns-7439/dns-test-ad676870-0823-47ff-80d0-75f43491f391: the server could not find the requested resource (get pods dns-test-ad676870-0823-47ff-80d0-75f43491f391) +Mar 7 02:59:44.188: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-7439/dns-test-ad676870-0823-47ff-80d0-75f43491f391: the server could not find the requested resource (get pods dns-test-ad676870-0823-47ff-80d0-75f43491f391) +Mar 7 02:59:44.190: INFO: Unable to read jessie_udp@dns-test-service.dns-7439 from pod dns-7439/dns-test-ad676870-0823-47ff-80d0-75f43491f391: the server could not find the requested resource (get pods dns-test-ad676870-0823-47ff-80d0-75f43491f391) +Mar 7 02:59:44.196: INFO: Unable to read jessie_tcp@dns-test-service.dns-7439 from pod dns-7439/dns-test-ad676870-0823-47ff-80d0-75f43491f391: the server could not find the requested resource (get pods dns-test-ad676870-0823-47ff-80d0-75f43491f391) +Mar 7 02:59:44.199: INFO: Unable to read jessie_udp@dns-test-service.dns-7439.svc from pod dns-7439/dns-test-ad676870-0823-47ff-80d0-75f43491f391: the server could not find the requested resource (get pods dns-test-ad676870-0823-47ff-80d0-75f43491f391) +Mar 7 02:59:44.201: INFO: Unable to read jessie_tcp@dns-test-service.dns-7439.svc from pod dns-7439/dns-test-ad676870-0823-47ff-80d0-75f43491f391: the server could not find the requested resource (get pods dns-test-ad676870-0823-47ff-80d0-75f43491f391) +Mar 7 02:59:44.204: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-7439.svc from pod dns-7439/dns-test-ad676870-0823-47ff-80d0-75f43491f391: the server could not find the requested resource (get pods dns-test-ad676870-0823-47ff-80d0-75f43491f391) +Mar 7 02:59:44.206: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-7439.svc from pod dns-7439/dns-test-ad676870-0823-47ff-80d0-75f43491f391: the server could not find the requested resource (get pods dns-test-ad676870-0823-47ff-80d0-75f43491f391) +Mar 7 02:59:44.215: INFO: Lookups using dns-7439/dns-test-ad676870-0823-47ff-80d0-75f43491f391 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-7439 wheezy_tcp@dns-test-service.dns-7439 wheezy_udp@dns-test-service.dns-7439.svc wheezy_tcp@dns-test-service.dns-7439.svc wheezy_udp@_http._tcp.dns-test-service.dns-7439.svc wheezy_tcp@_http._tcp.dns-test-service.dns-7439.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-7439 jessie_tcp@dns-test-service.dns-7439 jessie_udp@dns-test-service.dns-7439.svc jessie_tcp@dns-test-service.dns-7439.svc jessie_udp@_http._tcp.dns-test-service.dns-7439.svc jessie_tcp@_http._tcp.dns-test-service.dns-7439.svc] + +Mar 7 02:59:49.158: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-7439/dns-test-ad676870-0823-47ff-80d0-75f43491f391: the server could not find the requested resource (get pods dns-test-ad676870-0823-47ff-80d0-75f43491f391) +Mar 7 02:59:49.160: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-7439/dns-test-ad676870-0823-47ff-80d0-75f43491f391: the server could not find the requested resource (get pods dns-test-ad676870-0823-47ff-80d0-75f43491f391) +Mar 7 02:59:49.163: INFO: Unable to read wheezy_udp@dns-test-service.dns-7439 from pod dns-7439/dns-test-ad676870-0823-47ff-80d0-75f43491f391: 
the server could not find the requested resource (get pods dns-test-ad676870-0823-47ff-80d0-75f43491f391) +Mar 7 02:59:49.165: INFO: Unable to read wheezy_tcp@dns-test-service.dns-7439 from pod dns-7439/dns-test-ad676870-0823-47ff-80d0-75f43491f391: the server could not find the requested resource (get pods dns-test-ad676870-0823-47ff-80d0-75f43491f391) +Mar 7 02:59:49.167: INFO: Unable to read wheezy_udp@dns-test-service.dns-7439.svc from pod dns-7439/dns-test-ad676870-0823-47ff-80d0-75f43491f391: the server could not find the requested resource (get pods dns-test-ad676870-0823-47ff-80d0-75f43491f391) +Mar 7 02:59:49.169: INFO: Unable to read wheezy_tcp@dns-test-service.dns-7439.svc from pod dns-7439/dns-test-ad676870-0823-47ff-80d0-75f43491f391: the server could not find the requested resource (get pods dns-test-ad676870-0823-47ff-80d0-75f43491f391) +Mar 7 02:59:49.172: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-7439.svc from pod dns-7439/dns-test-ad676870-0823-47ff-80d0-75f43491f391: the server could not find the requested resource (get pods dns-test-ad676870-0823-47ff-80d0-75f43491f391) +Mar 7 02:59:49.174: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-7439.svc from pod dns-7439/dns-test-ad676870-0823-47ff-80d0-75f43491f391: the server could not find the requested resource (get pods dns-test-ad676870-0823-47ff-80d0-75f43491f391) +Mar 7 02:59:49.187: INFO: Unable to read jessie_udp@dns-test-service from pod dns-7439/dns-test-ad676870-0823-47ff-80d0-75f43491f391: the server could not find the requested resource (get pods dns-test-ad676870-0823-47ff-80d0-75f43491f391) +Mar 7 02:59:49.189: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-7439/dns-test-ad676870-0823-47ff-80d0-75f43491f391: the server could not find the requested resource (get pods dns-test-ad676870-0823-47ff-80d0-75f43491f391) +Mar 7 02:59:49.191: INFO: Unable to read jessie_udp@dns-test-service.dns-7439 from pod dns-7439/dns-test-ad676870-0823-47ff-80d0-75f43491f391: the server could not find the requested resource (get pods dns-test-ad676870-0823-47ff-80d0-75f43491f391) +Mar 7 02:59:49.193: INFO: Unable to read jessie_tcp@dns-test-service.dns-7439 from pod dns-7439/dns-test-ad676870-0823-47ff-80d0-75f43491f391: the server could not find the requested resource (get pods dns-test-ad676870-0823-47ff-80d0-75f43491f391) +Mar 7 02:59:49.196: INFO: Unable to read jessie_udp@dns-test-service.dns-7439.svc from pod dns-7439/dns-test-ad676870-0823-47ff-80d0-75f43491f391: the server could not find the requested resource (get pods dns-test-ad676870-0823-47ff-80d0-75f43491f391) +Mar 7 02:59:49.198: INFO: Unable to read jessie_tcp@dns-test-service.dns-7439.svc from pod dns-7439/dns-test-ad676870-0823-47ff-80d0-75f43491f391: the server could not find the requested resource (get pods dns-test-ad676870-0823-47ff-80d0-75f43491f391) +Mar 7 02:59:49.200: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-7439.svc from pod dns-7439/dns-test-ad676870-0823-47ff-80d0-75f43491f391: the server could not find the requested resource (get pods dns-test-ad676870-0823-47ff-80d0-75f43491f391) +Mar 7 02:59:49.203: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-7439.svc from pod dns-7439/dns-test-ad676870-0823-47ff-80d0-75f43491f391: the server could not find the requested resource (get pods dns-test-ad676870-0823-47ff-80d0-75f43491f391) +Mar 7 02:59:49.219: INFO: Lookups using dns-7439/dns-test-ad676870-0823-47ff-80d0-75f43491f391 failed for: [wheezy_udp@dns-test-service 
wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-7439 wheezy_tcp@dns-test-service.dns-7439 wheezy_udp@dns-test-service.dns-7439.svc wheezy_tcp@dns-test-service.dns-7439.svc wheezy_udp@_http._tcp.dns-test-service.dns-7439.svc wheezy_tcp@_http._tcp.dns-test-service.dns-7439.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-7439 jessie_tcp@dns-test-service.dns-7439 jessie_udp@dns-test-service.dns-7439.svc jessie_tcp@dns-test-service.dns-7439.svc jessie_udp@_http._tcp.dns-test-service.dns-7439.svc jessie_tcp@_http._tcp.dns-test-service.dns-7439.svc] + +Mar 7 02:59:54.218: INFO: DNS probes using dns-7439/dns-test-ad676870-0823-47ff-80d0-75f43491f391 succeeded + +STEP: deleting the pod 03/07/23 02:59:54.218 +STEP: deleting the test service 03/07/23 02:59:54.227 +STEP: deleting the test headless service 03/07/23 02:59:54.284 +[AfterEach] [sig-network] DNS + test/e2e/framework/framework.go:187 +Mar 7 02:59:54.322: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "dns-7439" for this suite. 03/07/23 02:59:54.327 +{"msg":"PASSED [sig-network] DNS should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]","completed":82,"skipped":1472,"failed":0} +------------------------------ +• [SLOW TEST] [32.300 seconds] +[sig-network] DNS +test/e2e/network/common/framework.go:23 + should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance] + test/e2e/network/dns.go:193 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-network] DNS + test/e2e/framework/framework.go:186 + STEP: Creating a kubernetes client 03/07/23 02:59:22.035 + Mar 7 02:59:22.035: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 + STEP: Building a namespace api object, basename dns 03/07/23 02:59:22.036 + STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 02:59:22.052 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 02:59:22.054 + [It] should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance] + test/e2e/network/dns.go:193 + STEP: Creating a test headless service 03/07/23 02:59:22.056 + STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-7439 A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-7439;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-7439 A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-7439;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-7439.svc A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-7439.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-7439.svc A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-7439.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-7439.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-7439.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-7439.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-7439.svc;check="$$(dig 
+notcp +noall +answer +search _http._tcp.test-service-2.dns-7439.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-7439.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-7439.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-7439.svc;check="$$(dig +notcp +noall +answer +search 55.241.108.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.108.241.55_udp@PTR;check="$$(dig +tcp +noall +answer +search 55.241.108.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.108.241.55_tcp@PTR;sleep 1; done + 03/07/23 02:59:22.077 + STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-7439 A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-7439;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-7439 A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-7439;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-7439.svc A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-7439.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-7439.svc A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-7439.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-7439.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-7439.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-7439.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-7439.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-7439.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-7439.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-7439.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-7439.svc;check="$$(dig +notcp +noall +answer +search 55.241.108.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.108.241.55_udp@PTR;check="$$(dig +tcp +noall +answer +search 55.241.108.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.108.241.55_tcp@PTR;sleep 1; done + 03/07/23 02:59:22.077 + STEP: creating a pod to probe DNS 03/07/23 02:59:22.077 + STEP: submitting the pod to kubernetes 03/07/23 02:59:22.077 + Mar 7 02:59:22.090: INFO: Waiting up to 15m0s for pod "dns-test-ad676870-0823-47ff-80d0-75f43491f391" in namespace "dns-7439" to be "running" + Mar 7 02:59:22.094: INFO: Pod "dns-test-ad676870-0823-47ff-80d0-75f43491f391": Phase="Pending", Reason="", readiness=false. Elapsed: 4.103363ms + Mar 7 02:59:24.097: INFO: Pod "dns-test-ad676870-0823-47ff-80d0-75f43491f391": Phase="Running", Reason="", readiness=true. 
Elapsed: 2.007306367s + Mar 7 02:59:24.097: INFO: Pod "dns-test-ad676870-0823-47ff-80d0-75f43491f391" satisfied condition "running" + STEP: retrieving the pod 03/07/23 02:59:24.097 + STEP: looking for the results for each expected name from probers 03/07/23 02:59:24.1 + Mar 7 02:59:24.103: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-7439/dns-test-ad676870-0823-47ff-80d0-75f43491f391: the server could not find the requested resource (get pods dns-test-ad676870-0823-47ff-80d0-75f43491f391) + Mar 7 02:59:24.105: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-7439/dns-test-ad676870-0823-47ff-80d0-75f43491f391: the server could not find the requested resource (get pods dns-test-ad676870-0823-47ff-80d0-75f43491f391) + Mar 7 02:59:24.109: INFO: Unable to read wheezy_udp@dns-test-service.dns-7439 from pod dns-7439/dns-test-ad676870-0823-47ff-80d0-75f43491f391: the server could not find the requested resource (get pods dns-test-ad676870-0823-47ff-80d0-75f43491f391) + Mar 7 02:59:24.111: INFO: Unable to read wheezy_tcp@dns-test-service.dns-7439 from pod dns-7439/dns-test-ad676870-0823-47ff-80d0-75f43491f391: the server could not find the requested resource (get pods dns-test-ad676870-0823-47ff-80d0-75f43491f391) + Mar 7 02:59:24.113: INFO: Unable to read wheezy_udp@dns-test-service.dns-7439.svc from pod dns-7439/dns-test-ad676870-0823-47ff-80d0-75f43491f391: the server could not find the requested resource (get pods dns-test-ad676870-0823-47ff-80d0-75f43491f391) + Mar 7 02:59:24.115: INFO: Unable to read wheezy_tcp@dns-test-service.dns-7439.svc from pod dns-7439/dns-test-ad676870-0823-47ff-80d0-75f43491f391: the server could not find the requested resource (get pods dns-test-ad676870-0823-47ff-80d0-75f43491f391) + Mar 7 02:59:24.117: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-7439.svc from pod dns-7439/dns-test-ad676870-0823-47ff-80d0-75f43491f391: the server could not find the requested resource (get pods dns-test-ad676870-0823-47ff-80d0-75f43491f391) + Mar 7 02:59:24.120: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-7439.svc from pod dns-7439/dns-test-ad676870-0823-47ff-80d0-75f43491f391: the server could not find the requested resource (get pods dns-test-ad676870-0823-47ff-80d0-75f43491f391) + Mar 7 02:59:24.130: INFO: Unable to read jessie_udp@dns-test-service from pod dns-7439/dns-test-ad676870-0823-47ff-80d0-75f43491f391: the server could not find the requested resource (get pods dns-test-ad676870-0823-47ff-80d0-75f43491f391) + Mar 7 02:59:24.132: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-7439/dns-test-ad676870-0823-47ff-80d0-75f43491f391: the server could not find the requested resource (get pods dns-test-ad676870-0823-47ff-80d0-75f43491f391) + Mar 7 02:59:24.134: INFO: Unable to read jessie_udp@dns-test-service.dns-7439 from pod dns-7439/dns-test-ad676870-0823-47ff-80d0-75f43491f391: the server could not find the requested resource (get pods dns-test-ad676870-0823-47ff-80d0-75f43491f391) + Mar 7 02:59:24.136: INFO: Unable to read jessie_tcp@dns-test-service.dns-7439 from pod dns-7439/dns-test-ad676870-0823-47ff-80d0-75f43491f391: the server could not find the requested resource (get pods dns-test-ad676870-0823-47ff-80d0-75f43491f391) + Mar 7 02:59:24.138: INFO: Unable to read jessie_udp@dns-test-service.dns-7439.svc from pod dns-7439/dns-test-ad676870-0823-47ff-80d0-75f43491f391: the server could not find the requested resource (get pods dns-test-ad676870-0823-47ff-80d0-75f43491f391) + Mar 7 
02:59:24.140: INFO: Unable to read jessie_tcp@dns-test-service.dns-7439.svc from pod dns-7439/dns-test-ad676870-0823-47ff-80d0-75f43491f391: the server could not find the requested resource (get pods dns-test-ad676870-0823-47ff-80d0-75f43491f391) + Mar 7 02:59:24.143: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-7439.svc from pod dns-7439/dns-test-ad676870-0823-47ff-80d0-75f43491f391: the server could not find the requested resource (get pods dns-test-ad676870-0823-47ff-80d0-75f43491f391) + Mar 7 02:59:24.145: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-7439.svc from pod dns-7439/dns-test-ad676870-0823-47ff-80d0-75f43491f391: the server could not find the requested resource (get pods dns-test-ad676870-0823-47ff-80d0-75f43491f391) + Mar 7 02:59:24.154: INFO: Lookups using dns-7439/dns-test-ad676870-0823-47ff-80d0-75f43491f391 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-7439 wheezy_tcp@dns-test-service.dns-7439 wheezy_udp@dns-test-service.dns-7439.svc wheezy_tcp@dns-test-service.dns-7439.svc wheezy_udp@_http._tcp.dns-test-service.dns-7439.svc wheezy_tcp@_http._tcp.dns-test-service.dns-7439.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-7439 jessie_tcp@dns-test-service.dns-7439 jessie_udp@dns-test-service.dns-7439.svc jessie_tcp@dns-test-service.dns-7439.svc jessie_udp@_http._tcp.dns-test-service.dns-7439.svc jessie_tcp@_http._tcp.dns-test-service.dns-7439.svc] + + Mar 7 02:59:29.157: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-7439/dns-test-ad676870-0823-47ff-80d0-75f43491f391: the server could not find the requested resource (get pods dns-test-ad676870-0823-47ff-80d0-75f43491f391) + Mar 7 02:59:29.161: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-7439/dns-test-ad676870-0823-47ff-80d0-75f43491f391: the server could not find the requested resource (get pods dns-test-ad676870-0823-47ff-80d0-75f43491f391) + Mar 7 02:59:29.163: INFO: Unable to read wheezy_udp@dns-test-service.dns-7439 from pod dns-7439/dns-test-ad676870-0823-47ff-80d0-75f43491f391: the server could not find the requested resource (get pods dns-test-ad676870-0823-47ff-80d0-75f43491f391) + Mar 7 02:59:29.166: INFO: Unable to read wheezy_tcp@dns-test-service.dns-7439 from pod dns-7439/dns-test-ad676870-0823-47ff-80d0-75f43491f391: the server could not find the requested resource (get pods dns-test-ad676870-0823-47ff-80d0-75f43491f391) + Mar 7 02:59:29.170: INFO: Unable to read wheezy_udp@dns-test-service.dns-7439.svc from pod dns-7439/dns-test-ad676870-0823-47ff-80d0-75f43491f391: the server could not find the requested resource (get pods dns-test-ad676870-0823-47ff-80d0-75f43491f391) + Mar 7 02:59:29.172: INFO: Unable to read wheezy_tcp@dns-test-service.dns-7439.svc from pod dns-7439/dns-test-ad676870-0823-47ff-80d0-75f43491f391: the server could not find the requested resource (get pods dns-test-ad676870-0823-47ff-80d0-75f43491f391) + Mar 7 02:59:29.174: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-7439.svc from pod dns-7439/dns-test-ad676870-0823-47ff-80d0-75f43491f391: the server could not find the requested resource (get pods dns-test-ad676870-0823-47ff-80d0-75f43491f391) + Mar 7 02:59:29.179: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-7439.svc from pod dns-7439/dns-test-ad676870-0823-47ff-80d0-75f43491f391: the server could not find the requested resource (get pods dns-test-ad676870-0823-47ff-80d0-75f43491f391) + 
Mar 7 02:59:29.193: INFO: Unable to read jessie_udp@dns-test-service from pod dns-7439/dns-test-ad676870-0823-47ff-80d0-75f43491f391: the server could not find the requested resource (get pods dns-test-ad676870-0823-47ff-80d0-75f43491f391) + Mar 7 02:59:29.196: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-7439/dns-test-ad676870-0823-47ff-80d0-75f43491f391: the server could not find the requested resource (get pods dns-test-ad676870-0823-47ff-80d0-75f43491f391) + Mar 7 02:59:29.198: INFO: Unable to read jessie_udp@dns-test-service.dns-7439 from pod dns-7439/dns-test-ad676870-0823-47ff-80d0-75f43491f391: the server could not find the requested resource (get pods dns-test-ad676870-0823-47ff-80d0-75f43491f391) + Mar 7 02:59:29.208: INFO: Unable to read jessie_tcp@dns-test-service.dns-7439 from pod dns-7439/dns-test-ad676870-0823-47ff-80d0-75f43491f391: the server could not find the requested resource (get pods dns-test-ad676870-0823-47ff-80d0-75f43491f391) + Mar 7 02:59:29.211: INFO: Unable to read jessie_udp@dns-test-service.dns-7439.svc from pod dns-7439/dns-test-ad676870-0823-47ff-80d0-75f43491f391: the server could not find the requested resource (get pods dns-test-ad676870-0823-47ff-80d0-75f43491f391) + Mar 7 02:59:29.213: INFO: Unable to read jessie_tcp@dns-test-service.dns-7439.svc from pod dns-7439/dns-test-ad676870-0823-47ff-80d0-75f43491f391: the server could not find the requested resource (get pods dns-test-ad676870-0823-47ff-80d0-75f43491f391) + Mar 7 02:59:29.216: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-7439.svc from pod dns-7439/dns-test-ad676870-0823-47ff-80d0-75f43491f391: the server could not find the requested resource (get pods dns-test-ad676870-0823-47ff-80d0-75f43491f391) + Mar 7 02:59:29.218: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-7439.svc from pod dns-7439/dns-test-ad676870-0823-47ff-80d0-75f43491f391: the server could not find the requested resource (get pods dns-test-ad676870-0823-47ff-80d0-75f43491f391) + Mar 7 02:59:29.228: INFO: Lookups using dns-7439/dns-test-ad676870-0823-47ff-80d0-75f43491f391 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-7439 wheezy_tcp@dns-test-service.dns-7439 wheezy_udp@dns-test-service.dns-7439.svc wheezy_tcp@dns-test-service.dns-7439.svc wheezy_udp@_http._tcp.dns-test-service.dns-7439.svc wheezy_tcp@_http._tcp.dns-test-service.dns-7439.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-7439 jessie_tcp@dns-test-service.dns-7439 jessie_udp@dns-test-service.dns-7439.svc jessie_tcp@dns-test-service.dns-7439.svc jessie_udp@_http._tcp.dns-test-service.dns-7439.svc jessie_tcp@_http._tcp.dns-test-service.dns-7439.svc] + + Mar 7 02:59:34.159: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-7439/dns-test-ad676870-0823-47ff-80d0-75f43491f391: the server could not find the requested resource (get pods dns-test-ad676870-0823-47ff-80d0-75f43491f391) + Mar 7 02:59:34.162: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-7439/dns-test-ad676870-0823-47ff-80d0-75f43491f391: the server could not find the requested resource (get pods dns-test-ad676870-0823-47ff-80d0-75f43491f391) + Mar 7 02:59:34.166: INFO: Unable to read wheezy_udp@dns-test-service.dns-7439 from pod dns-7439/dns-test-ad676870-0823-47ff-80d0-75f43491f391: the server could not find the requested resource (get pods dns-test-ad676870-0823-47ff-80d0-75f43491f391) + Mar 7 02:59:34.170: INFO: Unable to read 
wheezy_tcp@dns-test-service.dns-7439 from pod dns-7439/dns-test-ad676870-0823-47ff-80d0-75f43491f391: the server could not find the requested resource (get pods dns-test-ad676870-0823-47ff-80d0-75f43491f391) + Mar 7 02:59:34.174: INFO: Unable to read wheezy_udp@dns-test-service.dns-7439.svc from pod dns-7439/dns-test-ad676870-0823-47ff-80d0-75f43491f391: the server could not find the requested resource (get pods dns-test-ad676870-0823-47ff-80d0-75f43491f391) + Mar 7 02:59:34.177: INFO: Unable to read wheezy_tcp@dns-test-service.dns-7439.svc from pod dns-7439/dns-test-ad676870-0823-47ff-80d0-75f43491f391: the server could not find the requested resource (get pods dns-test-ad676870-0823-47ff-80d0-75f43491f391) + Mar 7 02:59:34.180: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-7439.svc from pod dns-7439/dns-test-ad676870-0823-47ff-80d0-75f43491f391: the server could not find the requested resource (get pods dns-test-ad676870-0823-47ff-80d0-75f43491f391) + Mar 7 02:59:34.183: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-7439.svc from pod dns-7439/dns-test-ad676870-0823-47ff-80d0-75f43491f391: the server could not find the requested resource (get pods dns-test-ad676870-0823-47ff-80d0-75f43491f391) + Mar 7 02:59:34.200: INFO: Unable to read jessie_udp@dns-test-service from pod dns-7439/dns-test-ad676870-0823-47ff-80d0-75f43491f391: the server could not find the requested resource (get pods dns-test-ad676870-0823-47ff-80d0-75f43491f391) + Mar 7 02:59:34.203: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-7439/dns-test-ad676870-0823-47ff-80d0-75f43491f391: the server could not find the requested resource (get pods dns-test-ad676870-0823-47ff-80d0-75f43491f391) + Mar 7 02:59:34.205: INFO: Unable to read jessie_udp@dns-test-service.dns-7439 from pod dns-7439/dns-test-ad676870-0823-47ff-80d0-75f43491f391: the server could not find the requested resource (get pods dns-test-ad676870-0823-47ff-80d0-75f43491f391) + Mar 7 02:59:34.207: INFO: Unable to read jessie_tcp@dns-test-service.dns-7439 from pod dns-7439/dns-test-ad676870-0823-47ff-80d0-75f43491f391: the server could not find the requested resource (get pods dns-test-ad676870-0823-47ff-80d0-75f43491f391) + Mar 7 02:59:34.210: INFO: Unable to read jessie_udp@dns-test-service.dns-7439.svc from pod dns-7439/dns-test-ad676870-0823-47ff-80d0-75f43491f391: the server could not find the requested resource (get pods dns-test-ad676870-0823-47ff-80d0-75f43491f391) + Mar 7 02:59:34.212: INFO: Unable to read jessie_tcp@dns-test-service.dns-7439.svc from pod dns-7439/dns-test-ad676870-0823-47ff-80d0-75f43491f391: the server could not find the requested resource (get pods dns-test-ad676870-0823-47ff-80d0-75f43491f391) + Mar 7 02:59:34.215: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-7439.svc from pod dns-7439/dns-test-ad676870-0823-47ff-80d0-75f43491f391: the server could not find the requested resource (get pods dns-test-ad676870-0823-47ff-80d0-75f43491f391) + Mar 7 02:59:34.217: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-7439.svc from pod dns-7439/dns-test-ad676870-0823-47ff-80d0-75f43491f391: the server could not find the requested resource (get pods dns-test-ad676870-0823-47ff-80d0-75f43491f391) + Mar 7 02:59:34.227: INFO: Lookups using dns-7439/dns-test-ad676870-0823-47ff-80d0-75f43491f391 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-7439 wheezy_tcp@dns-test-service.dns-7439 
wheezy_udp@dns-test-service.dns-7439.svc wheezy_tcp@dns-test-service.dns-7439.svc wheezy_udp@_http._tcp.dns-test-service.dns-7439.svc wheezy_tcp@_http._tcp.dns-test-service.dns-7439.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-7439 jessie_tcp@dns-test-service.dns-7439 jessie_udp@dns-test-service.dns-7439.svc jessie_tcp@dns-test-service.dns-7439.svc jessie_udp@_http._tcp.dns-test-service.dns-7439.svc jessie_tcp@_http._tcp.dns-test-service.dns-7439.svc] + + Mar 7 02:59:39.158: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-7439/dns-test-ad676870-0823-47ff-80d0-75f43491f391: the server could not find the requested resource (get pods dns-test-ad676870-0823-47ff-80d0-75f43491f391) + Mar 7 02:59:39.161: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-7439/dns-test-ad676870-0823-47ff-80d0-75f43491f391: the server could not find the requested resource (get pods dns-test-ad676870-0823-47ff-80d0-75f43491f391) + Mar 7 02:59:39.163: INFO: Unable to read wheezy_udp@dns-test-service.dns-7439 from pod dns-7439/dns-test-ad676870-0823-47ff-80d0-75f43491f391: the server could not find the requested resource (get pods dns-test-ad676870-0823-47ff-80d0-75f43491f391) + Mar 7 02:59:39.165: INFO: Unable to read wheezy_tcp@dns-test-service.dns-7439 from pod dns-7439/dns-test-ad676870-0823-47ff-80d0-75f43491f391: the server could not find the requested resource (get pods dns-test-ad676870-0823-47ff-80d0-75f43491f391) + Mar 7 02:59:39.168: INFO: Unable to read wheezy_udp@dns-test-service.dns-7439.svc from pod dns-7439/dns-test-ad676870-0823-47ff-80d0-75f43491f391: the server could not find the requested resource (get pods dns-test-ad676870-0823-47ff-80d0-75f43491f391) + Mar 7 02:59:39.173: INFO: Unable to read wheezy_tcp@dns-test-service.dns-7439.svc from pod dns-7439/dns-test-ad676870-0823-47ff-80d0-75f43491f391: the server could not find the requested resource (get pods dns-test-ad676870-0823-47ff-80d0-75f43491f391) + Mar 7 02:59:39.176: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-7439.svc from pod dns-7439/dns-test-ad676870-0823-47ff-80d0-75f43491f391: the server could not find the requested resource (get pods dns-test-ad676870-0823-47ff-80d0-75f43491f391) + Mar 7 02:59:39.178: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-7439.svc from pod dns-7439/dns-test-ad676870-0823-47ff-80d0-75f43491f391: the server could not find the requested resource (get pods dns-test-ad676870-0823-47ff-80d0-75f43491f391) + Mar 7 02:59:39.190: INFO: Unable to read jessie_udp@dns-test-service from pod dns-7439/dns-test-ad676870-0823-47ff-80d0-75f43491f391: the server could not find the requested resource (get pods dns-test-ad676870-0823-47ff-80d0-75f43491f391) + Mar 7 02:59:39.193: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-7439/dns-test-ad676870-0823-47ff-80d0-75f43491f391: the server could not find the requested resource (get pods dns-test-ad676870-0823-47ff-80d0-75f43491f391) + Mar 7 02:59:39.196: INFO: Unable to read jessie_udp@dns-test-service.dns-7439 from pod dns-7439/dns-test-ad676870-0823-47ff-80d0-75f43491f391: the server could not find the requested resource (get pods dns-test-ad676870-0823-47ff-80d0-75f43491f391) + Mar 7 02:59:39.198: INFO: Unable to read jessie_tcp@dns-test-service.dns-7439 from pod dns-7439/dns-test-ad676870-0823-47ff-80d0-75f43491f391: the server could not find the requested resource (get pods dns-test-ad676870-0823-47ff-80d0-75f43491f391) + Mar 7 02:59:39.201: INFO: Unable 
to read jessie_udp@dns-test-service.dns-7439.svc from pod dns-7439/dns-test-ad676870-0823-47ff-80d0-75f43491f391: the server could not find the requested resource (get pods dns-test-ad676870-0823-47ff-80d0-75f43491f391) + Mar 7 02:59:39.203: INFO: Unable to read jessie_tcp@dns-test-service.dns-7439.svc from pod dns-7439/dns-test-ad676870-0823-47ff-80d0-75f43491f391: the server could not find the requested resource (get pods dns-test-ad676870-0823-47ff-80d0-75f43491f391) + Mar 7 02:59:39.206: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-7439.svc from pod dns-7439/dns-test-ad676870-0823-47ff-80d0-75f43491f391: the server could not find the requested resource (get pods dns-test-ad676870-0823-47ff-80d0-75f43491f391) + Mar 7 02:59:39.208: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-7439.svc from pod dns-7439/dns-test-ad676870-0823-47ff-80d0-75f43491f391: the server could not find the requested resource (get pods dns-test-ad676870-0823-47ff-80d0-75f43491f391) + Mar 7 02:59:39.218: INFO: Lookups using dns-7439/dns-test-ad676870-0823-47ff-80d0-75f43491f391 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-7439 wheezy_tcp@dns-test-service.dns-7439 wheezy_udp@dns-test-service.dns-7439.svc wheezy_tcp@dns-test-service.dns-7439.svc wheezy_udp@_http._tcp.dns-test-service.dns-7439.svc wheezy_tcp@_http._tcp.dns-test-service.dns-7439.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-7439 jessie_tcp@dns-test-service.dns-7439 jessie_udp@dns-test-service.dns-7439.svc jessie_tcp@dns-test-service.dns-7439.svc jessie_udp@_http._tcp.dns-test-service.dns-7439.svc jessie_tcp@_http._tcp.dns-test-service.dns-7439.svc] + + Mar 7 02:59:44.158: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-7439/dns-test-ad676870-0823-47ff-80d0-75f43491f391: the server could not find the requested resource (get pods dns-test-ad676870-0823-47ff-80d0-75f43491f391) + Mar 7 02:59:44.161: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-7439/dns-test-ad676870-0823-47ff-80d0-75f43491f391: the server could not find the requested resource (get pods dns-test-ad676870-0823-47ff-80d0-75f43491f391) + Mar 7 02:59:44.163: INFO: Unable to read wheezy_udp@dns-test-service.dns-7439 from pod dns-7439/dns-test-ad676870-0823-47ff-80d0-75f43491f391: the server could not find the requested resource (get pods dns-test-ad676870-0823-47ff-80d0-75f43491f391) + Mar 7 02:59:44.166: INFO: Unable to read wheezy_tcp@dns-test-service.dns-7439 from pod dns-7439/dns-test-ad676870-0823-47ff-80d0-75f43491f391: the server could not find the requested resource (get pods dns-test-ad676870-0823-47ff-80d0-75f43491f391) + Mar 7 02:59:44.168: INFO: Unable to read wheezy_udp@dns-test-service.dns-7439.svc from pod dns-7439/dns-test-ad676870-0823-47ff-80d0-75f43491f391: the server could not find the requested resource (get pods dns-test-ad676870-0823-47ff-80d0-75f43491f391) + Mar 7 02:59:44.170: INFO: Unable to read wheezy_tcp@dns-test-service.dns-7439.svc from pod dns-7439/dns-test-ad676870-0823-47ff-80d0-75f43491f391: the server could not find the requested resource (get pods dns-test-ad676870-0823-47ff-80d0-75f43491f391) + Mar 7 02:59:44.173: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-7439.svc from pod dns-7439/dns-test-ad676870-0823-47ff-80d0-75f43491f391: the server could not find the requested resource (get pods dns-test-ad676870-0823-47ff-80d0-75f43491f391) + Mar 7 02:59:44.175: INFO: Unable to 
read wheezy_tcp@_http._tcp.dns-test-service.dns-7439.svc from pod dns-7439/dns-test-ad676870-0823-47ff-80d0-75f43491f391: the server could not find the requested resource (get pods dns-test-ad676870-0823-47ff-80d0-75f43491f391) + Mar 7 02:59:44.186: INFO: Unable to read jessie_udp@dns-test-service from pod dns-7439/dns-test-ad676870-0823-47ff-80d0-75f43491f391: the server could not find the requested resource (get pods dns-test-ad676870-0823-47ff-80d0-75f43491f391) + Mar 7 02:59:44.188: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-7439/dns-test-ad676870-0823-47ff-80d0-75f43491f391: the server could not find the requested resource (get pods dns-test-ad676870-0823-47ff-80d0-75f43491f391) + Mar 7 02:59:44.190: INFO: Unable to read jessie_udp@dns-test-service.dns-7439 from pod dns-7439/dns-test-ad676870-0823-47ff-80d0-75f43491f391: the server could not find the requested resource (get pods dns-test-ad676870-0823-47ff-80d0-75f43491f391) + Mar 7 02:59:44.196: INFO: Unable to read jessie_tcp@dns-test-service.dns-7439 from pod dns-7439/dns-test-ad676870-0823-47ff-80d0-75f43491f391: the server could not find the requested resource (get pods dns-test-ad676870-0823-47ff-80d0-75f43491f391) + Mar 7 02:59:44.199: INFO: Unable to read jessie_udp@dns-test-service.dns-7439.svc from pod dns-7439/dns-test-ad676870-0823-47ff-80d0-75f43491f391: the server could not find the requested resource (get pods dns-test-ad676870-0823-47ff-80d0-75f43491f391) + Mar 7 02:59:44.201: INFO: Unable to read jessie_tcp@dns-test-service.dns-7439.svc from pod dns-7439/dns-test-ad676870-0823-47ff-80d0-75f43491f391: the server could not find the requested resource (get pods dns-test-ad676870-0823-47ff-80d0-75f43491f391) + Mar 7 02:59:44.204: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-7439.svc from pod dns-7439/dns-test-ad676870-0823-47ff-80d0-75f43491f391: the server could not find the requested resource (get pods dns-test-ad676870-0823-47ff-80d0-75f43491f391) + Mar 7 02:59:44.206: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-7439.svc from pod dns-7439/dns-test-ad676870-0823-47ff-80d0-75f43491f391: the server could not find the requested resource (get pods dns-test-ad676870-0823-47ff-80d0-75f43491f391) + Mar 7 02:59:44.215: INFO: Lookups using dns-7439/dns-test-ad676870-0823-47ff-80d0-75f43491f391 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-7439 wheezy_tcp@dns-test-service.dns-7439 wheezy_udp@dns-test-service.dns-7439.svc wheezy_tcp@dns-test-service.dns-7439.svc wheezy_udp@_http._tcp.dns-test-service.dns-7439.svc wheezy_tcp@_http._tcp.dns-test-service.dns-7439.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-7439 jessie_tcp@dns-test-service.dns-7439 jessie_udp@dns-test-service.dns-7439.svc jessie_tcp@dns-test-service.dns-7439.svc jessie_udp@_http._tcp.dns-test-service.dns-7439.svc jessie_tcp@_http._tcp.dns-test-service.dns-7439.svc] + + Mar 7 02:59:49.158: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-7439/dns-test-ad676870-0823-47ff-80d0-75f43491f391: the server could not find the requested resource (get pods dns-test-ad676870-0823-47ff-80d0-75f43491f391) + Mar 7 02:59:49.160: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-7439/dns-test-ad676870-0823-47ff-80d0-75f43491f391: the server could not find the requested resource (get pods dns-test-ad676870-0823-47ff-80d0-75f43491f391) + Mar 7 02:59:49.163: INFO: Unable to read 
wheezy_udp@dns-test-service.dns-7439 from pod dns-7439/dns-test-ad676870-0823-47ff-80d0-75f43491f391: the server could not find the requested resource (get pods dns-test-ad676870-0823-47ff-80d0-75f43491f391) + Mar 7 02:59:49.165: INFO: Unable to read wheezy_tcp@dns-test-service.dns-7439 from pod dns-7439/dns-test-ad676870-0823-47ff-80d0-75f43491f391: the server could not find the requested resource (get pods dns-test-ad676870-0823-47ff-80d0-75f43491f391) + Mar 7 02:59:49.167: INFO: Unable to read wheezy_udp@dns-test-service.dns-7439.svc from pod dns-7439/dns-test-ad676870-0823-47ff-80d0-75f43491f391: the server could not find the requested resource (get pods dns-test-ad676870-0823-47ff-80d0-75f43491f391) + Mar 7 02:59:49.169: INFO: Unable to read wheezy_tcp@dns-test-service.dns-7439.svc from pod dns-7439/dns-test-ad676870-0823-47ff-80d0-75f43491f391: the server could not find the requested resource (get pods dns-test-ad676870-0823-47ff-80d0-75f43491f391) + Mar 7 02:59:49.172: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-7439.svc from pod dns-7439/dns-test-ad676870-0823-47ff-80d0-75f43491f391: the server could not find the requested resource (get pods dns-test-ad676870-0823-47ff-80d0-75f43491f391) + Mar 7 02:59:49.174: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-7439.svc from pod dns-7439/dns-test-ad676870-0823-47ff-80d0-75f43491f391: the server could not find the requested resource (get pods dns-test-ad676870-0823-47ff-80d0-75f43491f391) + Mar 7 02:59:49.187: INFO: Unable to read jessie_udp@dns-test-service from pod dns-7439/dns-test-ad676870-0823-47ff-80d0-75f43491f391: the server could not find the requested resource (get pods dns-test-ad676870-0823-47ff-80d0-75f43491f391) + Mar 7 02:59:49.189: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-7439/dns-test-ad676870-0823-47ff-80d0-75f43491f391: the server could not find the requested resource (get pods dns-test-ad676870-0823-47ff-80d0-75f43491f391) + Mar 7 02:59:49.191: INFO: Unable to read jessie_udp@dns-test-service.dns-7439 from pod dns-7439/dns-test-ad676870-0823-47ff-80d0-75f43491f391: the server could not find the requested resource (get pods dns-test-ad676870-0823-47ff-80d0-75f43491f391) + Mar 7 02:59:49.193: INFO: Unable to read jessie_tcp@dns-test-service.dns-7439 from pod dns-7439/dns-test-ad676870-0823-47ff-80d0-75f43491f391: the server could not find the requested resource (get pods dns-test-ad676870-0823-47ff-80d0-75f43491f391) + Mar 7 02:59:49.196: INFO: Unable to read jessie_udp@dns-test-service.dns-7439.svc from pod dns-7439/dns-test-ad676870-0823-47ff-80d0-75f43491f391: the server could not find the requested resource (get pods dns-test-ad676870-0823-47ff-80d0-75f43491f391) + Mar 7 02:59:49.198: INFO: Unable to read jessie_tcp@dns-test-service.dns-7439.svc from pod dns-7439/dns-test-ad676870-0823-47ff-80d0-75f43491f391: the server could not find the requested resource (get pods dns-test-ad676870-0823-47ff-80d0-75f43491f391) + Mar 7 02:59:49.200: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-7439.svc from pod dns-7439/dns-test-ad676870-0823-47ff-80d0-75f43491f391: the server could not find the requested resource (get pods dns-test-ad676870-0823-47ff-80d0-75f43491f391) + Mar 7 02:59:49.203: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-7439.svc from pod dns-7439/dns-test-ad676870-0823-47ff-80d0-75f43491f391: the server could not find the requested resource (get pods dns-test-ad676870-0823-47ff-80d0-75f43491f391) + Mar 7 02:59:49.219: 
INFO: Lookups using dns-7439/dns-test-ad676870-0823-47ff-80d0-75f43491f391 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-7439 wheezy_tcp@dns-test-service.dns-7439 wheezy_udp@dns-test-service.dns-7439.svc wheezy_tcp@dns-test-service.dns-7439.svc wheezy_udp@_http._tcp.dns-test-service.dns-7439.svc wheezy_tcp@_http._tcp.dns-test-service.dns-7439.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-7439 jessie_tcp@dns-test-service.dns-7439 jessie_udp@dns-test-service.dns-7439.svc jessie_tcp@dns-test-service.dns-7439.svc jessie_udp@_http._tcp.dns-test-service.dns-7439.svc jessie_tcp@_http._tcp.dns-test-service.dns-7439.svc] + + Mar 7 02:59:54.218: INFO: DNS probes using dns-7439/dns-test-ad676870-0823-47ff-80d0-75f43491f391 succeeded + + STEP: deleting the pod 03/07/23 02:59:54.218 + STEP: deleting the test service 03/07/23 02:59:54.227 + STEP: deleting the test headless service 03/07/23 02:59:54.284 + [AfterEach] [sig-network] DNS + test/e2e/framework/framework.go:187 + Mar 7 02:59:54.322: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + STEP: Destroying namespace "dns-7439" for this suite. 03/07/23 02:59:54.327 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-auth] ServiceAccounts + should mount an API token into pods [Conformance] + test/e2e/auth/service_accounts.go:75 +[BeforeEach] [sig-auth] ServiceAccounts + test/e2e/framework/framework.go:186 +STEP: Creating a kubernetes client 03/07/23 02:59:54.337 +Mar 7 02:59:54.337: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 +STEP: Building a namespace api object, basename svcaccounts 03/07/23 02:59:54.338 +STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 02:59:54.357 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 02:59:54.36 +[It] should mount an API token into pods [Conformance] + test/e2e/auth/service_accounts.go:75 +Mar 7 02:59:54.373: INFO: Waiting up to 5m0s for pod "pod-service-account-18f40aec-29b5-41b1-a88c-a796421e016a" in namespace "svcaccounts-8740" to be "running" +Mar 7 02:59:54.386: INFO: Pod "pod-service-account-18f40aec-29b5-41b1-a88c-a796421e016a": Phase="Pending", Reason="", readiness=false. Elapsed: 13.524935ms +Mar 7 02:59:56.389: INFO: Pod "pod-service-account-18f40aec-29b5-41b1-a88c-a796421e016a": Phase="Running", Reason="", readiness=true. 
Elapsed: 2.016306269s +Mar 7 02:59:56.389: INFO: Pod "pod-service-account-18f40aec-29b5-41b1-a88c-a796421e016a" satisfied condition "running" +STEP: reading a file in the container 03/07/23 02:59:56.389 +Mar 7 02:59:56.389: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-8740 pod-service-account-18f40aec-29b5-41b1-a88c-a796421e016a -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/token' +STEP: reading a file in the container 03/07/23 02:59:56.572 +Mar 7 02:59:56.572: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-8740 pod-service-account-18f40aec-29b5-41b1-a88c-a796421e016a -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/ca.crt' +STEP: reading a file in the container 03/07/23 02:59:56.745 +Mar 7 02:59:56.745: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-8740 pod-service-account-18f40aec-29b5-41b1-a88c-a796421e016a -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/namespace' +Mar 7 02:59:56.927: INFO: Got root ca configmap in namespace "svcaccounts-8740" +[AfterEach] [sig-auth] ServiceAccounts + test/e2e/framework/framework.go:187 +Mar 7 02:59:56.928: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "svcaccounts-8740" for this suite. 03/07/23 02:59:56.932 +{"msg":"PASSED [sig-auth] ServiceAccounts should mount an API token into pods [Conformance]","completed":83,"skipped":1491,"failed":0} +------------------------------ +• [2.600 seconds] +[sig-auth] ServiceAccounts +test/e2e/auth/framework.go:23 + should mount an API token into pods [Conformance] + test/e2e/auth/service_accounts.go:75 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-auth] ServiceAccounts + test/e2e/framework/framework.go:186 + STEP: Creating a kubernetes client 03/07/23 02:59:54.337 + Mar 7 02:59:54.337: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 + STEP: Building a namespace api object, basename svcaccounts 03/07/23 02:59:54.338 + STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 02:59:54.357 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 02:59:54.36 + [It] should mount an API token into pods [Conformance] + test/e2e/auth/service_accounts.go:75 + Mar 7 02:59:54.373: INFO: Waiting up to 5m0s for pod "pod-service-account-18f40aec-29b5-41b1-a88c-a796421e016a" in namespace "svcaccounts-8740" to be "running" + Mar 7 02:59:54.386: INFO: Pod "pod-service-account-18f40aec-29b5-41b1-a88c-a796421e016a": Phase="Pending", Reason="", readiness=false. Elapsed: 13.524935ms + Mar 7 02:59:56.389: INFO: Pod "pod-service-account-18f40aec-29b5-41b1-a88c-a796421e016a": Phase="Running", Reason="", readiness=true. 
Elapsed: 2.016306269s + Mar 7 02:59:56.389: INFO: Pod "pod-service-account-18f40aec-29b5-41b1-a88c-a796421e016a" satisfied condition "running" + STEP: reading a file in the container 03/07/23 02:59:56.389 + Mar 7 02:59:56.389: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-8740 pod-service-account-18f40aec-29b5-41b1-a88c-a796421e016a -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/token' + STEP: reading a file in the container 03/07/23 02:59:56.572 + Mar 7 02:59:56.572: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-8740 pod-service-account-18f40aec-29b5-41b1-a88c-a796421e016a -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/ca.crt' + STEP: reading a file in the container 03/07/23 02:59:56.745 + Mar 7 02:59:56.745: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-8740 pod-service-account-18f40aec-29b5-41b1-a88c-a796421e016a -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/namespace' + Mar 7 02:59:56.927: INFO: Got root ca configmap in namespace "svcaccounts-8740" + [AfterEach] [sig-auth] ServiceAccounts + test/e2e/framework/framework.go:187 + Mar 7 02:59:56.928: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + STEP: Destroying namespace "svcaccounts-8740" for this suite. 03/07/23 02:59:56.932 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSSSSSSSS +------------------------------ +[sig-scheduling] SchedulerPreemption [Serial] + validates basic preemption works [Conformance] + test/e2e/scheduling/preemption.go:125 +[BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] + test/e2e/framework/framework.go:186 +STEP: Creating a kubernetes client 03/07/23 02:59:56.938 +Mar 7 02:59:56.938: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 +STEP: Building a namespace api object, basename sched-preemption 03/07/23 02:59:56.938 +STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 02:59:56.95 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 02:59:56.952 +[BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] + test/e2e/scheduling/preemption.go:92 +Mar 7 02:59:56.962: INFO: Waiting up to 1m0s for all nodes to be ready +Mar 7 03:00:56.998: INFO: Waiting for terminating namespaces to be deleted... +[It] validates basic preemption works [Conformance] + test/e2e/scheduling/preemption.go:125 +STEP: Create pods that use 4/5 of node resources. 03/07/23 03:00:57 +Mar 7 03:00:57.020: INFO: Created pod: pod0-0-sched-preemption-low-priority +Mar 7 03:00:57.028: INFO: Created pod: pod0-1-sched-preemption-medium-priority +Mar 7 03:00:57.053: INFO: Created pod: pod1-0-sched-preemption-medium-priority +Mar 7 03:00:57.057: INFO: Created pod: pod1-1-sched-preemption-medium-priority +Mar 7 03:00:57.075: INFO: Created pod: pod2-0-sched-preemption-medium-priority +Mar 7 03:00:57.080: INFO: Created pod: pod2-1-sched-preemption-medium-priority +STEP: Wait for pods to be scheduled. 03/07/23 03:00:57.08 +Mar 7 03:00:57.081: INFO: Waiting up to 5m0s for pod "pod0-0-sched-preemption-low-priority" in namespace "sched-preemption-8490" to be "running" +Mar 7 03:00:57.085: INFO: Pod "pod0-0-sched-preemption-low-priority": Phase="Pending", Reason="", readiness=false. Elapsed: 4.063927ms +Mar 7 03:00:59.088: INFO: Pod "pod0-0-sched-preemption-low-priority": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.007650364s +Mar 7 03:01:01.089: INFO: Pod "pod0-0-sched-preemption-low-priority": Phase="Pending", Reason="", readiness=false. Elapsed: 4.008574933s +Mar 7 03:01:03.090: INFO: Pod "pod0-0-sched-preemption-low-priority": Phase="Pending", Reason="", readiness=false. Elapsed: 6.008777949s +Mar 7 03:01:05.090: INFO: Pod "pod0-0-sched-preemption-low-priority": Phase="Pending", Reason="", readiness=false. Elapsed: 8.00893851s +Mar 7 03:01:07.089: INFO: Pod "pod0-0-sched-preemption-low-priority": Phase="Pending", Reason="", readiness=false. Elapsed: 10.008249188s +Mar 7 03:01:09.088: INFO: Pod "pod0-0-sched-preemption-low-priority": Phase="Running", Reason="", readiness=true. Elapsed: 12.006987153s +Mar 7 03:01:09.088: INFO: Pod "pod0-0-sched-preemption-low-priority" satisfied condition "running" +Mar 7 03:01:09.088: INFO: Waiting up to 5m0s for pod "pod0-1-sched-preemption-medium-priority" in namespace "sched-preemption-8490" to be "running" +Mar 7 03:01:09.090: INFO: Pod "pod0-1-sched-preemption-medium-priority": Phase="Running", Reason="", readiness=true. Elapsed: 2.269012ms +Mar 7 03:01:09.090: INFO: Pod "pod0-1-sched-preemption-medium-priority" satisfied condition "running" +Mar 7 03:01:09.090: INFO: Waiting up to 5m0s for pod "pod1-0-sched-preemption-medium-priority" in namespace "sched-preemption-8490" to be "running" +Mar 7 03:01:09.092: INFO: Pod "pod1-0-sched-preemption-medium-priority": Phase="Running", Reason="", readiness=true. Elapsed: 2.130736ms +Mar 7 03:01:09.092: INFO: Pod "pod1-0-sched-preemption-medium-priority" satisfied condition "running" +Mar 7 03:01:09.092: INFO: Waiting up to 5m0s for pod "pod1-1-sched-preemption-medium-priority" in namespace "sched-preemption-8490" to be "running" +Mar 7 03:01:09.096: INFO: Pod "pod1-1-sched-preemption-medium-priority": Phase="Running", Reason="", readiness=true. Elapsed: 4.028446ms +Mar 7 03:01:09.096: INFO: Pod "pod1-1-sched-preemption-medium-priority" satisfied condition "running" +Mar 7 03:01:09.096: INFO: Waiting up to 5m0s for pod "pod2-0-sched-preemption-medium-priority" in namespace "sched-preemption-8490" to be "running" +Mar 7 03:01:09.099: INFO: Pod "pod2-0-sched-preemption-medium-priority": Phase="Running", Reason="", readiness=true. Elapsed: 2.657197ms +Mar 7 03:01:09.099: INFO: Pod "pod2-0-sched-preemption-medium-priority" satisfied condition "running" +Mar 7 03:01:09.099: INFO: Waiting up to 5m0s for pod "pod2-1-sched-preemption-medium-priority" in namespace "sched-preemption-8490" to be "running" +Mar 7 03:01:09.101: INFO: Pod "pod2-1-sched-preemption-medium-priority": Phase="Running", Reason="", readiness=true. Elapsed: 2.231471ms +Mar 7 03:01:09.101: INFO: Pod "pod2-1-sched-preemption-medium-priority" satisfied condition "running" +STEP: Run a high priority pod that has same requirements as that of lower priority pod 03/07/23 03:01:09.101 +Mar 7 03:01:09.105: INFO: Waiting up to 2m0s for pod "preemptor-pod" in namespace "sched-preemption-8490" to be "running" +Mar 7 03:01:09.108: INFO: Pod "preemptor-pod": Phase="Pending", Reason="", readiness=false. Elapsed: 3.196198ms +Mar 7 03:01:11.112: INFO: Pod "preemptor-pod": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006665207s +Mar 7 03:01:13.112: INFO: Pod "preemptor-pod": Phase="Running", Reason="", readiness=true. 
Elapsed: 4.006865011s +Mar 7 03:01:13.112: INFO: Pod "preemptor-pod" satisfied condition "running" +[AfterEach] [sig-scheduling] SchedulerPreemption [Serial] + test/e2e/framework/framework.go:187 +Mar 7 03:01:13.126: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "sched-preemption-8490" for this suite. 03/07/23 03:01:13.13 +[AfterEach] [sig-scheduling] SchedulerPreemption [Serial] + test/e2e/scheduling/preemption.go:80 +{"msg":"PASSED [sig-scheduling] SchedulerPreemption [Serial] validates basic preemption works [Conformance]","completed":84,"skipped":1508,"failed":0} +------------------------------ +• [SLOW TEST] [76.239 seconds] +[sig-scheduling] SchedulerPreemption [Serial] +test/e2e/scheduling/framework.go:40 + validates basic preemption works [Conformance] + test/e2e/scheduling/preemption.go:125 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] + test/e2e/framework/framework.go:186 + STEP: Creating a kubernetes client 03/07/23 02:59:56.938 + Mar 7 02:59:56.938: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 + STEP: Building a namespace api object, basename sched-preemption 03/07/23 02:59:56.938 + STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 02:59:56.95 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 02:59:56.952 + [BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] + test/e2e/scheduling/preemption.go:92 + Mar 7 02:59:56.962: INFO: Waiting up to 1m0s for all nodes to be ready + Mar 7 03:00:56.998: INFO: Waiting for terminating namespaces to be deleted... + [It] validates basic preemption works [Conformance] + test/e2e/scheduling/preemption.go:125 + STEP: Create pods that use 4/5 of node resources. 03/07/23 03:00:57 + Mar 7 03:00:57.020: INFO: Created pod: pod0-0-sched-preemption-low-priority + Mar 7 03:00:57.028: INFO: Created pod: pod0-1-sched-preemption-medium-priority + Mar 7 03:00:57.053: INFO: Created pod: pod1-0-sched-preemption-medium-priority + Mar 7 03:00:57.057: INFO: Created pod: pod1-1-sched-preemption-medium-priority + Mar 7 03:00:57.075: INFO: Created pod: pod2-0-sched-preemption-medium-priority + Mar 7 03:00:57.080: INFO: Created pod: pod2-1-sched-preemption-medium-priority + STEP: Wait for pods to be scheduled. 03/07/23 03:00:57.08 + Mar 7 03:00:57.081: INFO: Waiting up to 5m0s for pod "pod0-0-sched-preemption-low-priority" in namespace "sched-preemption-8490" to be "running" + Mar 7 03:00:57.085: INFO: Pod "pod0-0-sched-preemption-low-priority": Phase="Pending", Reason="", readiness=false. Elapsed: 4.063927ms + Mar 7 03:00:59.088: INFO: Pod "pod0-0-sched-preemption-low-priority": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007650364s + Mar 7 03:01:01.089: INFO: Pod "pod0-0-sched-preemption-low-priority": Phase="Pending", Reason="", readiness=false. Elapsed: 4.008574933s + Mar 7 03:01:03.090: INFO: Pod "pod0-0-sched-preemption-low-priority": Phase="Pending", Reason="", readiness=false. Elapsed: 6.008777949s + Mar 7 03:01:05.090: INFO: Pod "pod0-0-sched-preemption-low-priority": Phase="Pending", Reason="", readiness=false. Elapsed: 8.00893851s + Mar 7 03:01:07.089: INFO: Pod "pod0-0-sched-preemption-low-priority": Phase="Pending", Reason="", readiness=false. Elapsed: 10.008249188s + Mar 7 03:01:09.088: INFO: Pod "pod0-0-sched-preemption-low-priority": Phase="Running", Reason="", readiness=true. 
Elapsed: 12.006987153s + Mar 7 03:01:09.088: INFO: Pod "pod0-0-sched-preemption-low-priority" satisfied condition "running" + Mar 7 03:01:09.088: INFO: Waiting up to 5m0s for pod "pod0-1-sched-preemption-medium-priority" in namespace "sched-preemption-8490" to be "running" + Mar 7 03:01:09.090: INFO: Pod "pod0-1-sched-preemption-medium-priority": Phase="Running", Reason="", readiness=true. Elapsed: 2.269012ms + Mar 7 03:01:09.090: INFO: Pod "pod0-1-sched-preemption-medium-priority" satisfied condition "running" + Mar 7 03:01:09.090: INFO: Waiting up to 5m0s for pod "pod1-0-sched-preemption-medium-priority" in namespace "sched-preemption-8490" to be "running" + Mar 7 03:01:09.092: INFO: Pod "pod1-0-sched-preemption-medium-priority": Phase="Running", Reason="", readiness=true. Elapsed: 2.130736ms + Mar 7 03:01:09.092: INFO: Pod "pod1-0-sched-preemption-medium-priority" satisfied condition "running" + Mar 7 03:01:09.092: INFO: Waiting up to 5m0s for pod "pod1-1-sched-preemption-medium-priority" in namespace "sched-preemption-8490" to be "running" + Mar 7 03:01:09.096: INFO: Pod "pod1-1-sched-preemption-medium-priority": Phase="Running", Reason="", readiness=true. Elapsed: 4.028446ms + Mar 7 03:01:09.096: INFO: Pod "pod1-1-sched-preemption-medium-priority" satisfied condition "running" + Mar 7 03:01:09.096: INFO: Waiting up to 5m0s for pod "pod2-0-sched-preemption-medium-priority" in namespace "sched-preemption-8490" to be "running" + Mar 7 03:01:09.099: INFO: Pod "pod2-0-sched-preemption-medium-priority": Phase="Running", Reason="", readiness=true. Elapsed: 2.657197ms + Mar 7 03:01:09.099: INFO: Pod "pod2-0-sched-preemption-medium-priority" satisfied condition "running" + Mar 7 03:01:09.099: INFO: Waiting up to 5m0s for pod "pod2-1-sched-preemption-medium-priority" in namespace "sched-preemption-8490" to be "running" + Mar 7 03:01:09.101: INFO: Pod "pod2-1-sched-preemption-medium-priority": Phase="Running", Reason="", readiness=true. Elapsed: 2.231471ms + Mar 7 03:01:09.101: INFO: Pod "pod2-1-sched-preemption-medium-priority" satisfied condition "running" + STEP: Run a high priority pod that has same requirements as that of lower priority pod 03/07/23 03:01:09.101 + Mar 7 03:01:09.105: INFO: Waiting up to 2m0s for pod "preemptor-pod" in namespace "sched-preemption-8490" to be "running" + Mar 7 03:01:09.108: INFO: Pod "preemptor-pod": Phase="Pending", Reason="", readiness=false. Elapsed: 3.196198ms + Mar 7 03:01:11.112: INFO: Pod "preemptor-pod": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006665207s + Mar 7 03:01:13.112: INFO: Pod "preemptor-pod": Phase="Running", Reason="", readiness=true. Elapsed: 4.006865011s + Mar 7 03:01:13.112: INFO: Pod "preemptor-pod" satisfied condition "running" + [AfterEach] [sig-scheduling] SchedulerPreemption [Serial] + test/e2e/framework/framework.go:187 + Mar 7 03:01:13.126: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + STEP: Destroying namespace "sched-preemption-8490" for this suite. 
03/07/23 03:01:13.13 + [AfterEach] [sig-scheduling] SchedulerPreemption [Serial] + test/e2e/scheduling/preemption.go:80 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-node] Probing container + should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] + test/e2e/common/node/container_probe.go:211 +[BeforeEach] [sig-node] Probing container + test/e2e/framework/framework.go:186 +STEP: Creating a kubernetes client 03/07/23 03:01:13.179 +Mar 7 03:01:13.179: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 +STEP: Building a namespace api object, basename container-probe 03/07/23 03:01:13.18 +STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 03:01:13.191 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 03:01:13.193 +[BeforeEach] [sig-node] Probing container + test/e2e/common/node/container_probe.go:59 +[It] should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] + test/e2e/common/node/container_probe.go:211 +STEP: Creating pod test-webserver-230d7b68-f862-4677-80f3-5296b18b85d7 in namespace container-probe-875 03/07/23 03:01:13.195 +Mar 7 03:01:13.201: INFO: Waiting up to 5m0s for pod "test-webserver-230d7b68-f862-4677-80f3-5296b18b85d7" in namespace "container-probe-875" to be "not pending" +Mar 7 03:01:13.203: INFO: Pod "test-webserver-230d7b68-f862-4677-80f3-5296b18b85d7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.284206ms +Mar 7 03:01:15.207: INFO: Pod "test-webserver-230d7b68-f862-4677-80f3-5296b18b85d7": Phase="Running", Reason="", readiness=true. Elapsed: 2.005810129s +Mar 7 03:01:15.207: INFO: Pod "test-webserver-230d7b68-f862-4677-80f3-5296b18b85d7" satisfied condition "not pending" +Mar 7 03:01:15.207: INFO: Started pod test-webserver-230d7b68-f862-4677-80f3-5296b18b85d7 in namespace container-probe-875 +STEP: checking the pod's current state and verifying that restartCount is present 03/07/23 03:01:15.207 +Mar 7 03:01:15.209: INFO: Initial restart count of pod test-webserver-230d7b68-f862-4677-80f3-5296b18b85d7 is 0 +STEP: deleting the pod 03/07/23 03:05:15.719 +[AfterEach] [sig-node] Probing container + test/e2e/framework/framework.go:187 +Mar 7 03:05:15.735: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "container-probe-875" for this suite. 
03/07/23 03:05:15.745 +{"msg":"PASSED [sig-node] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","completed":85,"skipped":1550,"failed":0} +------------------------------ +• [SLOW TEST] [242.573 seconds] +[sig-node] Probing container +test/e2e/common/node/framework.go:23 + should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] + test/e2e/common/node/container_probe.go:211 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-node] Probing container + test/e2e/framework/framework.go:186 + STEP: Creating a kubernetes client 03/07/23 03:01:13.179 + Mar 7 03:01:13.179: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 + STEP: Building a namespace api object, basename container-probe 03/07/23 03:01:13.18 + STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 03:01:13.191 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 03:01:13.193 + [BeforeEach] [sig-node] Probing container + test/e2e/common/node/container_probe.go:59 + [It] should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] + test/e2e/common/node/container_probe.go:211 + STEP: Creating pod test-webserver-230d7b68-f862-4677-80f3-5296b18b85d7 in namespace container-probe-875 03/07/23 03:01:13.195 + Mar 7 03:01:13.201: INFO: Waiting up to 5m0s for pod "test-webserver-230d7b68-f862-4677-80f3-5296b18b85d7" in namespace "container-probe-875" to be "not pending" + Mar 7 03:01:13.203: INFO: Pod "test-webserver-230d7b68-f862-4677-80f3-5296b18b85d7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.284206ms + Mar 7 03:01:15.207: INFO: Pod "test-webserver-230d7b68-f862-4677-80f3-5296b18b85d7": Phase="Running", Reason="", readiness=true. Elapsed: 2.005810129s + Mar 7 03:01:15.207: INFO: Pod "test-webserver-230d7b68-f862-4677-80f3-5296b18b85d7" satisfied condition "not pending" + Mar 7 03:01:15.207: INFO: Started pod test-webserver-230d7b68-f862-4677-80f3-5296b18b85d7 in namespace container-probe-875 + STEP: checking the pod's current state and verifying that restartCount is present 03/07/23 03:01:15.207 + Mar 7 03:01:15.209: INFO: Initial restart count of pod test-webserver-230d7b68-f862-4677-80f3-5296b18b85d7 is 0 + STEP: deleting the pod 03/07/23 03:05:15.719 + [AfterEach] [sig-node] Probing container + test/e2e/framework/framework.go:187 + Mar 7 03:05:15.735: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + STEP: Destroying namespace "container-probe-875" for this suite. 
03/07/23 03:05:15.745 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSSS +------------------------------ +[sig-node] Security Context When creating a container with runAsUser + should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/common/node/security_context.go:346 +[BeforeEach] [sig-node] Security Context + test/e2e/framework/framework.go:186 +STEP: Creating a kubernetes client 03/07/23 03:05:15.753 +Mar 7 03:05:15.753: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 +STEP: Building a namespace api object, basename security-context-test 03/07/23 03:05:15.754 +STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 03:05:15.774 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 03:05:15.776 +[BeforeEach] [sig-node] Security Context + test/e2e/common/node/security_context.go:49 +[It] should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/common/node/security_context.go:346 +Mar 7 03:05:15.785: INFO: Waiting up to 5m0s for pod "busybox-user-65534-df1b011f-f1b9-4b6c-a84b-ea212028cdf5" in namespace "security-context-test-5136" to be "Succeeded or Failed" +Mar 7 03:05:15.787: INFO: Pod "busybox-user-65534-df1b011f-f1b9-4b6c-a84b-ea212028cdf5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.168738ms +Mar 7 03:05:17.791: INFO: Pod "busybox-user-65534-df1b011f-f1b9-4b6c-a84b-ea212028cdf5": Phase="Running", Reason="", readiness=false. Elapsed: 2.005885537s +Mar 7 03:05:19.791: INFO: Pod "busybox-user-65534-df1b011f-f1b9-4b6c-a84b-ea212028cdf5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.00548824s +Mar 7 03:05:19.791: INFO: Pod "busybox-user-65534-df1b011f-f1b9-4b6c-a84b-ea212028cdf5" satisfied condition "Succeeded or Failed" +[AfterEach] [sig-node] Security Context + test/e2e/framework/framework.go:187 +Mar 7 03:05:19.791: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "security-context-test-5136" for this suite. 
03/07/23 03:05:19.794 +{"msg":"PASSED [sig-node] Security Context When creating a container with runAsUser should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]","completed":86,"skipped":1562,"failed":0} +------------------------------ +• [4.045 seconds] +[sig-node] Security Context +test/e2e/common/node/framework.go:23 + When creating a container with runAsUser + test/e2e/common/node/security_context.go:308 + should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/common/node/security_context.go:346 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-node] Security Context + test/e2e/framework/framework.go:186 + STEP: Creating a kubernetes client 03/07/23 03:05:15.753 + Mar 7 03:05:15.753: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 + STEP: Building a namespace api object, basename security-context-test 03/07/23 03:05:15.754 + STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 03:05:15.774 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 03:05:15.776 + [BeforeEach] [sig-node] Security Context + test/e2e/common/node/security_context.go:49 + [It] should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/common/node/security_context.go:346 + Mar 7 03:05:15.785: INFO: Waiting up to 5m0s for pod "busybox-user-65534-df1b011f-f1b9-4b6c-a84b-ea212028cdf5" in namespace "security-context-test-5136" to be "Succeeded or Failed" + Mar 7 03:05:15.787: INFO: Pod "busybox-user-65534-df1b011f-f1b9-4b6c-a84b-ea212028cdf5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.168738ms + Mar 7 03:05:17.791: INFO: Pod "busybox-user-65534-df1b011f-f1b9-4b6c-a84b-ea212028cdf5": Phase="Running", Reason="", readiness=false. Elapsed: 2.005885537s + Mar 7 03:05:19.791: INFO: Pod "busybox-user-65534-df1b011f-f1b9-4b6c-a84b-ea212028cdf5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.00548824s + Mar 7 03:05:19.791: INFO: Pod "busybox-user-65534-df1b011f-f1b9-4b6c-a84b-ea212028cdf5" satisfied condition "Succeeded or Failed" + [AfterEach] [sig-node] Security Context + test/e2e/framework/framework.go:187 + Mar 7 03:05:19.791: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + STEP: Destroying namespace "security-context-test-5136" for this suite. 
03/07/23 03:05:19.794 + << End Captured GinkgoWriter Output +------------------------------ +SSS +------------------------------ +[sig-storage] Downward API volume + should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/common/storage/downwardapi_volume.go:83 +[BeforeEach] [sig-storage] Downward API volume + test/e2e/framework/framework.go:186 +STEP: Creating a kubernetes client 03/07/23 03:05:19.798 +Mar 7 03:05:19.799: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 +STEP: Building a namespace api object, basename downward-api 03/07/23 03:05:19.799 +STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 03:05:19.825 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 03:05:19.827 +[BeforeEach] [sig-storage] Downward API volume + test/e2e/common/storage/downwardapi_volume.go:43 +[It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/common/storage/downwardapi_volume.go:83 +STEP: Creating a pod to test downward API volume plugin 03/07/23 03:05:19.829 +Mar 7 03:05:19.836: INFO: Waiting up to 5m0s for pod "downwardapi-volume-fecf863f-dbd3-4535-ab61-e22d1590d1f5" in namespace "downward-api-4519" to be "Succeeded or Failed" +Mar 7 03:05:19.838: INFO: Pod "downwardapi-volume-fecf863f-dbd3-4535-ab61-e22d1590d1f5": Phase="Pending", Reason="", readiness=false. Elapsed: 1.891898ms +Mar 7 03:05:21.842: INFO: Pod "downwardapi-volume-fecf863f-dbd3-4535-ab61-e22d1590d1f5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006120281s +Mar 7 03:05:23.840: INFO: Pod "downwardapi-volume-fecf863f-dbd3-4535-ab61-e22d1590d1f5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.004157006s +STEP: Saw pod success 03/07/23 03:05:23.84 +Mar 7 03:05:23.840: INFO: Pod "downwardapi-volume-fecf863f-dbd3-4535-ab61-e22d1590d1f5" satisfied condition "Succeeded or Failed" +Mar 7 03:05:23.843: INFO: Trying to get logs from node node-2 pod downwardapi-volume-fecf863f-dbd3-4535-ab61-e22d1590d1f5 container client-container: +STEP: delete the pod 03/07/23 03:05:23.856 +Mar 7 03:05:23.866: INFO: Waiting for pod downwardapi-volume-fecf863f-dbd3-4535-ab61-e22d1590d1f5 to disappear +Mar 7 03:05:23.868: INFO: Pod downwardapi-volume-fecf863f-dbd3-4535-ab61-e22d1590d1f5 no longer exists +[AfterEach] [sig-storage] Downward API volume + test/e2e/framework/framework.go:187 +Mar 7 03:05:23.868: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "downward-api-4519" for this suite. 
03/07/23 03:05:23.871 +{"msg":"PASSED [sig-storage] Downward API volume should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","completed":87,"skipped":1565,"failed":0} +------------------------------ +• [4.077 seconds] +[sig-storage] Downward API volume +test/e2e/common/storage/framework.go:23 + should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/common/storage/downwardapi_volume.go:83 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-storage] Downward API volume + test/e2e/framework/framework.go:186 + STEP: Creating a kubernetes client 03/07/23 03:05:19.798 + Mar 7 03:05:19.799: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 + STEP: Building a namespace api object, basename downward-api 03/07/23 03:05:19.799 + STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 03:05:19.825 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 03:05:19.827 + [BeforeEach] [sig-storage] Downward API volume + test/e2e/common/storage/downwardapi_volume.go:43 + [It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/common/storage/downwardapi_volume.go:83 + STEP: Creating a pod to test downward API volume plugin 03/07/23 03:05:19.829 + Mar 7 03:05:19.836: INFO: Waiting up to 5m0s for pod "downwardapi-volume-fecf863f-dbd3-4535-ab61-e22d1590d1f5" in namespace "downward-api-4519" to be "Succeeded or Failed" + Mar 7 03:05:19.838: INFO: Pod "downwardapi-volume-fecf863f-dbd3-4535-ab61-e22d1590d1f5": Phase="Pending", Reason="", readiness=false. Elapsed: 1.891898ms + Mar 7 03:05:21.842: INFO: Pod "downwardapi-volume-fecf863f-dbd3-4535-ab61-e22d1590d1f5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006120281s + Mar 7 03:05:23.840: INFO: Pod "downwardapi-volume-fecf863f-dbd3-4535-ab61-e22d1590d1f5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.004157006s + STEP: Saw pod success 03/07/23 03:05:23.84 + Mar 7 03:05:23.840: INFO: Pod "downwardapi-volume-fecf863f-dbd3-4535-ab61-e22d1590d1f5" satisfied condition "Succeeded or Failed" + Mar 7 03:05:23.843: INFO: Trying to get logs from node node-2 pod downwardapi-volume-fecf863f-dbd3-4535-ab61-e22d1590d1f5 container client-container: + STEP: delete the pod 03/07/23 03:05:23.856 + Mar 7 03:05:23.866: INFO: Waiting for pod downwardapi-volume-fecf863f-dbd3-4535-ab61-e22d1590d1f5 to disappear + Mar 7 03:05:23.868: INFO: Pod downwardapi-volume-fecf863f-dbd3-4535-ab61-e22d1590d1f5 no longer exists + [AfterEach] [sig-storage] Downward API volume + test/e2e/framework/framework.go:187 + Mar 7 03:05:23.868: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + STEP: Destroying namespace "downward-api-4519" for this suite. 
03/07/23 03:05:23.871 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-scheduling] SchedulerPredicates [Serial] + validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance] + test/e2e/scheduling/predicates.go:699 +[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] + test/e2e/framework/framework.go:186 +STEP: Creating a kubernetes client 03/07/23 03:05:23.877 +Mar 7 03:05:23.877: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 +STEP: Building a namespace api object, basename sched-pred 03/07/23 03:05:23.878 +STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 03:05:23.888 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 03:05:23.89 +[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] + test/e2e/scheduling/predicates.go:92 +Mar 7 03:05:23.892: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready +Mar 7 03:05:23.898: INFO: Waiting for terminating namespaces to be deleted... +Mar 7 03:05:23.900: INFO: +Logging pods the apiserver thinks is on node bootstrap before test +Mar 7 03:05:23.914: INFO: apiserver-proxy-bootstrap from kube-system started at 2023-03-07 00:42:52 +0000 UTC (1 container statuses recorded) +Mar 7 03:05:23.914: INFO: Container nginx ready: true, restart count 0 +Mar 7 03:05:23.914: INFO: backup-747d8c577b-wdcvl from kube-system started at 2023-03-07 00:44:32 +0000 UTC (1 container statuses recorded) +Mar 7 03:05:23.914: INFO: Container backup ready: true, restart count 0 +Mar 7 03:05:23.914: INFO: backup-replication-wkdpp-lt4dt from kube-system started at 2023-03-07 00:47:50 +0000 UTC (1 container statuses recorded) +Mar 7 03:05:23.914: INFO: Container backup-replication ready: false, restart count 0 +Mar 7 03:05:23.914: INFO: calico-kube-controllers-59685599d8-pvn74 from kube-system started at 2023-03-07 00:44:32 +0000 UTC (1 container statuses recorded) +Mar 7 03:05:23.914: INFO: Container calico-kube-controllers ready: true, restart count 0 +Mar 7 03:05:23.914: INFO: calico-node-mlncm from kube-system started at 2023-03-07 02:23:53 +0000 UTC (1 container statuses recorded) +Mar 7 03:05:23.914: INFO: Container calico-node ready: true, restart count 0 +Mar 7 03:05:23.914: INFO: coredns-5d7b997fcf-2j4jw from kube-system started at 2023-03-07 02:57:39 +0000 UTC (1 container statuses recorded) +Mar 7 03:05:23.914: INFO: Container coredns ready: true, restart count 0 +Mar 7 03:05:23.914: INFO: etcd-bootstrap from kube-system started at 2023-03-07 00:43:13 +0000 UTC (1 container statuses recorded) +Mar 7 03:05:23.914: INFO: Container etcd ready: true, restart count 0 +Mar 7 03:05:23.914: INFO: kube-apiserver-bootstrap from kube-system started at 2023-03-07 00:43:25 +0000 UTC (1 container statuses recorded) +Mar 7 03:05:23.914: INFO: Container kube-apiserver ready: true, restart count 0 +Mar 7 03:05:23.914: INFO: kube-controller-manager-bootstrap from kube-system started at 2023-03-07 00:43:33 +0000 UTC (1 container statuses recorded) +Mar 7 03:05:23.914: INFO: Container kube-controller-manager ready: true, restart count 4 +Mar 7 03:05:23.914: INFO: kube-proxy-nlf5t from kube-system started at 2023-03-07 02:23:30 +0000 UTC (1 container statuses recorded) +Mar 7 03:05:23.914: INFO: Container kube-proxy ready: true, restart count 0 +Mar 7 03:05:23.914: INFO: kube-scheduler-bootstrap from kube-system started at 2023-03-07 00:43:34 +0000 UTC (1 
container statuses recorded) +Mar 7 03:05:23.914: INFO: Container kube-scheduler ready: true, restart count 3 +Mar 7 03:05:23.914: INFO: metalk8s-operator-controller-manager-7d4764b947-crj2f from kube-system started at 2023-03-07 00:44:32 +0000 UTC (1 container statuses recorded) +Mar 7 03:05:23.914: INFO: Container manager ready: true, restart count 5 +Mar 7 03:05:23.914: INFO: repositories-bootstrap from kube-system started at 2023-03-07 02:07:15 +0000 UTC (1 container statuses recorded) +Mar 7 03:05:23.914: INFO: Container repositories ready: true, restart count 1 +Mar 7 03:05:23.914: INFO: salt-master-bootstrap from kube-system started at 2023-03-07 00:42:29 +0000 UTC (2 container statuses recorded) +Mar 7 03:05:23.914: INFO: Container salt-api ready: true, restart count 0 +Mar 7 03:05:23.914: INFO: Container salt-master ready: true, restart count 0 +Mar 7 03:05:23.915: INFO: storage-operator-78f5dcc84f-jwnzl from kube-system started at 2023-03-07 00:45:28 +0000 UTC (1 container statuses recorded) +Mar 7 03:05:23.915: INFO: Container manager ready: true, restart count 4 +Mar 7 03:05:23.915: INFO: dex-57f9db7c4-hbrhr from metalk8s-auth started at 2023-03-07 02:13:23 +0000 UTC (1 container statuses recorded) +Mar 7 03:05:23.915: INFO: Container dex ready: true, restart count 0 +Mar 7 03:05:23.915: INFO: dex-57f9db7c4-z6gh6 from metalk8s-auth started at 2023-03-07 02:13:23 +0000 UTC (1 container statuses recorded) +Mar 7 03:05:23.915: INFO: Container dex ready: true, restart count 0 +Mar 7 03:05:23.915: INFO: ingress-control-plane-managed-vip-n2qb6 from metalk8s-ingress started at 2023-03-07 02:05:37 +0000 UTC (1 container statuses recorded) +Mar 7 03:05:23.915: INFO: Container keepalived ready: true, restart count 0 +Mar 7 03:05:23.915: INFO: ingress-nginx-control-plane-controller-j9hsf from metalk8s-ingress started at 2023-03-07 00:45:27 +0000 UTC (1 container statuses recorded) +Mar 7 03:05:23.915: INFO: Container controller ready: true, restart count 0 +Mar 7 03:05:23.915: INFO: ingress-nginx-controller-vjnvw from metalk8s-ingress started at 2023-03-07 02:10:07 +0000 UTC (1 container statuses recorded) +Mar 7 03:05:23.915: INFO: Container controller ready: true, restart count 0 +Mar 7 03:05:23.915: INFO: ingress-nginx-defaultbackend-75c64bd745-65gwj from metalk8s-ingress started at 2023-03-07 00:45:24 +0000 UTC (1 container statuses recorded) +Mar 7 03:05:23.915: INFO: Container ingress-nginx-default-backend ready: true, restart count 0 +Mar 7 03:05:23.915: INFO: fluent-bit-dzhms from metalk8s-logging started at 2023-03-07 00:45:38 +0000 UTC (1 container statuses recorded) +Mar 7 03:05:23.915: INFO: Container fluent-bit ready: true, restart count 0 +Mar 7 03:05:23.915: INFO: metalk8s-alert-logger-84f87c86d-hflm5 from metalk8s-monitoring started at 2023-03-07 00:45:09 +0000 UTC (1 container statuses recorded) +Mar 7 03:05:23.915: INFO: Container metalk8s-alert-logger ready: true, restart count 0 +Mar 7 03:05:23.915: INFO: prometheus-adapter-6696954b59-qrxtn from metalk8s-monitoring started at 2023-03-07 00:45:34 +0000 UTC (1 container statuses recorded) +Mar 7 03:05:23.915: INFO: Container prometheus-adapter ready: true, restart count 0 +Mar 7 03:05:23.915: INFO: prometheus-operator-kube-state-metrics-f7d5dc499-t4szw from metalk8s-monitoring started at 2023-03-07 00:45:19 +0000 UTC (1 container statuses recorded) +Mar 7 03:05:23.915: INFO: Container kube-state-metrics ready: true, restart count 0 +Mar 7 03:05:23.915: INFO: prometheus-operator-operator-864bc5b5d-8m6lq from 
metalk8s-monitoring started at 2023-03-07 00:45:19 +0000 UTC (1 container statuses recorded) +Mar 7 03:05:23.915: INFO: Container prometheus-operator ready: true, restart count 0 +Mar 7 03:05:23.915: INFO: prometheus-operator-prometheus-node-exporter-sl4bq from metalk8s-monitoring started at 2023-03-07 00:45:18 +0000 UTC (1 container statuses recorded) +Mar 7 03:05:23.915: INFO: Container node-exporter ready: true, restart count 0 +Mar 7 03:05:23.915: INFO: thanos-query-6b9dc579dd-ctlrl from metalk8s-monitoring started at 2023-03-07 00:45:22 +0000 UTC (1 container statuses recorded) +Mar 7 03:05:23.915: INFO: Container thanos-query ready: true, restart count 0 +Mar 7 03:05:23.915: INFO: metalk8s-ui-766c8b96cd-8cxcs from metalk8s-ui started at 2023-03-07 00:45:30 +0000 UTC (1 container statuses recorded) +Mar 7 03:05:23.915: INFO: Container metalk8s-ui ready: true, restart count 0 +Mar 7 03:05:23.915: INFO: metalk8s-ui-766c8b96cd-tsx5v from metalk8s-ui started at 2023-03-07 00:45:30 +0000 UTC (1 container statuses recorded) +Mar 7 03:05:23.915: INFO: Container metalk8s-ui ready: true, restart count 0 +Mar 7 03:05:23.915: INFO: sonobuoy-systemd-logs-daemon-set-ca72222986ab487c-t8mkz from sonobuoy started at 2023-03-07 02:25:01 +0000 UTC (2 container statuses recorded) +Mar 7 03:05:23.915: INFO: Container sonobuoy-worker ready: true, restart count 0 +Mar 7 03:05:23.915: INFO: Container systemd-logs ready: true, restart count 0 +Mar 7 03:05:23.915: INFO: +Logging pods the apiserver thinks is on node node-1 before test +Mar 7 03:05:23.929: INFO: apiserver-proxy-node-1 from kube-system started at 2023-03-07 00:58:52 +0000 UTC (1 container statuses recorded) +Mar 7 03:05:23.929: INFO: Container nginx ready: true, restart count 0 +Mar 7 03:05:23.929: INFO: calico-node-fvlp2 from kube-system started at 2023-03-07 02:23:42 +0000 UTC (1 container statuses recorded) +Mar 7 03:05:23.929: INFO: Container calico-node ready: true, restart count 0 +Mar 7 03:05:23.929: INFO: coredns-5d7b997fcf-z25jb from kube-system started at 2023-03-07 02:09:04 +0000 UTC (1 container statuses recorded) +Mar 7 03:05:23.929: INFO: Container coredns ready: true, restart count 0 +Mar 7 03:05:23.929: INFO: etcd-node-1 from kube-system started at 2023-03-07 00:59:16 +0000 UTC (1 container statuses recorded) +Mar 7 03:05:23.929: INFO: Container etcd ready: true, restart count 1 +Mar 7 03:05:23.929: INFO: kube-apiserver-node-1 from kube-system started at 2023-03-07 01:00:05 +0000 UTC (1 container statuses recorded) +Mar 7 03:05:23.929: INFO: Container kube-apiserver ready: true, restart count 0 +Mar 7 03:05:23.929: INFO: kube-controller-manager-node-1 from kube-system started at 2023-03-07 01:00:17 +0000 UTC (1 container statuses recorded) +Mar 7 03:05:23.929: INFO: Container kube-controller-manager ready: true, restart count 2 +Mar 7 03:05:23.929: INFO: kube-proxy-vpgsc from kube-system started at 2023-03-07 02:23:27 +0000 UTC (1 container statuses recorded) +Mar 7 03:05:23.929: INFO: Container kube-proxy ready: true, restart count 0 +Mar 7 03:05:23.929: INFO: kube-scheduler-node-1 from kube-system started at 2023-03-07 01:00:18 +0000 UTC (1 container statuses recorded) +Mar 7 03:05:23.929: INFO: Container kube-scheduler ready: true, restart count 0 +Mar 7 03:05:23.929: INFO: ingress-control-plane-managed-vip-w2cb9 from metalk8s-ingress started at 2023-03-07 02:05:37 +0000 UTC (1 container statuses recorded) +Mar 7 03:05:23.929: INFO: Container keepalived ready: true, restart count 0 +Mar 7 03:05:23.929: INFO: 
ingress-nginx-control-plane-controller-ck4wk from metalk8s-ingress started at 2023-03-07 00:59:58 +0000 UTC (1 container statuses recorded) +Mar 7 03:05:23.929: INFO: Container controller ready: true, restart count 0 +Mar 7 03:05:23.929: INFO: ingress-nginx-controller-9b2bj from metalk8s-ingress started at 2023-03-07 02:10:40 +0000 UTC (1 container statuses recorded) +Mar 7 03:05:23.929: INFO: Container controller ready: true, restart count 0 +Mar 7 03:05:23.929: INFO: fluent-bit-4nw7s from metalk8s-logging started at 2023-03-07 00:59:58 +0000 UTC (1 container statuses recorded) +Mar 7 03:05:23.929: INFO: Container fluent-bit ready: true, restart count 0 +Mar 7 03:05:23.929: INFO: loki-0 from metalk8s-logging started at 2023-03-07 01:11:45 +0000 UTC (1 container statuses recorded) +Mar 7 03:05:23.929: INFO: Container single-binary ready: true, restart count 0 +Mar 7 03:05:23.929: INFO: alertmanager-prometheus-operator-alertmanager-0 from metalk8s-monitoring started at 2023-03-07 01:11:00 +0000 UTC (2 container statuses recorded) +Mar 7 03:05:23.929: INFO: Container alertmanager ready: true, restart count 0 +Mar 7 03:05:23.929: INFO: Container config-reloader ready: true, restart count 0 +Mar 7 03:05:23.929: INFO: prometheus-operator-grafana-74d86d5965-nj6pq from metalk8s-monitoring started at 2023-03-07 02:57:39 +0000 UTC (3 container statuses recorded) +Mar 7 03:05:23.929: INFO: Container grafana ready: true, restart count 0 +Mar 7 03:05:23.929: INFO: Container grafana-sc-dashboard ready: true, restart count 0 +Mar 7 03:05:23.929: INFO: Container grafana-sc-datasources ready: true, restart count 0 +Mar 7 03:05:23.929: INFO: prometheus-operator-prometheus-node-exporter-4plkr from metalk8s-monitoring started at 2023-03-07 00:58:56 +0000 UTC (1 container statuses recorded) +Mar 7 03:05:23.929: INFO: Container node-exporter ready: true, restart count 0 +Mar 7 03:05:23.929: INFO: prometheus-prometheus-operator-prometheus-0 from metalk8s-monitoring started at 2023-03-07 01:11:10 +0000 UTC (3 container statuses recorded) +Mar 7 03:05:23.929: INFO: Container config-reloader ready: true, restart count 0 +Mar 7 03:05:23.929: INFO: Container prometheus ready: true, restart count 0 +Mar 7 03:05:23.929: INFO: Container thanos-sidecar ready: true, restart count 0 +Mar 7 03:05:23.929: INFO: sonobuoy-systemd-logs-daemon-set-ca72222986ab487c-gktsb from sonobuoy started at 2023-03-07 02:25:01 +0000 UTC (2 container statuses recorded) +Mar 7 03:05:23.929: INFO: Container sonobuoy-worker ready: true, restart count 0 +Mar 7 03:05:23.929: INFO: Container systemd-logs ready: true, restart count 0 +Mar 7 03:05:23.929: INFO: +Logging pods the apiserver thinks is on node node-2 before test +Mar 7 03:05:23.938: INFO: apiserver-proxy-node-2 from kube-system started at 2023-03-07 01:07:13 +0000 UTC (1 container statuses recorded) +Mar 7 03:05:23.938: INFO: Container nginx ready: true, restart count 0 +Mar 7 03:05:23.938: INFO: calico-node-r7qqp from kube-system started at 2023-03-07 02:23:32 +0000 UTC (1 container statuses recorded) +Mar 7 03:05:23.938: INFO: Container calico-node ready: true, restart count 0 +Mar 7 03:05:23.938: INFO: etcd-node-2 from kube-system started at 2023-03-07 01:08:10 +0000 UTC (1 container statuses recorded) +Mar 7 03:05:23.938: INFO: Container etcd ready: true, restart count 2 +Mar 7 03:05:23.938: INFO: kube-apiserver-node-2 from kube-system started at 2023-03-07 01:09:12 +0000 UTC (1 container statuses recorded) +Mar 7 03:05:23.938: INFO: Container kube-apiserver ready: true, restart 
count 0 +Mar 7 03:05:23.938: INFO: kube-controller-manager-node-2 from kube-system started at 2023-03-07 01:09:23 +0000 UTC (1 container statuses recorded) +Mar 7 03:05:23.938: INFO: Container kube-controller-manager ready: true, restart count 1 +Mar 7 03:05:23.938: INFO: kube-proxy-wsc86 from kube-system started at 2023-03-07 02:23:33 +0000 UTC (1 container statuses recorded) +Mar 7 03:05:23.938: INFO: Container kube-proxy ready: true, restart count 0 +Mar 7 03:05:23.938: INFO: kube-scheduler-node-2 from kube-system started at 2023-03-07 01:09:23 +0000 UTC (1 container statuses recorded) +Mar 7 03:05:23.938: INFO: Container kube-scheduler ready: true, restart count 0 +Mar 7 03:05:23.938: INFO: ingress-control-plane-managed-vip-qxwrw from metalk8s-ingress started at 2023-03-07 02:58:05 +0000 UTC (1 container statuses recorded) +Mar 7 03:05:23.938: INFO: Container keepalived ready: true, restart count 0 +Mar 7 03:05:23.938: INFO: ingress-nginx-control-plane-controller-crbv2 from metalk8s-ingress started at 2023-03-07 02:58:05 +0000 UTC (1 container statuses recorded) +Mar 7 03:05:23.938: INFO: Container controller ready: true, restart count 0 +Mar 7 03:05:23.938: INFO: ingress-nginx-controller-bcd78 from metalk8s-ingress started at 2023-03-07 02:58:05 +0000 UTC (1 container statuses recorded) +Mar 7 03:05:23.938: INFO: Container controller ready: true, restart count 0 +Mar 7 03:05:23.938: INFO: fluent-bit-tn4sc from metalk8s-logging started at 2023-03-07 02:58:10 +0000 UTC (1 container statuses recorded) +Mar 7 03:05:23.938: INFO: Container fluent-bit ready: true, restart count 0 +Mar 7 03:05:23.938: INFO: prometheus-operator-prometheus-node-exporter-x9hfs from metalk8s-monitoring started at 2023-03-07 02:58:05 +0000 UTC (1 container statuses recorded) +Mar 7 03:05:23.938: INFO: Container node-exporter ready: true, restart count 0 +Mar 7 03:05:23.938: INFO: busybox-user-65534-df1b011f-f1b9-4b6c-a84b-ea212028cdf5 from security-context-test-5136 started at 2023-03-07 03:05:15 +0000 UTC (1 container statuses recorded) +Mar 7 03:05:23.938: INFO: Container busybox-user-65534-df1b011f-f1b9-4b6c-a84b-ea212028cdf5 ready: false, restart count 0 +Mar 7 03:05:23.938: INFO: sonobuoy from sonobuoy started at 2023-03-07 02:24:57 +0000 UTC (1 container statuses recorded) +Mar 7 03:05:23.938: INFO: Container kube-sonobuoy ready: true, restart count 0 +Mar 7 03:05:23.938: INFO: sonobuoy-e2e-job-441ced38a9a5443b from sonobuoy started at 2023-03-07 02:25:01 +0000 UTC (2 container statuses recorded) +Mar 7 03:05:23.938: INFO: Container e2e ready: true, restart count 0 +Mar 7 03:05:23.938: INFO: Container sonobuoy-worker ready: true, restart count 0 +Mar 7 03:05:23.938: INFO: sonobuoy-systemd-logs-daemon-set-ca72222986ab487c-hbmvq from sonobuoy started at 2023-03-07 02:25:01 +0000 UTC (2 container statuses recorded) +Mar 7 03:05:23.938: INFO: Container sonobuoy-worker ready: true, restart count 0 +Mar 7 03:05:23.938: INFO: Container systemd-logs ready: true, restart count 0 +[It] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance] + test/e2e/scheduling/predicates.go:699 +STEP: Trying to launch a pod without a label to get a node which can launch it. 03/07/23 03:05:23.938 +Mar 7 03:05:23.944: INFO: Waiting up to 1m0s for pod "without-label" in namespace "sched-pred-5074" to be "running" +Mar 7 03:05:23.949: INFO: Pod "without-label": Phase="Pending", Reason="", readiness=false. 
Elapsed: 4.570869ms +Mar 7 03:05:25.952: INFO: Pod "without-label": Phase="Running", Reason="", readiness=true. Elapsed: 2.007587541s +Mar 7 03:05:25.952: INFO: Pod "without-label" satisfied condition "running" +STEP: Explicitly delete pod here to free the resource it takes. 03/07/23 03:05:25.954 +STEP: Trying to apply a random label on the found node. 03/07/23 03:05:25.966 +STEP: verifying the node has the label kubernetes.io/e2e-5ff7da41-cebc-4430-84f5-a8a399faf439 95 03/07/23 03:05:25.978 +STEP: Trying to create a pod(pod4) with hostport 54322 and hostIP 0.0.0.0(empty string here) and expect scheduled 03/07/23 03:05:25.981 +Mar 7 03:05:25.986: INFO: Waiting up to 5m0s for pod "pod4" in namespace "sched-pred-5074" to be "not pending" +Mar 7 03:05:25.997: INFO: Pod "pod4": Phase="Pending", Reason="", readiness=false. Elapsed: 11.631022ms +Mar 7 03:05:28.001: INFO: Pod "pod4": Phase="Running", Reason="", readiness=true. Elapsed: 2.015236144s +Mar 7 03:05:28.001: INFO: Pod "pod4" satisfied condition "not pending" +STEP: Trying to create another pod(pod5) with hostport 54322 but hostIP 192.168.1.102 on the node which pod4 resides and expect not scheduled 03/07/23 03:05:28.001 +Mar 7 03:05:28.005: INFO: Waiting up to 5m0s for pod "pod5" in namespace "sched-pred-5074" to be "not pending" +Mar 7 03:05:28.011: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 5.621288ms +Mar 7 03:05:30.015: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009233945s +Mar 7 03:05:32.016: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 4.010381726s +Mar 7 03:05:34.015: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 6.009156423s +Mar 7 03:05:36.015: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 8.009630735s +Mar 7 03:05:38.015: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 10.009520682s +Mar 7 03:05:40.015: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 12.009705362s +Mar 7 03:05:42.018: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 14.012190364s +Mar 7 03:05:44.015: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 16.009235311s +Mar 7 03:05:46.015: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 18.00931938s +Mar 7 03:05:48.017: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 20.010945936s +Mar 7 03:05:50.015: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 22.008996073s +Mar 7 03:05:52.015: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 24.009385474s +Mar 7 03:05:54.015: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 26.009720617s +Mar 7 03:05:56.014: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 28.008621925s +Mar 7 03:05:58.016: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 30.010151223s +Mar 7 03:06:00.015: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 32.009855311s +Mar 7 03:06:02.019: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 34.013903972s +Mar 7 03:06:04.015: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 36.00947765s +Mar 7 03:06:06.015: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 38.009362086s +Mar 7 03:06:08.015: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. 
Elapsed: 40.009286912s +Mar 7 03:06:10.014: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 42.008891593s +Mar 7 03:06:12.020: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 44.014270537s +Mar 7 03:06:14.014: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 46.008558973s +Mar 7 03:06:16.016: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 48.010624888s +Mar 7 03:06:18.015: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 50.009599552s +Mar 7 03:06:20.015: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 52.009626577s +Mar 7 03:06:22.022: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 54.016266647s +Mar 7 03:06:24.015: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 56.009598472s +Mar 7 03:06:26.014: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 58.008758755s +Mar 7 03:06:28.015: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 1m0.009571016s +Mar 7 03:06:30.014: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 1m2.008491763s +Mar 7 03:06:32.016: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 1m4.010549321s +Mar 7 03:06:34.015: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 1m6.009562134s +Mar 7 03:06:36.015: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 1m8.009370869s +Mar 7 03:06:38.016: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 1m10.010294776s +Mar 7 03:06:40.015: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 1m12.009820926s +Mar 7 03:06:42.016: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 1m14.01082929s +Mar 7 03:06:44.014: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 1m16.008671673s +Mar 7 03:06:46.016: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 1m18.010311707s +Mar 7 03:06:48.015: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 1m20.009818078s +Mar 7 03:06:50.015: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 1m22.009807814s +Mar 7 03:06:52.016: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 1m24.010460399s +Mar 7 03:06:54.015: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 1m26.008951289s +Mar 7 03:06:56.014: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 1m28.008850247s +Mar 7 03:06:58.015: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 1m30.009471121s +Mar 7 03:07:00.015: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 1m32.008979337s +Mar 7 03:07:02.016: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 1m34.01050326s +Mar 7 03:07:04.014: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 1m36.008138713s +Mar 7 03:07:06.015: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 1m38.009290666s +Mar 7 03:07:08.016: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 1m40.010639858s +Mar 7 03:07:10.015: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 1m42.009017657s +Mar 7 03:07:12.015: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. 
Elapsed: 1m44.009625723s +Mar 7 03:07:14.015: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 1m46.009251306s +Mar 7 03:07:16.015: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 1m48.009821084s +Mar 7 03:07:18.015: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 1m50.009162647s +Mar 7 03:07:20.015: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 1m52.009580834s +Mar 7 03:07:22.019: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 1m54.013222233s +Mar 7 03:07:24.015: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 1m56.009489937s +Mar 7 03:07:26.016: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 1m58.010209305s +Mar 7 03:07:28.016: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 2m0.010186933s +Mar 7 03:07:30.015: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 2m2.00972277s +Mar 7 03:07:32.016: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 2m4.010288629s +Mar 7 03:07:34.015: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 2m6.009015235s +Mar 7 03:07:36.015: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 2m8.009300359s +Mar 7 03:07:38.016: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 2m10.010077814s +Mar 7 03:07:40.015: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 2m12.009095269s +Mar 7 03:07:42.020: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 2m14.01445104s +Mar 7 03:07:44.015: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 2m16.009829437s +Mar 7 03:07:46.015: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 2m18.00931247s +Mar 7 03:07:48.014: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 2m20.008610118s +Mar 7 03:07:50.015: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 2m22.009469472s +Mar 7 03:07:52.015: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 2m24.009387031s +Mar 7 03:07:54.018: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 2m26.012101344s +Mar 7 03:07:56.015: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 2m28.009925378s +Mar 7 03:07:58.015: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 2m30.009646689s +Mar 7 03:08:00.015: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 2m32.009103863s +Mar 7 03:08:02.021: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 2m34.015225673s +Mar 7 03:08:04.015: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 2m36.009811517s +Mar 7 03:08:06.015: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 2m38.009276765s +Mar 7 03:08:08.016: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 2m40.010475005s +Mar 7 03:08:10.014: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 2m42.008872478s +Mar 7 03:08:12.015: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 2m44.009660644s +Mar 7 03:08:14.014: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 2m46.008574015s +Mar 7 03:08:16.015: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2m48.009914514s +Mar 7 03:08:18.014: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 2m50.008918318s +Mar 7 03:08:20.015: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 2m52.009658945s +Mar 7 03:08:22.015: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 2m54.009763637s +Mar 7 03:08:24.014: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 2m56.008585784s +Mar 7 03:08:26.014: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 2m58.00857363s +Mar 7 03:08:28.015: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 3m0.009583883s +Mar 7 03:08:30.014: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 3m2.008731178s +Mar 7 03:08:32.015: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 3m4.009691813s +Mar 7 03:08:34.014: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 3m6.00830548s +Mar 7 03:08:36.015: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 3m8.009111881s +Mar 7 03:08:38.015: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 3m10.009499121s +Mar 7 03:08:40.014: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 3m12.00862774s +Mar 7 03:08:42.016: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 3m14.010544908s +Mar 7 03:08:44.015: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 3m16.009569139s +Mar 7 03:08:46.016: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 3m18.010372984s +Mar 7 03:08:48.017: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 3m20.011240218s +Mar 7 03:08:50.015: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 3m22.009666424s +Mar 7 03:08:52.020: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 3m24.014652521s +Mar 7 03:08:54.015: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 3m26.009357138s +Mar 7 03:08:56.015: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 3m28.009168595s +Mar 7 03:08:58.015: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 3m30.009689007s +Mar 7 03:09:00.014: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 3m32.008897389s +Mar 7 03:09:02.016: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 3m34.010902929s +Mar 7 03:09:04.022: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 3m36.016585352s +Mar 7 03:09:06.015: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 3m38.00926104s +Mar 7 03:09:08.016: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 3m40.010631181s +Mar 7 03:09:10.015: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 3m42.009732188s +Mar 7 03:09:12.016: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 3m44.010152137s +Mar 7 03:09:14.014: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 3m46.008452784s +Mar 7 03:09:16.015: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 3m48.009592783s +Mar 7 03:09:18.016: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 3m50.010036552s +Mar 7 03:09:20.015: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. 
Elapsed: 3m52.009167715s +Mar 7 03:09:22.015: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 3m54.0095213s +Mar 7 03:09:24.016: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 3m56.010002308s +Mar 7 03:09:26.015: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 3m58.009437441s +Mar 7 03:09:28.015: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 4m0.008945083s +Mar 7 03:09:30.014: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 4m2.00889321s +Mar 7 03:09:32.014: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 4m4.008893409s +Mar 7 03:09:34.015: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 4m6.009662317s +Mar 7 03:09:36.016: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 4m8.010611708s +Mar 7 03:09:38.017: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 4m10.011045582s +Mar 7 03:09:40.015: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 4m12.008971017s +Mar 7 03:09:42.014: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 4m14.008718134s +Mar 7 03:09:44.014: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 4m16.008685569s +Mar 7 03:09:46.015: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 4m18.009821536s +Mar 7 03:09:48.016: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 4m20.009961706s +Mar 7 03:09:50.015: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 4m22.009608211s +Mar 7 03:09:52.015: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 4m24.009828972s +Mar 7 03:09:54.015: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 4m26.009158059s +Mar 7 03:09:56.015: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 4m28.009102117s +Mar 7 03:09:58.016: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 4m30.01027223s +Mar 7 03:10:00.021: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 4m32.015296661s +Mar 7 03:10:02.025: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 4m34.01985982s +Mar 7 03:10:04.014: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 4m36.008859394s +Mar 7 03:10:06.015: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 4m38.009092945s +Mar 7 03:10:08.017: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 4m40.011324211s +Mar 7 03:10:10.014: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 4m42.008796759s +Mar 7 03:10:12.015: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 4m44.009180126s +Mar 7 03:10:14.015: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 4m46.009580923s +Mar 7 03:10:16.016: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 4m48.010458912s +Mar 7 03:10:18.015: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 4m50.00895907s +Mar 7 03:10:20.014: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 4m52.008813185s +Mar 7 03:10:22.015: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 4m54.009724093s +Mar 7 03:10:24.014: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. 
Elapsed: 4m56.008599648s +Mar 7 03:10:26.016: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 4m58.010694689s +Mar 7 03:10:28.016: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 5m0.010863318s +Mar 7 03:10:28.019: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 5m0.012958455s +STEP: removing the label kubernetes.io/e2e-5ff7da41-cebc-4430-84f5-a8a399faf439 off the node node-2 03/07/23 03:10:28.019 +STEP: verifying the node doesn't have the label kubernetes.io/e2e-5ff7da41-cebc-4430-84f5-a8a399faf439 03/07/23 03:10:28.029 +[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] + test/e2e/framework/framework.go:187 +Mar 7 03:10:28.033: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "sched-pred-5074" for this suite. 03/07/23 03:10:28.037 +[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] + test/e2e/scheduling/predicates.go:83 +{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]","completed":88,"skipped":1599,"failed":0} +------------------------------ +• [SLOW TEST] [304.171 seconds] +[sig-scheduling] SchedulerPredicates [Serial] +test/e2e/scheduling/framework.go:40 + validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance] + test/e2e/scheduling/predicates.go:699 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] + test/e2e/framework/framework.go:186 + STEP: Creating a kubernetes client 03/07/23 03:05:23.877 + Mar 7 03:05:23.877: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 + STEP: Building a namespace api object, basename sched-pred 03/07/23 03:05:23.878 + STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 03:05:23.888 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 03:05:23.89 + [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] + test/e2e/scheduling/predicates.go:92 + Mar 7 03:05:23.892: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready + Mar 7 03:05:23.898: INFO: Waiting for terminating namespaces to be deleted... 
+ Mar 7 03:05:23.900: INFO: + Logging pods the apiserver thinks is on node bootstrap before test + Mar 7 03:05:23.914: INFO: apiserver-proxy-bootstrap from kube-system started at 2023-03-07 00:42:52 +0000 UTC (1 container statuses recorded) + Mar 7 03:05:23.914: INFO: Container nginx ready: true, restart count 0 + Mar 7 03:05:23.914: INFO: backup-747d8c577b-wdcvl from kube-system started at 2023-03-07 00:44:32 +0000 UTC (1 container statuses recorded) + Mar 7 03:05:23.914: INFO: Container backup ready: true, restart count 0 + Mar 7 03:05:23.914: INFO: backup-replication-wkdpp-lt4dt from kube-system started at 2023-03-07 00:47:50 +0000 UTC (1 container statuses recorded) + Mar 7 03:05:23.914: INFO: Container backup-replication ready: false, restart count 0 + Mar 7 03:05:23.914: INFO: calico-kube-controllers-59685599d8-pvn74 from kube-system started at 2023-03-07 00:44:32 +0000 UTC (1 container statuses recorded) + Mar 7 03:05:23.914: INFO: Container calico-kube-controllers ready: true, restart count 0 + Mar 7 03:05:23.914: INFO: calico-node-mlncm from kube-system started at 2023-03-07 02:23:53 +0000 UTC (1 container statuses recorded) + Mar 7 03:05:23.914: INFO: Container calico-node ready: true, restart count 0 + Mar 7 03:05:23.914: INFO: coredns-5d7b997fcf-2j4jw from kube-system started at 2023-03-07 02:57:39 +0000 UTC (1 container statuses recorded) + Mar 7 03:05:23.914: INFO: Container coredns ready: true, restart count 0 + Mar 7 03:05:23.914: INFO: etcd-bootstrap from kube-system started at 2023-03-07 00:43:13 +0000 UTC (1 container statuses recorded) + Mar 7 03:05:23.914: INFO: Container etcd ready: true, restart count 0 + Mar 7 03:05:23.914: INFO: kube-apiserver-bootstrap from kube-system started at 2023-03-07 00:43:25 +0000 UTC (1 container statuses recorded) + Mar 7 03:05:23.914: INFO: Container kube-apiserver ready: true, restart count 0 + Mar 7 03:05:23.914: INFO: kube-controller-manager-bootstrap from kube-system started at 2023-03-07 00:43:33 +0000 UTC (1 container statuses recorded) + Mar 7 03:05:23.914: INFO: Container kube-controller-manager ready: true, restart count 4 + Mar 7 03:05:23.914: INFO: kube-proxy-nlf5t from kube-system started at 2023-03-07 02:23:30 +0000 UTC (1 container statuses recorded) + Mar 7 03:05:23.914: INFO: Container kube-proxy ready: true, restart count 0 + Mar 7 03:05:23.914: INFO: kube-scheduler-bootstrap from kube-system started at 2023-03-07 00:43:34 +0000 UTC (1 container statuses recorded) + Mar 7 03:05:23.914: INFO: Container kube-scheduler ready: true, restart count 3 + Mar 7 03:05:23.914: INFO: metalk8s-operator-controller-manager-7d4764b947-crj2f from kube-system started at 2023-03-07 00:44:32 +0000 UTC (1 container statuses recorded) + Mar 7 03:05:23.914: INFO: Container manager ready: true, restart count 5 + Mar 7 03:05:23.914: INFO: repositories-bootstrap from kube-system started at 2023-03-07 02:07:15 +0000 UTC (1 container statuses recorded) + Mar 7 03:05:23.914: INFO: Container repositories ready: true, restart count 1 + Mar 7 03:05:23.914: INFO: salt-master-bootstrap from kube-system started at 2023-03-07 00:42:29 +0000 UTC (2 container statuses recorded) + Mar 7 03:05:23.914: INFO: Container salt-api ready: true, restart count 0 + Mar 7 03:05:23.914: INFO: Container salt-master ready: true, restart count 0 + Mar 7 03:05:23.915: INFO: storage-operator-78f5dcc84f-jwnzl from kube-system started at 2023-03-07 00:45:28 +0000 UTC (1 container statuses recorded) + Mar 7 03:05:23.915: INFO: Container manager ready: true, restart count 4 + 
Mar 7 03:05:23.915: INFO: dex-57f9db7c4-hbrhr from metalk8s-auth started at 2023-03-07 02:13:23 +0000 UTC (1 container statuses recorded) + Mar 7 03:05:23.915: INFO: Container dex ready: true, restart count 0 + Mar 7 03:05:23.915: INFO: dex-57f9db7c4-z6gh6 from metalk8s-auth started at 2023-03-07 02:13:23 +0000 UTC (1 container statuses recorded) + Mar 7 03:05:23.915: INFO: Container dex ready: true, restart count 0 + Mar 7 03:05:23.915: INFO: ingress-control-plane-managed-vip-n2qb6 from metalk8s-ingress started at 2023-03-07 02:05:37 +0000 UTC (1 container statuses recorded) + Mar 7 03:05:23.915: INFO: Container keepalived ready: true, restart count 0 + Mar 7 03:05:23.915: INFO: ingress-nginx-control-plane-controller-j9hsf from metalk8s-ingress started at 2023-03-07 00:45:27 +0000 UTC (1 container statuses recorded) + Mar 7 03:05:23.915: INFO: Container controller ready: true, restart count 0 + Mar 7 03:05:23.915: INFO: ingress-nginx-controller-vjnvw from metalk8s-ingress started at 2023-03-07 02:10:07 +0000 UTC (1 container statuses recorded) + Mar 7 03:05:23.915: INFO: Container controller ready: true, restart count 0 + Mar 7 03:05:23.915: INFO: ingress-nginx-defaultbackend-75c64bd745-65gwj from metalk8s-ingress started at 2023-03-07 00:45:24 +0000 UTC (1 container statuses recorded) + Mar 7 03:05:23.915: INFO: Container ingress-nginx-default-backend ready: true, restart count 0 + Mar 7 03:05:23.915: INFO: fluent-bit-dzhms from metalk8s-logging started at 2023-03-07 00:45:38 +0000 UTC (1 container statuses recorded) + Mar 7 03:05:23.915: INFO: Container fluent-bit ready: true, restart count 0 + Mar 7 03:05:23.915: INFO: metalk8s-alert-logger-84f87c86d-hflm5 from metalk8s-monitoring started at 2023-03-07 00:45:09 +0000 UTC (1 container statuses recorded) + Mar 7 03:05:23.915: INFO: Container metalk8s-alert-logger ready: true, restart count 0 + Mar 7 03:05:23.915: INFO: prometheus-adapter-6696954b59-qrxtn from metalk8s-monitoring started at 2023-03-07 00:45:34 +0000 UTC (1 container statuses recorded) + Mar 7 03:05:23.915: INFO: Container prometheus-adapter ready: true, restart count 0 + Mar 7 03:05:23.915: INFO: prometheus-operator-kube-state-metrics-f7d5dc499-t4szw from metalk8s-monitoring started at 2023-03-07 00:45:19 +0000 UTC (1 container statuses recorded) + Mar 7 03:05:23.915: INFO: Container kube-state-metrics ready: true, restart count 0 + Mar 7 03:05:23.915: INFO: prometheus-operator-operator-864bc5b5d-8m6lq from metalk8s-monitoring started at 2023-03-07 00:45:19 +0000 UTC (1 container statuses recorded) + Mar 7 03:05:23.915: INFO: Container prometheus-operator ready: true, restart count 0 + Mar 7 03:05:23.915: INFO: prometheus-operator-prometheus-node-exporter-sl4bq from metalk8s-monitoring started at 2023-03-07 00:45:18 +0000 UTC (1 container statuses recorded) + Mar 7 03:05:23.915: INFO: Container node-exporter ready: true, restart count 0 + Mar 7 03:05:23.915: INFO: thanos-query-6b9dc579dd-ctlrl from metalk8s-monitoring started at 2023-03-07 00:45:22 +0000 UTC (1 container statuses recorded) + Mar 7 03:05:23.915: INFO: Container thanos-query ready: true, restart count 0 + Mar 7 03:05:23.915: INFO: metalk8s-ui-766c8b96cd-8cxcs from metalk8s-ui started at 2023-03-07 00:45:30 +0000 UTC (1 container statuses recorded) + Mar 7 03:05:23.915: INFO: Container metalk8s-ui ready: true, restart count 0 + Mar 7 03:05:23.915: INFO: metalk8s-ui-766c8b96cd-tsx5v from metalk8s-ui started at 2023-03-07 00:45:30 +0000 UTC (1 container statuses recorded) + Mar 7 03:05:23.915: INFO: Container 
metalk8s-ui ready: true, restart count 0 + Mar 7 03:05:23.915: INFO: sonobuoy-systemd-logs-daemon-set-ca72222986ab487c-t8mkz from sonobuoy started at 2023-03-07 02:25:01 +0000 UTC (2 container statuses recorded) + Mar 7 03:05:23.915: INFO: Container sonobuoy-worker ready: true, restart count 0 + Mar 7 03:05:23.915: INFO: Container systemd-logs ready: true, restart count 0 + Mar 7 03:05:23.915: INFO: + Logging pods the apiserver thinks is on node node-1 before test + Mar 7 03:05:23.929: INFO: apiserver-proxy-node-1 from kube-system started at 2023-03-07 00:58:52 +0000 UTC (1 container statuses recorded) + Mar 7 03:05:23.929: INFO: Container nginx ready: true, restart count 0 + Mar 7 03:05:23.929: INFO: calico-node-fvlp2 from kube-system started at 2023-03-07 02:23:42 +0000 UTC (1 container statuses recorded) + Mar 7 03:05:23.929: INFO: Container calico-node ready: true, restart count 0 + Mar 7 03:05:23.929: INFO: coredns-5d7b997fcf-z25jb from kube-system started at 2023-03-07 02:09:04 +0000 UTC (1 container statuses recorded) + Mar 7 03:05:23.929: INFO: Container coredns ready: true, restart count 0 + Mar 7 03:05:23.929: INFO: etcd-node-1 from kube-system started at 2023-03-07 00:59:16 +0000 UTC (1 container statuses recorded) + Mar 7 03:05:23.929: INFO: Container etcd ready: true, restart count 1 + Mar 7 03:05:23.929: INFO: kube-apiserver-node-1 from kube-system started at 2023-03-07 01:00:05 +0000 UTC (1 container statuses recorded) + Mar 7 03:05:23.929: INFO: Container kube-apiserver ready: true, restart count 0 + Mar 7 03:05:23.929: INFO: kube-controller-manager-node-1 from kube-system started at 2023-03-07 01:00:17 +0000 UTC (1 container statuses recorded) + Mar 7 03:05:23.929: INFO: Container kube-controller-manager ready: true, restart count 2 + Mar 7 03:05:23.929: INFO: kube-proxy-vpgsc from kube-system started at 2023-03-07 02:23:27 +0000 UTC (1 container statuses recorded) + Mar 7 03:05:23.929: INFO: Container kube-proxy ready: true, restart count 0 + Mar 7 03:05:23.929: INFO: kube-scheduler-node-1 from kube-system started at 2023-03-07 01:00:18 +0000 UTC (1 container statuses recorded) + Mar 7 03:05:23.929: INFO: Container kube-scheduler ready: true, restart count 0 + Mar 7 03:05:23.929: INFO: ingress-control-plane-managed-vip-w2cb9 from metalk8s-ingress started at 2023-03-07 02:05:37 +0000 UTC (1 container statuses recorded) + Mar 7 03:05:23.929: INFO: Container keepalived ready: true, restart count 0 + Mar 7 03:05:23.929: INFO: ingress-nginx-control-plane-controller-ck4wk from metalk8s-ingress started at 2023-03-07 00:59:58 +0000 UTC (1 container statuses recorded) + Mar 7 03:05:23.929: INFO: Container controller ready: true, restart count 0 + Mar 7 03:05:23.929: INFO: ingress-nginx-controller-9b2bj from metalk8s-ingress started at 2023-03-07 02:10:40 +0000 UTC (1 container statuses recorded) + Mar 7 03:05:23.929: INFO: Container controller ready: true, restart count 0 + Mar 7 03:05:23.929: INFO: fluent-bit-4nw7s from metalk8s-logging started at 2023-03-07 00:59:58 +0000 UTC (1 container statuses recorded) + Mar 7 03:05:23.929: INFO: Container fluent-bit ready: true, restart count 0 + Mar 7 03:05:23.929: INFO: loki-0 from metalk8s-logging started at 2023-03-07 01:11:45 +0000 UTC (1 container statuses recorded) + Mar 7 03:05:23.929: INFO: Container single-binary ready: true, restart count 0 + Mar 7 03:05:23.929: INFO: alertmanager-prometheus-operator-alertmanager-0 from metalk8s-monitoring started at 2023-03-07 01:11:00 +0000 UTC (2 container statuses recorded) + Mar 7 
03:05:23.929: INFO: Container alertmanager ready: true, restart count 0 + Mar 7 03:05:23.929: INFO: Container config-reloader ready: true, restart count 0 + Mar 7 03:05:23.929: INFO: prometheus-operator-grafana-74d86d5965-nj6pq from metalk8s-monitoring started at 2023-03-07 02:57:39 +0000 UTC (3 container statuses recorded) + Mar 7 03:05:23.929: INFO: Container grafana ready: true, restart count 0 + Mar 7 03:05:23.929: INFO: Container grafana-sc-dashboard ready: true, restart count 0 + Mar 7 03:05:23.929: INFO: Container grafana-sc-datasources ready: true, restart count 0 + Mar 7 03:05:23.929: INFO: prometheus-operator-prometheus-node-exporter-4plkr from metalk8s-monitoring started at 2023-03-07 00:58:56 +0000 UTC (1 container statuses recorded) + Mar 7 03:05:23.929: INFO: Container node-exporter ready: true, restart count 0 + Mar 7 03:05:23.929: INFO: prometheus-prometheus-operator-prometheus-0 from metalk8s-monitoring started at 2023-03-07 01:11:10 +0000 UTC (3 container statuses recorded) + Mar 7 03:05:23.929: INFO: Container config-reloader ready: true, restart count 0 + Mar 7 03:05:23.929: INFO: Container prometheus ready: true, restart count 0 + Mar 7 03:05:23.929: INFO: Container thanos-sidecar ready: true, restart count 0 + Mar 7 03:05:23.929: INFO: sonobuoy-systemd-logs-daemon-set-ca72222986ab487c-gktsb from sonobuoy started at 2023-03-07 02:25:01 +0000 UTC (2 container statuses recorded) + Mar 7 03:05:23.929: INFO: Container sonobuoy-worker ready: true, restart count 0 + Mar 7 03:05:23.929: INFO: Container systemd-logs ready: true, restart count 0 + Mar 7 03:05:23.929: INFO: + Logging pods the apiserver thinks is on node node-2 before test + Mar 7 03:05:23.938: INFO: apiserver-proxy-node-2 from kube-system started at 2023-03-07 01:07:13 +0000 UTC (1 container statuses recorded) + Mar 7 03:05:23.938: INFO: Container nginx ready: true, restart count 0 + Mar 7 03:05:23.938: INFO: calico-node-r7qqp from kube-system started at 2023-03-07 02:23:32 +0000 UTC (1 container statuses recorded) + Mar 7 03:05:23.938: INFO: Container calico-node ready: true, restart count 0 + Mar 7 03:05:23.938: INFO: etcd-node-2 from kube-system started at 2023-03-07 01:08:10 +0000 UTC (1 container statuses recorded) + Mar 7 03:05:23.938: INFO: Container etcd ready: true, restart count 2 + Mar 7 03:05:23.938: INFO: kube-apiserver-node-2 from kube-system started at 2023-03-07 01:09:12 +0000 UTC (1 container statuses recorded) + Mar 7 03:05:23.938: INFO: Container kube-apiserver ready: true, restart count 0 + Mar 7 03:05:23.938: INFO: kube-controller-manager-node-2 from kube-system started at 2023-03-07 01:09:23 +0000 UTC (1 container statuses recorded) + Mar 7 03:05:23.938: INFO: Container kube-controller-manager ready: true, restart count 1 + Mar 7 03:05:23.938: INFO: kube-proxy-wsc86 from kube-system started at 2023-03-07 02:23:33 +0000 UTC (1 container statuses recorded) + Mar 7 03:05:23.938: INFO: Container kube-proxy ready: true, restart count 0 + Mar 7 03:05:23.938: INFO: kube-scheduler-node-2 from kube-system started at 2023-03-07 01:09:23 +0000 UTC (1 container statuses recorded) + Mar 7 03:05:23.938: INFO: Container kube-scheduler ready: true, restart count 0 + Mar 7 03:05:23.938: INFO: ingress-control-plane-managed-vip-qxwrw from metalk8s-ingress started at 2023-03-07 02:58:05 +0000 UTC (1 container statuses recorded) + Mar 7 03:05:23.938: INFO: Container keepalived ready: true, restart count 0 + Mar 7 03:05:23.938: INFO: ingress-nginx-control-plane-controller-crbv2 from metalk8s-ingress started at 
2023-03-07 02:58:05 +0000 UTC (1 container statuses recorded) + Mar 7 03:05:23.938: INFO: Container controller ready: true, restart count 0 + Mar 7 03:05:23.938: INFO: ingress-nginx-controller-bcd78 from metalk8s-ingress started at 2023-03-07 02:58:05 +0000 UTC (1 container statuses recorded) + Mar 7 03:05:23.938: INFO: Container controller ready: true, restart count 0 + Mar 7 03:05:23.938: INFO: fluent-bit-tn4sc from metalk8s-logging started at 2023-03-07 02:58:10 +0000 UTC (1 container statuses recorded) + Mar 7 03:05:23.938: INFO: Container fluent-bit ready: true, restart count 0 + Mar 7 03:05:23.938: INFO: prometheus-operator-prometheus-node-exporter-x9hfs from metalk8s-monitoring started at 2023-03-07 02:58:05 +0000 UTC (1 container statuses recorded) + Mar 7 03:05:23.938: INFO: Container node-exporter ready: true, restart count 0 + Mar 7 03:05:23.938: INFO: busybox-user-65534-df1b011f-f1b9-4b6c-a84b-ea212028cdf5 from security-context-test-5136 started at 2023-03-07 03:05:15 +0000 UTC (1 container statuses recorded) + Mar 7 03:05:23.938: INFO: Container busybox-user-65534-df1b011f-f1b9-4b6c-a84b-ea212028cdf5 ready: false, restart count 0 + Mar 7 03:05:23.938: INFO: sonobuoy from sonobuoy started at 2023-03-07 02:24:57 +0000 UTC (1 container statuses recorded) + Mar 7 03:05:23.938: INFO: Container kube-sonobuoy ready: true, restart count 0 + Mar 7 03:05:23.938: INFO: sonobuoy-e2e-job-441ced38a9a5443b from sonobuoy started at 2023-03-07 02:25:01 +0000 UTC (2 container statuses recorded) + Mar 7 03:05:23.938: INFO: Container e2e ready: true, restart count 0 + Mar 7 03:05:23.938: INFO: Container sonobuoy-worker ready: true, restart count 0 + Mar 7 03:05:23.938: INFO: sonobuoy-systemd-logs-daemon-set-ca72222986ab487c-hbmvq from sonobuoy started at 2023-03-07 02:25:01 +0000 UTC (2 container statuses recorded) + Mar 7 03:05:23.938: INFO: Container sonobuoy-worker ready: true, restart count 0 + Mar 7 03:05:23.938: INFO: Container systemd-logs ready: true, restart count 0 + [It] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance] + test/e2e/scheduling/predicates.go:699 + STEP: Trying to launch a pod without a label to get a node which can launch it. 03/07/23 03:05:23.938 + Mar 7 03:05:23.944: INFO: Waiting up to 1m0s for pod "without-label" in namespace "sched-pred-5074" to be "running" + Mar 7 03:05:23.949: INFO: Pod "without-label": Phase="Pending", Reason="", readiness=false. Elapsed: 4.570869ms + Mar 7 03:05:25.952: INFO: Pod "without-label": Phase="Running", Reason="", readiness=true. Elapsed: 2.007587541s + Mar 7 03:05:25.952: INFO: Pod "without-label" satisfied condition "running" + STEP: Explicitly delete pod here to free the resource it takes. 03/07/23 03:05:25.954 + STEP: Trying to apply a random label on the found node. 03/07/23 03:05:25.966 + STEP: verifying the node has the label kubernetes.io/e2e-5ff7da41-cebc-4430-84f5-a8a399faf439 95 03/07/23 03:05:25.978 + STEP: Trying to create a pod(pod4) with hostport 54322 and hostIP 0.0.0.0(empty string here) and expect scheduled 03/07/23 03:05:25.981 + Mar 7 03:05:25.986: INFO: Waiting up to 5m0s for pod "pod4" in namespace "sched-pred-5074" to be "not pending" + Mar 7 03:05:25.997: INFO: Pod "pod4": Phase="Pending", Reason="", readiness=false. Elapsed: 11.631022ms + Mar 7 03:05:28.001: INFO: Pod "pod4": Phase="Running", Reason="", readiness=true. 
Elapsed: 2.015236144s + Mar 7 03:05:28.001: INFO: Pod "pod4" satisfied condition "not pending" + STEP: Trying to create another pod(pod5) with hostport 54322 but hostIP 192.168.1.102 on the node which pod4 resides and expect not scheduled 03/07/23 03:05:28.001 + Mar 7 03:05:28.005: INFO: Waiting up to 5m0s for pod "pod5" in namespace "sched-pred-5074" to be "not pending" + Mar 7 03:05:28.011: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 5.621288ms + Mar 7 03:05:30.015: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009233945s + Mar 7 03:05:32.016: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 4.010381726s + Mar 7 03:05:34.015: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 6.009156423s + Mar 7 03:05:36.015: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 8.009630735s + Mar 7 03:05:38.015: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 10.009520682s + Mar 7 03:05:40.015: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 12.009705362s + Mar 7 03:05:42.018: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 14.012190364s + Mar 7 03:05:44.015: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 16.009235311s + Mar 7 03:05:46.015: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 18.00931938s + Mar 7 03:05:48.017: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 20.010945936s + Mar 7 03:05:50.015: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 22.008996073s + Mar 7 03:05:52.015: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 24.009385474s + Mar 7 03:05:54.015: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 26.009720617s + Mar 7 03:05:56.014: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 28.008621925s + Mar 7 03:05:58.016: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 30.010151223s + Mar 7 03:06:00.015: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 32.009855311s + Mar 7 03:06:02.019: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 34.013903972s + Mar 7 03:06:04.015: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 36.00947765s + Mar 7 03:06:06.015: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 38.009362086s + Mar 7 03:06:08.015: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 40.009286912s + Mar 7 03:06:10.014: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 42.008891593s + Mar 7 03:06:12.020: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 44.014270537s + Mar 7 03:06:14.014: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 46.008558973s + Mar 7 03:06:16.016: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 48.010624888s + Mar 7 03:06:18.015: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 50.009599552s + Mar 7 03:06:20.015: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 52.009626577s + Mar 7 03:06:22.022: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 54.016266647s + Mar 7 03:06:24.015: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. 
Elapsed: 56.009598472s + Mar 7 03:06:26.014: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 58.008758755s + Mar 7 03:06:28.015: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 1m0.009571016s + Mar 7 03:06:30.014: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 1m2.008491763s + Mar 7 03:06:32.016: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 1m4.010549321s + Mar 7 03:06:34.015: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 1m6.009562134s + Mar 7 03:06:36.015: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 1m8.009370869s + Mar 7 03:06:38.016: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 1m10.010294776s + Mar 7 03:06:40.015: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 1m12.009820926s + Mar 7 03:06:42.016: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 1m14.01082929s + Mar 7 03:06:44.014: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 1m16.008671673s + Mar 7 03:06:46.016: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 1m18.010311707s + Mar 7 03:06:48.015: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 1m20.009818078s + Mar 7 03:06:50.015: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 1m22.009807814s + Mar 7 03:06:52.016: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 1m24.010460399s + Mar 7 03:06:54.015: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 1m26.008951289s + Mar 7 03:06:56.014: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 1m28.008850247s + Mar 7 03:06:58.015: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 1m30.009471121s + Mar 7 03:07:00.015: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 1m32.008979337s + Mar 7 03:07:02.016: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 1m34.01050326s + Mar 7 03:07:04.014: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 1m36.008138713s + Mar 7 03:07:06.015: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 1m38.009290666s + Mar 7 03:07:08.016: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 1m40.010639858s + Mar 7 03:07:10.015: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 1m42.009017657s + Mar 7 03:07:12.015: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 1m44.009625723s + Mar 7 03:07:14.015: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 1m46.009251306s + Mar 7 03:07:16.015: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 1m48.009821084s + Mar 7 03:07:18.015: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 1m50.009162647s + Mar 7 03:07:20.015: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 1m52.009580834s + Mar 7 03:07:22.019: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 1m54.013222233s + Mar 7 03:07:24.015: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 1m56.009489937s + Mar 7 03:07:26.016: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 1m58.010209305s + Mar 7 03:07:28.016: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2m0.010186933s + Mar 7 03:07:30.015: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 2m2.00972277s + Mar 7 03:07:32.016: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 2m4.010288629s + Mar 7 03:07:34.015: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 2m6.009015235s + Mar 7 03:07:36.015: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 2m8.009300359s + Mar 7 03:07:38.016: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 2m10.010077814s + Mar 7 03:07:40.015: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 2m12.009095269s + Mar 7 03:07:42.020: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 2m14.01445104s + Mar 7 03:07:44.015: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 2m16.009829437s + Mar 7 03:07:46.015: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 2m18.00931247s + Mar 7 03:07:48.014: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 2m20.008610118s + Mar 7 03:07:50.015: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 2m22.009469472s + Mar 7 03:07:52.015: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 2m24.009387031s + Mar 7 03:07:54.018: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 2m26.012101344s + Mar 7 03:07:56.015: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 2m28.009925378s + Mar 7 03:07:58.015: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 2m30.009646689s + Mar 7 03:08:00.015: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 2m32.009103863s + Mar 7 03:08:02.021: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 2m34.015225673s + Mar 7 03:08:04.015: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 2m36.009811517s + Mar 7 03:08:06.015: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 2m38.009276765s + Mar 7 03:08:08.016: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 2m40.010475005s + Mar 7 03:08:10.014: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 2m42.008872478s + Mar 7 03:08:12.015: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 2m44.009660644s + Mar 7 03:08:14.014: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 2m46.008574015s + Mar 7 03:08:16.015: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 2m48.009914514s + Mar 7 03:08:18.014: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 2m50.008918318s + Mar 7 03:08:20.015: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 2m52.009658945s + Mar 7 03:08:22.015: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 2m54.009763637s + Mar 7 03:08:24.014: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 2m56.008585784s + Mar 7 03:08:26.014: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 2m58.00857363s + Mar 7 03:08:28.015: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 3m0.009583883s + Mar 7 03:08:30.014: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 3m2.008731178s + Mar 7 03:08:32.015: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. 
Elapsed: 3m4.009691813s + Mar 7 03:08:34.014: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 3m6.00830548s + Mar 7 03:08:36.015: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 3m8.009111881s + Mar 7 03:08:38.015: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 3m10.009499121s + Mar 7 03:08:40.014: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 3m12.00862774s + Mar 7 03:08:42.016: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 3m14.010544908s + Mar 7 03:08:44.015: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 3m16.009569139s + Mar 7 03:08:46.016: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 3m18.010372984s + Mar 7 03:08:48.017: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 3m20.011240218s + Mar 7 03:08:50.015: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 3m22.009666424s + Mar 7 03:08:52.020: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 3m24.014652521s + Mar 7 03:08:54.015: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 3m26.009357138s + Mar 7 03:08:56.015: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 3m28.009168595s + Mar 7 03:08:58.015: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 3m30.009689007s + Mar 7 03:09:00.014: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 3m32.008897389s + Mar 7 03:09:02.016: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 3m34.010902929s + Mar 7 03:09:04.022: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 3m36.016585352s + Mar 7 03:09:06.015: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 3m38.00926104s + Mar 7 03:09:08.016: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 3m40.010631181s + Mar 7 03:09:10.015: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 3m42.009732188s + Mar 7 03:09:12.016: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 3m44.010152137s + Mar 7 03:09:14.014: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 3m46.008452784s + Mar 7 03:09:16.015: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 3m48.009592783s + Mar 7 03:09:18.016: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 3m50.010036552s + Mar 7 03:09:20.015: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 3m52.009167715s + Mar 7 03:09:22.015: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 3m54.0095213s + Mar 7 03:09:24.016: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 3m56.010002308s + Mar 7 03:09:26.015: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 3m58.009437441s + Mar 7 03:09:28.015: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 4m0.008945083s + Mar 7 03:09:30.014: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 4m2.00889321s + Mar 7 03:09:32.014: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 4m4.008893409s + Mar 7 03:09:34.015: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 4m6.009662317s + Mar 7 03:09:36.016: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. 
Elapsed: 4m8.010611708s + Mar 7 03:09:38.017: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 4m10.011045582s + Mar 7 03:09:40.015: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 4m12.008971017s + Mar 7 03:09:42.014: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 4m14.008718134s + Mar 7 03:09:44.014: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 4m16.008685569s + Mar 7 03:09:46.015: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 4m18.009821536s + Mar 7 03:09:48.016: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 4m20.009961706s + Mar 7 03:09:50.015: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 4m22.009608211s + Mar 7 03:09:52.015: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 4m24.009828972s + Mar 7 03:09:54.015: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 4m26.009158059s + Mar 7 03:09:56.015: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 4m28.009102117s + Mar 7 03:09:58.016: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 4m30.01027223s + Mar 7 03:10:00.021: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 4m32.015296661s + Mar 7 03:10:02.025: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 4m34.01985982s + Mar 7 03:10:04.014: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 4m36.008859394s + Mar 7 03:10:06.015: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 4m38.009092945s + Mar 7 03:10:08.017: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 4m40.011324211s + Mar 7 03:10:10.014: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 4m42.008796759s + Mar 7 03:10:12.015: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 4m44.009180126s + Mar 7 03:10:14.015: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 4m46.009580923s + Mar 7 03:10:16.016: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 4m48.010458912s + Mar 7 03:10:18.015: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 4m50.00895907s + Mar 7 03:10:20.014: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 4m52.008813185s + Mar 7 03:10:22.015: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 4m54.009724093s + Mar 7 03:10:24.014: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 4m56.008599648s + Mar 7 03:10:26.016: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 4m58.010694689s + Mar 7 03:10:28.016: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 5m0.010863318s + Mar 7 03:10:28.019: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 5m0.012958455s + STEP: removing the label kubernetes.io/e2e-5ff7da41-cebc-4430-84f5-a8a399faf439 off the node node-2 03/07/23 03:10:28.019 + STEP: verifying the node doesn't have the label kubernetes.io/e2e-5ff7da41-cebc-4430-84f5-a8a399faf439 03/07/23 03:10:28.029 + [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] + test/e2e/framework/framework.go:187 + Mar 7 03:10:28.033: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + STEP: Destroying namespace "sched-pred-5074" for this suite. 
03/07/23 03:10:28.037 + [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] + test/e2e/scheduling/predicates.go:83 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-api-machinery] Garbage collector + should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] + test/e2e/apimachinery/garbage_collector.go:650 +[BeforeEach] [sig-api-machinery] Garbage collector + test/e2e/framework/framework.go:186 +STEP: Creating a kubernetes client 03/07/23 03:10:28.049 +Mar 7 03:10:28.049: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 +STEP: Building a namespace api object, basename gc 03/07/23 03:10:28.049 +STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 03:10:28.068 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 03:10:28.071 +[It] should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] + test/e2e/apimachinery/garbage_collector.go:650 +STEP: create the rc 03/07/23 03:10:28.076 +STEP: delete the rc 03/07/23 03:10:33.096 +STEP: wait for the rc to be deleted 03/07/23 03:10:33.153 +Mar 7 03:10:34.338: INFO: 87 pods remaining +Mar 7 03:10:34.338: INFO: 84 pods has nil DeletionTimestamp +Mar 7 03:10:34.338: INFO: +Mar 7 03:10:35.380: INFO: 74 pods remaining +Mar 7 03:10:35.380: INFO: 73 pods has nil DeletionTimestamp +Mar 7 03:10:35.380: INFO: +Mar 7 03:10:36.488: INFO: 60 pods remaining +Mar 7 03:10:36.488: INFO: 60 pods has nil DeletionTimestamp +Mar 7 03:10:36.488: INFO: +Mar 7 03:10:37.171: INFO: 46 pods remaining +Mar 7 03:10:37.171: INFO: 46 pods has nil DeletionTimestamp +Mar 7 03:10:37.171: INFO: +Mar 7 03:10:38.298: INFO: 36 pods remaining +Mar 7 03:10:38.298: INFO: 33 pods has nil DeletionTimestamp +Mar 7 03:10:38.298: INFO: +Mar 7 03:10:39.171: INFO: 20 pods remaining +Mar 7 03:10:39.171: INFO: 20 pods has nil DeletionTimestamp +Mar 7 03:10:39.171: INFO: +Mar 7 03:10:40.192: INFO: 1 pods remaining +Mar 7 03:10:40.192: INFO: 1 pods has nil DeletionTimestamp +Mar 7 03:10:40.192: INFO: +STEP: Gathering metrics 03/07/23 03:10:41.169 +Mar 7 03:10:41.506: INFO: Waiting up to 5m0s for pod "kube-controller-manager-node-2" in namespace "kube-system" to be "running and ready" +Mar 7 03:10:41.510: INFO: Pod "kube-controller-manager-node-2": Phase="Running", Reason="", readiness=true. 
Elapsed: 3.640169ms +Mar 7 03:10:41.510: INFO: The phase of Pod kube-controller-manager-node-2 is Running (Ready = true) +Mar 7 03:10:41.510: INFO: Pod "kube-controller-manager-node-2" satisfied condition "running and ready" +E0307 03:10:41.921468 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:10:41.921468 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:10:42.963990 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:10:42.963990 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:10:43.995832 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:10:43.995832 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:10:45.016219 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 
inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:10:45.016219 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:10:46.036502 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:10:46.036502 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:10:48.086343 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:10:48.086343 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:10:50.142359 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:10:50.142359 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace 
\"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:10:51.162989 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:10:51.162989 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:10:53.225296 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:10:53.225296 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:10:54.245038 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:10:54.245038 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:10:55.265155 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace 
\"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:10:55.265155 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:10:56.286013 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:10:56.286013 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:10:57.307696 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:10:57.307696 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:10:58.330501 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:10:58.330501 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace 
\"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:10:59.352084 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:10:59.352084 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:11:00.386742 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:11:00.386742 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:11:01.410510 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:11:01.410510 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:11:02.432157 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace 
\"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:11:02.432157 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:11:03.457181 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:11:03.457181 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:11:04.220111 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:11:04.220111 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:11:06.268113 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:11:06.268113 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace 
\"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:11:07.290566 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:11:07.290566 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:11:09.336773 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:11:09.336773 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:11:10.363459 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:11:10.363459 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:11:11.388278 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace 
\"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:11:11.388278 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:11:13.435783 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:11:13.435783 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:11:15.222181 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:11:15.222181 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:11:16.241827 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:11:16.241827 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace 
\"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:11:18.289232 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:11:18.289232 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:11:19.314354 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:11:19.314354 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:11:20.334876 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:11:20.334876 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:11:21.354799 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace 
\"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:11:21.354799 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:11:22.375464 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:11:22.375464 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:11:23.397316 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:11:23.397316 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:11:24.417419 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:11:24.417419 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace 
\"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:11:25.438928 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:11:25.438928 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:11:26.223630 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:11:26.223630 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:11:27.244133 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:11:27.244133 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:11:30.307933 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace 
\"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:11:30.307933 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:11:31.331756 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:11:31.331756 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:11:32.351982 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:11:32.351982 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:11:33.374922 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:11:33.374922 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace 
\"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:11:34.398692 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:11:34.398692 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:11:35.422294 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:11:35.422294 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:11:37.228081 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:11:37.228081 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:11:38.251230 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace 
\"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:11:38.251230 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:11:40.300090 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:11:40.300090 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:11:41.333758 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:11:41.333758 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:11:42.354781 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:11:42.354781 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace 
\"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:11:43.376629 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:11:43.376629 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:11:45.419077 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:11:45.419077 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:11:46.440021 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:11:46.440021 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:11:47.460660 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace 
\"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:11:47.460660 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:11:48.222807 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:11:48.222807 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:11:49.246591 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:11:49.246591 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:11:52.311370 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:11:52.311370 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace 
\"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:11:53.334637 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:11:53.334637 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:11:54.359284 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:11:54.359284 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:11:55.386473 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:11:55.386473 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:11:56.408301 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace 
\"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:11:56.408301 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:11:57.428172 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:11:57.428172 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:11:59.487421 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:11:59.487421 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:12:00.507984 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:12:00.507984 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace 
\"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:12:02.571244 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:12:02.571244 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:12:03.593513 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:12:03.593513 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:12:04.615420 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:12:04.615420 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:12:05.636353 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace 
\"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:12:05.636353 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:12:06.658633 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:12:06.658633 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:12:07.678490 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:12:07.678490 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:12:08.708275 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:12:08.708275 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace 
\"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +Mar 7 03:12:08.708: INFO: MetricsGrabber failed grab metrics. Skipping metrics gathering. +[AfterEach] [sig-api-machinery] Garbage collector + test/e2e/framework/framework.go:187 +Mar 7 03:12:08.708: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "gc-4762" for this suite. 03/07/23 03:12:08.713 +{"msg":"PASSED [sig-api-machinery] Garbage collector should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]","completed":89,"skipped":1622,"failed":0} +------------------------------ +• [SLOW TEST] [100.669 seconds] +[sig-api-machinery] Garbage collector +test/e2e/apimachinery/framework.go:23 + should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] + test/e2e/apimachinery/garbage_collector.go:650 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-api-machinery] Garbage collector + test/e2e/framework/framework.go:186 + STEP: Creating a kubernetes client 03/07/23 03:10:28.049 + Mar 7 03:10:28.049: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 + STEP: Building a namespace api object, basename gc 03/07/23 03:10:28.049 + STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 03:10:28.068 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 03:10:28.071 + [It] should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] + test/e2e/apimachinery/garbage_collector.go:650 + STEP: create the rc 03/07/23 03:10:28.076 + STEP: delete the rc 03/07/23 03:10:33.096 + STEP: wait for the rc to be deleted 03/07/23 03:10:33.153 + Mar 7 03:10:34.338: INFO: 87 pods remaining + Mar 7 03:10:34.338: INFO: 84 pods has nil DeletionTimestamp + Mar 7 03:10:34.338: INFO: + Mar 7 03:10:35.380: INFO: 74 pods remaining + Mar 7 03:10:35.380: INFO: 73 pods has nil DeletionTimestamp + Mar 7 03:10:35.380: INFO: + Mar 7 03:10:36.488: INFO: 60 pods remaining + Mar 7 03:10:36.488: INFO: 60 pods has nil DeletionTimestamp + Mar 7 03:10:36.488: INFO: + Mar 7 03:10:37.171: INFO: 46 pods remaining + Mar 7 03:10:37.171: INFO: 46 pods has nil DeletionTimestamp + Mar 7 03:10:37.171: INFO: + Mar 7 03:10:38.298: INFO: 36 pods remaining + Mar 7 03:10:38.298: INFO: 33 pods has nil DeletionTimestamp + Mar 7 03:10:38.298: INFO: + Mar 7 03:10:39.171: INFO: 20 pods remaining + Mar 7 03:10:39.171: INFO: 20 pods has nil DeletionTimestamp + Mar 7 03:10:39.171: INFO: + Mar 7 03:10:40.192: INFO: 1 pods remaining + Mar 7 03:10:40.192: INFO: 1 pods has nil DeletionTimestamp + Mar 7 03:10:40.192: INFO: + STEP: Gathering metrics 03/07/23 03:10:41.169 + Mar 7 03:10:41.506: INFO: Waiting up to 5m0s for pod "kube-controller-manager-node-2" in namespace "kube-system" to be "running and ready" + Mar 7 03:10:41.510: INFO: Pod "kube-controller-manager-node-2": Phase="Running", Reason="", readiness=true. 
Elapsed: 3.640169ms + Mar 7 03:10:41.510: INFO: The phase of Pod kube-controller-manager-node-2 is Running (Ready = true) + Mar 7 03:10:41.510: INFO: Pod "kube-controller-manager-node-2" satisfied condition "running and ready" + E0307 03:10:41.921468 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " + E0307 03:11:16.241827 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to 
localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " + E0307 03:11:18.289232 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " + E0307 03:11:19.314354 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " + E0307 03:11:20.334876 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " + E0307 03:11:21.354799 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " + E0307 03:11:22.375464 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " + E0307 03:11:23.397316 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " + E0307 03:11:24.417419 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to 
localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " + E0307 03:11:25.438928 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " + E0307 03:11:26.223630 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " + E0307 03:11:27.244133 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " + E0307 03:11:30.307933 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " + E0307 03:11:31.331756 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " + E0307 03:11:32.351982 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " + E0307 03:11:33.374922 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to 
localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " + E0307 03:11:34.398692 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " + E0307 03:11:35.422294 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " + E0307 03:11:37.228081 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " + E0307 03:11:38.251230 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " + E0307 03:11:40.300090 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " + E0307 03:11:41.333758 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " + E0307 03:11:42.354781 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to 
localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " + E0307 03:11:43.376629 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " + E0307 03:11:45.419077 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " + E0307 03:11:46.440021 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " + E0307 03:11:47.460660 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " + E0307 03:11:48.222807 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " + E0307 03:11:49.246591 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " + E0307 03:11:52.311370 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to 
localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " + E0307 03:11:53.334637 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " + E0307 03:11:54.359284 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " + E0307 03:11:55.386473 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " + E0307 03:11:56.408301 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " + E0307 03:11:57.428172 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " + E0307 03:11:59.487421 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " + E0307 03:12:00.507984 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to 
localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " + E0307 03:12:02.571244 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " + E0307 03:12:03.593513 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " + E0307 03:12:04.615420 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " + E0307 03:12:05.636353 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " + E0307 03:12:06.658633 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " + E0307 03:12:07.678490 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " + E0307 03:12:08.708275 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to 
localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " + Mar 7 03:12:08.708: INFO: MetricsGrabber failed grab metrics. Skipping metrics gathering. + [AfterEach] [sig-api-machinery] Garbage collector + test/e2e/framework/framework.go:187 + Mar 7 03:12:08.708: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + STEP: Destroying namespace "gc-4762" for this suite. 03/07/23 03:12:08.713 + << End Captured GinkgoWriter Output +------------------------------ +SS +------------------------------ +[sig-storage] ConfigMap + should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/common/storage/configmap_volume.go:56 +[BeforeEach] [sig-storage] ConfigMap + test/e2e/framework/framework.go:186 +STEP: Creating a kubernetes client 03/07/23 03:12:08.718 +Mar 7 03:12:08.718: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 +STEP: Building a namespace api object, basename configmap 03/07/23 03:12:08.719 +STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 03:12:08.732 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 03:12:08.734 +[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/common/storage/configmap_volume.go:56 +STEP: Creating configMap with name configmap-test-volume-fd73dcb3-5900-4874-93f7-1f9d2b67bf79 03/07/23 03:12:08.735 +STEP: Creating a pod to test consume configMaps 03/07/23 03:12:08.739 +Mar 7 03:12:08.744: INFO: Waiting up to 5m0s for pod "pod-configmaps-93357fc5-926e-4807-b5f9-7f220d63a6f5" in namespace "configmap-5662" to be "Succeeded or Failed" +Mar 7 03:12:08.747: INFO: Pod "pod-configmaps-93357fc5-926e-4807-b5f9-7f220d63a6f5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.534817ms +Mar 7 03:12:10.750: INFO: Pod "pod-configmaps-93357fc5-926e-4807-b5f9-7f220d63a6f5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006295246s +Mar 7 03:12:12.750: INFO: Pod "pod-configmaps-93357fc5-926e-4807-b5f9-7f220d63a6f5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.006167216s +STEP: Saw pod success 03/07/23 03:12:12.75 +Mar 7 03:12:12.750: INFO: Pod "pod-configmaps-93357fc5-926e-4807-b5f9-7f220d63a6f5" satisfied condition "Succeeded or Failed" +Mar 7 03:12:12.753: INFO: Trying to get logs from node node-2 pod pod-configmaps-93357fc5-926e-4807-b5f9-7f220d63a6f5 container agnhost-container: +STEP: delete the pod 03/07/23 03:12:12.764 +Mar 7 03:12:12.796: INFO: Waiting for pod pod-configmaps-93357fc5-926e-4807-b5f9-7f220d63a6f5 to disappear +Mar 7 03:12:12.798: INFO: Pod pod-configmaps-93357fc5-926e-4807-b5f9-7f220d63a6f5 no longer exists +[AfterEach] [sig-storage] ConfigMap + test/e2e/framework/framework.go:187 +Mar 7 03:12:12.798: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "configmap-5662" for this suite. 
03/07/23 03:12:12.802 +{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","completed":90,"skipped":1624,"failed":0} +------------------------------ +• [4.088 seconds] +[sig-storage] ConfigMap +test/e2e/common/storage/framework.go:23 + should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/common/storage/configmap_volume.go:56 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-storage] ConfigMap + test/e2e/framework/framework.go:186 + STEP: Creating a kubernetes client 03/07/23 03:12:08.718 + Mar 7 03:12:08.718: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 + STEP: Building a namespace api object, basename configmap 03/07/23 03:12:08.719 + STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 03:12:08.732 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 03:12:08.734 + [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/common/storage/configmap_volume.go:56 + STEP: Creating configMap with name configmap-test-volume-fd73dcb3-5900-4874-93f7-1f9d2b67bf79 03/07/23 03:12:08.735 + STEP: Creating a pod to test consume configMaps 03/07/23 03:12:08.739 + Mar 7 03:12:08.744: INFO: Waiting up to 5m0s for pod "pod-configmaps-93357fc5-926e-4807-b5f9-7f220d63a6f5" in namespace "configmap-5662" to be "Succeeded or Failed" + Mar 7 03:12:08.747: INFO: Pod "pod-configmaps-93357fc5-926e-4807-b5f9-7f220d63a6f5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.534817ms + Mar 7 03:12:10.750: INFO: Pod "pod-configmaps-93357fc5-926e-4807-b5f9-7f220d63a6f5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006295246s + Mar 7 03:12:12.750: INFO: Pod "pod-configmaps-93357fc5-926e-4807-b5f9-7f220d63a6f5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.006167216s + STEP: Saw pod success 03/07/23 03:12:12.75 + Mar 7 03:12:12.750: INFO: Pod "pod-configmaps-93357fc5-926e-4807-b5f9-7f220d63a6f5" satisfied condition "Succeeded or Failed" + Mar 7 03:12:12.753: INFO: Trying to get logs from node node-2 pod pod-configmaps-93357fc5-926e-4807-b5f9-7f220d63a6f5 container agnhost-container: + STEP: delete the pod 03/07/23 03:12:12.764 + Mar 7 03:12:12.796: INFO: Waiting for pod pod-configmaps-93357fc5-926e-4807-b5f9-7f220d63a6f5 to disappear + Mar 7 03:12:12.798: INFO: Pod pod-configmaps-93357fc5-926e-4807-b5f9-7f220d63a6f5 no longer exists + [AfterEach] [sig-storage] ConfigMap + test/e2e/framework/framework.go:187 + Mar 7 03:12:12.798: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + STEP: Destroying namespace "configmap-5662" for this suite. 
03/07/23 03:12:12.802 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSSS +------------------------------ +[sig-storage] Projected secret + should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] + test/e2e/common/storage/projected_secret.go:118 +[BeforeEach] [sig-storage] Projected secret + test/e2e/framework/framework.go:186 +STEP: Creating a kubernetes client 03/07/23 03:12:12.807 +Mar 7 03:12:12.807: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 +STEP: Building a namespace api object, basename projected 03/07/23 03:12:12.807 +STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 03:12:12.835 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 03:12:12.837 +[It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] + test/e2e/common/storage/projected_secret.go:118 +STEP: Creating secret with name projected-secret-test-73f76bdc-6694-4734-a1d7-a72d9080f0bd 03/07/23 03:12:12.852 +STEP: Creating a pod to test consume secrets 03/07/23 03:12:12.879 +Mar 7 03:12:12.890: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-f9931b98-3bf0-4aa1-b204-aa08f59e7fb5" in namespace "projected-5200" to be "Succeeded or Failed" +Mar 7 03:12:12.893: INFO: Pod "pod-projected-secrets-f9931b98-3bf0-4aa1-b204-aa08f59e7fb5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.823311ms +Mar 7 03:12:14.897: INFO: Pod "pod-projected-secrets-f9931b98-3bf0-4aa1-b204-aa08f59e7fb5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006454162s +Mar 7 03:12:16.897: INFO: Pod "pod-projected-secrets-f9931b98-3bf0-4aa1-b204-aa08f59e7fb5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.006462052s +STEP: Saw pod success 03/07/23 03:12:16.897 +Mar 7 03:12:16.897: INFO: Pod "pod-projected-secrets-f9931b98-3bf0-4aa1-b204-aa08f59e7fb5" satisfied condition "Succeeded or Failed" +Mar 7 03:12:16.899: INFO: Trying to get logs from node node-2 pod pod-projected-secrets-f9931b98-3bf0-4aa1-b204-aa08f59e7fb5 container secret-volume-test: +STEP: delete the pod 03/07/23 03:12:16.904 +Mar 7 03:12:16.914: INFO: Waiting for pod pod-projected-secrets-f9931b98-3bf0-4aa1-b204-aa08f59e7fb5 to disappear +Mar 7 03:12:16.916: INFO: Pod pod-projected-secrets-f9931b98-3bf0-4aa1-b204-aa08f59e7fb5 no longer exists +[AfterEach] [sig-storage] Projected secret + test/e2e/framework/framework.go:187 +Mar 7 03:12:16.916: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "projected-5200" for this suite. 
03/07/23 03:12:16.918 +{"msg":"PASSED [sig-storage] Projected secret should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","completed":91,"skipped":1636,"failed":0} +------------------------------ +• [4.117 seconds] +[sig-storage] Projected secret +test/e2e/common/storage/framework.go:23 + should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] + test/e2e/common/storage/projected_secret.go:118 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-storage] Projected secret + test/e2e/framework/framework.go:186 + STEP: Creating a kubernetes client 03/07/23 03:12:12.807 + Mar 7 03:12:12.807: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 + STEP: Building a namespace api object, basename projected 03/07/23 03:12:12.807 + STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 03:12:12.835 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 03:12:12.837 + [It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] + test/e2e/common/storage/projected_secret.go:118 + STEP: Creating secret with name projected-secret-test-73f76bdc-6694-4734-a1d7-a72d9080f0bd 03/07/23 03:12:12.852 + STEP: Creating a pod to test consume secrets 03/07/23 03:12:12.879 + Mar 7 03:12:12.890: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-f9931b98-3bf0-4aa1-b204-aa08f59e7fb5" in namespace "projected-5200" to be "Succeeded or Failed" + Mar 7 03:12:12.893: INFO: Pod "pod-projected-secrets-f9931b98-3bf0-4aa1-b204-aa08f59e7fb5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.823311ms + Mar 7 03:12:14.897: INFO: Pod "pod-projected-secrets-f9931b98-3bf0-4aa1-b204-aa08f59e7fb5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006454162s + Mar 7 03:12:16.897: INFO: Pod "pod-projected-secrets-f9931b98-3bf0-4aa1-b204-aa08f59e7fb5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.006462052s + STEP: Saw pod success 03/07/23 03:12:16.897 + Mar 7 03:12:16.897: INFO: Pod "pod-projected-secrets-f9931b98-3bf0-4aa1-b204-aa08f59e7fb5" satisfied condition "Succeeded or Failed" + Mar 7 03:12:16.899: INFO: Trying to get logs from node node-2 pod pod-projected-secrets-f9931b98-3bf0-4aa1-b204-aa08f59e7fb5 container secret-volume-test: + STEP: delete the pod 03/07/23 03:12:16.904 + Mar 7 03:12:16.914: INFO: Waiting for pod pod-projected-secrets-f9931b98-3bf0-4aa1-b204-aa08f59e7fb5 to disappear + Mar 7 03:12:16.916: INFO: Pod pod-projected-secrets-f9931b98-3bf0-4aa1-b204-aa08f59e7fb5 no longer exists + [AfterEach] [sig-storage] Projected secret + test/e2e/framework/framework.go:187 + Mar 7 03:12:16.916: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + STEP: Destroying namespace "projected-5200" for this suite. 
03/07/23 03:12:16.918 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + should mutate configmap [Conformance] + test/e2e/apimachinery/webhook.go:251 +[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/framework/framework.go:186 +STEP: Creating a kubernetes client 03/07/23 03:12:16.924 +Mar 7 03:12:16.924: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 +STEP: Building a namespace api object, basename webhook 03/07/23 03:12:16.925 +STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 03:12:16.936 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 03:12:16.938 +[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/apimachinery/webhook.go:89 +STEP: Setting up server cert 03/07/23 03:12:16.954 +STEP: Create role binding to let webhook read extension-apiserver-authentication 03/07/23 03:12:17.226 +STEP: Deploying the webhook pod 03/07/23 03:12:17.233 +STEP: Wait for the deployment to be ready 03/07/23 03:12:17.241 +Mar 7 03:12:17.256: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set +STEP: Deploying the webhook service 03/07/23 03:12:19.265 +STEP: Verifying the service has paired with the endpoint 03/07/23 03:12:19.278 +Mar 7 03:12:20.279: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 +[It] should mutate configmap [Conformance] + test/e2e/apimachinery/webhook.go:251 +STEP: Registering the mutating configmap webhook via the AdmissionRegistration API 03/07/23 03:12:20.283 +STEP: create a configmap that should be updated by the webhook 03/07/23 03:12:20.295 +[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/framework/framework.go:187 +Mar 7 03:12:20.309: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "webhook-115" for this suite. 03/07/23 03:12:20.313 +STEP: Destroying namespace "webhook-115-markers" for this suite. 
03/07/23 03:12:20.317 +[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/apimachinery/webhook.go:104 +{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate configmap [Conformance]","completed":92,"skipped":1658,"failed":0} +------------------------------ +• [3.466 seconds] +[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] +test/e2e/apimachinery/framework.go:23 + should mutate configmap [Conformance] + test/e2e/apimachinery/webhook.go:251 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/framework/framework.go:186 + STEP: Creating a kubernetes client 03/07/23 03:12:16.924 + Mar 7 03:12:16.924: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 + STEP: Building a namespace api object, basename webhook 03/07/23 03:12:16.925 + STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 03:12:16.936 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 03:12:16.938 + [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/apimachinery/webhook.go:89 + STEP: Setting up server cert 03/07/23 03:12:16.954 + STEP: Create role binding to let webhook read extension-apiserver-authentication 03/07/23 03:12:17.226 + STEP: Deploying the webhook pod 03/07/23 03:12:17.233 + STEP: Wait for the deployment to be ready 03/07/23 03:12:17.241 + Mar 7 03:12:17.256: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set + STEP: Deploying the webhook service 03/07/23 03:12:19.265 + STEP: Verifying the service has paired with the endpoint 03/07/23 03:12:19.278 + Mar 7 03:12:20.279: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 + [It] should mutate configmap [Conformance] + test/e2e/apimachinery/webhook.go:251 + STEP: Registering the mutating configmap webhook via the AdmissionRegistration API 03/07/23 03:12:20.283 + STEP: create a configmap that should be updated by the webhook 03/07/23 03:12:20.295 + [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/framework/framework.go:187 + Mar 7 03:12:20.309: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + STEP: Destroying namespace "webhook-115" for this suite. 03/07/23 03:12:20.313 + STEP: Destroying namespace "webhook-115-markers" for this suite. 
03/07/23 03:12:20.317 + [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/apimachinery/webhook.go:104 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-api-machinery] Watchers + should observe add, update, and delete watch notifications on configmaps [Conformance] + test/e2e/apimachinery/watch.go:60 +[BeforeEach] [sig-api-machinery] Watchers + test/e2e/framework/framework.go:186 +STEP: Creating a kubernetes client 03/07/23 03:12:20.393 +Mar 7 03:12:20.393: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 +STEP: Building a namespace api object, basename watch 03/07/23 03:12:20.394 +STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 03:12:20.465 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 03:12:20.468 +[It] should observe add, update, and delete watch notifications on configmaps [Conformance] + test/e2e/apimachinery/watch.go:60 +STEP: creating a watch on configmaps with label A 03/07/23 03:12:20.475 +STEP: creating a watch on configmaps with label B 03/07/23 03:12:20.476 +STEP: creating a watch on configmaps with label A or B 03/07/23 03:12:20.477 +STEP: creating a configmap with label A and ensuring the correct watchers observe the notification 03/07/23 03:12:20.48 +Mar 7 03:12:20.485: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-3994 f6920634-b667-4ec9-b0ac-bc57a417b49a 51202 0 2023-03-07 03:12:20 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2023-03-07 03:12:20 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} +Mar 7 03:12:20.486: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-3994 f6920634-b667-4ec9-b0ac-bc57a417b49a 51202 0 2023-03-07 03:12:20 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2023-03-07 03:12:20 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} +STEP: modifying configmap A and ensuring the correct watchers observe the notification 03/07/23 03:12:20.486 +Mar 7 03:12:20.528: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-3994 f6920634-b667-4ec9-b0ac-bc57a417b49a 51203 0 2023-03-07 03:12:20 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2023-03-07 03:12:20 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} +Mar 7 03:12:20.528: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-3994 f6920634-b667-4ec9-b0ac-bc57a417b49a 51203 0 2023-03-07 03:12:20 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2023-03-07 03:12:20 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} +STEP: modifying configmap A again and ensuring the correct watchers observe the notification 03/07/23 03:12:20.528 +Mar 7 03:12:20.534: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-3994 
f6920634-b667-4ec9-b0ac-bc57a417b49a 51204 0 2023-03-07 03:12:20 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2023-03-07 03:12:20 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} +Mar 7 03:12:20.534: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-3994 f6920634-b667-4ec9-b0ac-bc57a417b49a 51204 0 2023-03-07 03:12:20 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2023-03-07 03:12:20 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} +STEP: deleting configmap A and ensuring the correct watchers observe the notification 03/07/23 03:12:20.534 +Mar 7 03:12:20.585: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-3994 f6920634-b667-4ec9-b0ac-bc57a417b49a 51205 0 2023-03-07 03:12:20 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2023-03-07 03:12:20 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} +Mar 7 03:12:20.585: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-3994 f6920634-b667-4ec9-b0ac-bc57a417b49a 51205 0 2023-03-07 03:12:20 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2023-03-07 03:12:20 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} +STEP: creating a configmap with label B and ensuring the correct watchers observe the notification 03/07/23 03:12:20.585 +Mar 7 03:12:20.589: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-3994 a9c3ddb3-7f16-48d3-8c2a-5bca5c2f62d7 51206 0 2023-03-07 03:12:20 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] [{e2e.test Update v1 2023-03-07 03:12:20 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} +Mar 7 03:12:20.590: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-3994 a9c3ddb3-7f16-48d3-8c2a-5bca5c2f62d7 51206 0 2023-03-07 03:12:20 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] [{e2e.test Update v1 2023-03-07 03:12:20 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} +STEP: deleting configmap B and ensuring the correct watchers observe the notification 03/07/23 03:12:30.59 +Mar 7 03:12:30.596: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-3994 a9c3ddb3-7f16-48d3-8c2a-5bca5c2f62d7 51271 0 2023-03-07 03:12:20 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] [{e2e.test Update v1 2023-03-07 03:12:20 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} +Mar 7 03:12:30.596: INFO: Got : DELETED 
&ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-3994 a9c3ddb3-7f16-48d3-8c2a-5bca5c2f62d7 51271 0 2023-03-07 03:12:20 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] [{e2e.test Update v1 2023-03-07 03:12:20 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} +[AfterEach] [sig-api-machinery] Watchers + test/e2e/framework/framework.go:187 +Mar 7 03:12:40.597: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "watch-3994" for this suite. 03/07/23 03:12:40.603 +{"msg":"PASSED [sig-api-machinery] Watchers should observe add, update, and delete watch notifications on configmaps [Conformance]","completed":93,"skipped":1688,"failed":0} +------------------------------ +• [SLOW TEST] [20.231 seconds] +[sig-api-machinery] Watchers +test/e2e/apimachinery/framework.go:23 + should observe add, update, and delete watch notifications on configmaps [Conformance] + test/e2e/apimachinery/watch.go:60 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-api-machinery] Watchers + test/e2e/framework/framework.go:186 + STEP: Creating a kubernetes client 03/07/23 03:12:20.393 + Mar 7 03:12:20.393: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 + STEP: Building a namespace api object, basename watch 03/07/23 03:12:20.394 + STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 03:12:20.465 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 03:12:20.468 + [It] should observe add, update, and delete watch notifications on configmaps [Conformance] + test/e2e/apimachinery/watch.go:60 + STEP: creating a watch on configmaps with label A 03/07/23 03:12:20.475 + STEP: creating a watch on configmaps with label B 03/07/23 03:12:20.476 + STEP: creating a watch on configmaps with label A or B 03/07/23 03:12:20.477 + STEP: creating a configmap with label A and ensuring the correct watchers observe the notification 03/07/23 03:12:20.48 + Mar 7 03:12:20.485: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-3994 f6920634-b667-4ec9-b0ac-bc57a417b49a 51202 0 2023-03-07 03:12:20 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2023-03-07 03:12:20 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} + Mar 7 03:12:20.486: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-3994 f6920634-b667-4ec9-b0ac-bc57a417b49a 51202 0 2023-03-07 03:12:20 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2023-03-07 03:12:20 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} + STEP: modifying configmap A and ensuring the correct watchers observe the notification 03/07/23 03:12:20.486 + Mar 7 03:12:20.528: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-3994 f6920634-b667-4ec9-b0ac-bc57a417b49a 51203 0 2023-03-07 03:12:20 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2023-03-07 03:12:20 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} + Mar 7 03:12:20.528: 
INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-3994 f6920634-b667-4ec9-b0ac-bc57a417b49a 51203 0 2023-03-07 03:12:20 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2023-03-07 03:12:20 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} + STEP: modifying configmap A again and ensuring the correct watchers observe the notification 03/07/23 03:12:20.528 + Mar 7 03:12:20.534: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-3994 f6920634-b667-4ec9-b0ac-bc57a417b49a 51204 0 2023-03-07 03:12:20 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2023-03-07 03:12:20 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} + Mar 7 03:12:20.534: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-3994 f6920634-b667-4ec9-b0ac-bc57a417b49a 51204 0 2023-03-07 03:12:20 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2023-03-07 03:12:20 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} + STEP: deleting configmap A and ensuring the correct watchers observe the notification 03/07/23 03:12:20.534 + Mar 7 03:12:20.585: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-3994 f6920634-b667-4ec9-b0ac-bc57a417b49a 51205 0 2023-03-07 03:12:20 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2023-03-07 03:12:20 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} + Mar 7 03:12:20.585: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-3994 f6920634-b667-4ec9-b0ac-bc57a417b49a 51205 0 2023-03-07 03:12:20 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2023-03-07 03:12:20 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} + STEP: creating a configmap with label B and ensuring the correct watchers observe the notification 03/07/23 03:12:20.585 + Mar 7 03:12:20.589: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-3994 a9c3ddb3-7f16-48d3-8c2a-5bca5c2f62d7 51206 0 2023-03-07 03:12:20 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] [{e2e.test Update v1 2023-03-07 03:12:20 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} + Mar 7 03:12:20.590: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-3994 a9c3ddb3-7f16-48d3-8c2a-5bca5c2f62d7 51206 0 2023-03-07 03:12:20 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] [{e2e.test Update v1 2023-03-07 03:12:20 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} 
}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} + STEP: deleting configmap B and ensuring the correct watchers observe the notification 03/07/23 03:12:30.59 + Mar 7 03:12:30.596: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-3994 a9c3ddb3-7f16-48d3-8c2a-5bca5c2f62d7 51271 0 2023-03-07 03:12:20 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] [{e2e.test Update v1 2023-03-07 03:12:20 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} + Mar 7 03:12:30.596: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-3994 a9c3ddb3-7f16-48d3-8c2a-5bca5c2f62d7 51271 0 2023-03-07 03:12:20 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] [{e2e.test Update v1 2023-03-07 03:12:20 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} + [AfterEach] [sig-api-machinery] Watchers + test/e2e/framework/framework.go:187 + Mar 7 03:12:40.597: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + STEP: Destroying namespace "watch-3994" for this suite. 03/07/23 03:12:40.603 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSS +------------------------------ +[sig-network] HostPort + validates that there is no conflict between pods with same hostPort but different hostIP and protocol [LinuxOnly] [Conformance] + test/e2e/network/hostport.go:63 +[BeforeEach] [sig-network] HostPort + test/e2e/framework/framework.go:186 +STEP: Creating a kubernetes client 03/07/23 03:12:40.624 +Mar 7 03:12:40.624: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 +STEP: Building a namespace api object, basename hostport 03/07/23 03:12:40.625 +STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 03:12:40.657 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 03:12:40.659 +[BeforeEach] [sig-network] HostPort + test/e2e/network/hostport.go:49 +[It] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [LinuxOnly] [Conformance] + test/e2e/network/hostport.go:63 +STEP: Trying to create a pod(pod1) with hostport 54323 and hostIP 127.0.0.1 and expect scheduled 03/07/23 03:12:40.664 +Mar 7 03:12:40.671: INFO: Waiting up to 5m0s for pod "pod1" in namespace "hostport-4522" to be "running and ready" +Mar 7 03:12:40.673: INFO: Pod "pod1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.657511ms +Mar 7 03:12:40.673: INFO: The phase of Pod pod1 is Pending, waiting for it to be Running (with Ready = true) +Mar 7 03:12:42.676: INFO: Pod "pod1": Phase="Running", Reason="", readiness=true. Elapsed: 2.005837578s +Mar 7 03:12:42.676: INFO: The phase of Pod pod1 is Running (Ready = true) +Mar 7 03:12:42.676: INFO: Pod "pod1" satisfied condition "running and ready" +STEP: Trying to create another pod(pod2) with hostport 54323 but hostIP 192.168.1.101 on the node which pod1 resides and expect scheduled 03/07/23 03:12:42.676 +Mar 7 03:12:42.681: INFO: Waiting up to 5m0s for pod "pod2" in namespace "hostport-4522" to be "running and ready" +Mar 7 03:12:42.683: INFO: Pod "pod2": Phase="Pending", Reason="", readiness=false. 
Elapsed: 1.859366ms +Mar 7 03:12:42.683: INFO: The phase of Pod pod2 is Pending, waiting for it to be Running (with Ready = true) +Mar 7 03:12:44.686: INFO: Pod "pod2": Phase="Running", Reason="", readiness=false. Elapsed: 2.00455574s +Mar 7 03:12:44.686: INFO: The phase of Pod pod2 is Running (Ready = false) +Mar 7 03:12:46.688: INFO: Pod "pod2": Phase="Running", Reason="", readiness=true. Elapsed: 4.006415419s +Mar 7 03:12:46.688: INFO: The phase of Pod pod2 is Running (Ready = true) +Mar 7 03:12:46.688: INFO: Pod "pod2" satisfied condition "running and ready" +STEP: Trying to create a third pod(pod3) with hostport 54323, hostIP 192.168.1.101 but use UDP protocol on the node which pod2 resides 03/07/23 03:12:46.688 +Mar 7 03:12:46.711: INFO: Waiting up to 5m0s for pod "pod3" in namespace "hostport-4522" to be "running and ready" +Mar 7 03:12:46.714: INFO: Pod "pod3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.494636ms +Mar 7 03:12:46.714: INFO: The phase of Pod pod3 is Pending, waiting for it to be Running (with Ready = true) +Mar 7 03:12:48.718: INFO: Pod "pod3": Phase="Running", Reason="", readiness=true. Elapsed: 2.006496524s +Mar 7 03:12:48.718: INFO: The phase of Pod pod3 is Running (Ready = true) +Mar 7 03:12:48.718: INFO: Pod "pod3" satisfied condition "running and ready" +Mar 7 03:12:48.722: INFO: Waiting up to 5m0s for pod "e2e-host-exec" in namespace "hostport-4522" to be "running and ready" +Mar 7 03:12:48.724: INFO: Pod "e2e-host-exec": Phase="Pending", Reason="", readiness=false. Elapsed: 2.069141ms +Mar 7 03:12:48.724: INFO: The phase of Pod e2e-host-exec is Pending, waiting for it to be Running (with Ready = true) +Mar 7 03:12:50.728: INFO: Pod "e2e-host-exec": Phase="Running", Reason="", readiness=true. Elapsed: 2.005835715s +Mar 7 03:12:50.728: INFO: The phase of Pod e2e-host-exec is Running (Ready = true) +Mar 7 03:12:50.728: INFO: Pod "e2e-host-exec" satisfied condition "running and ready" +STEP: checking connectivity from pod e2e-host-exec to serverIP: 127.0.0.1, port: 54323 03/07/23 03:12:50.73 +Mar 7 03:12:50.730: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g --connect-timeout 5 --interface 192.168.1.101 http://127.0.0.1:54323/hostname] Namespace:hostport-4522 PodName:e2e-host-exec ContainerName:e2e-host-exec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} +Mar 7 03:12:50.731: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 +Mar 7 03:12:50.731: INFO: ExecWithOptions: Clientset creation +Mar 7 03:12:50.731: INFO: ExecWithOptions: execute(POST https://10.96.0.1:443/api/v1/namespaces/hostport-4522/pods/e2e-host-exec/exec?command=%2Fbin%2Fsh&command=-c&command=curl+-g+--connect-timeout+5+--interface+192.168.1.101+http%3A%2F%2F127.0.0.1%3A54323%2Fhostname&container=e2e-host-exec&container=e2e-host-exec&stderr=true&stdout=true) +STEP: checking connectivity from pod e2e-host-exec to serverIP: 192.168.1.101, port: 54323 03/07/23 03:12:50.797 +Mar 7 03:12:50.797: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g --connect-timeout 5 http://192.168.1.101:54323/hostname] Namespace:hostport-4522 PodName:e2e-host-exec ContainerName:e2e-host-exec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} +Mar 7 03:12:50.797: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 +Mar 7 03:12:50.798: INFO: ExecWithOptions: Clientset creation +Mar 7 03:12:50.798: INFO: ExecWithOptions: execute(POST 
https://10.96.0.1:443/api/v1/namespaces/hostport-4522/pods/e2e-host-exec/exec?command=%2Fbin%2Fsh&command=-c&command=curl+-g+--connect-timeout+5+http%3A%2F%2F192.168.1.101%3A54323%2Fhostname&container=e2e-host-exec&container=e2e-host-exec&stderr=true&stdout=true) +STEP: checking connectivity from pod e2e-host-exec to serverIP: 192.168.1.101, port: 54323 UDP 03/07/23 03:12:50.855 +Mar 7 03:12:50.855: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostname | nc -u -w 5 192.168.1.101 54323] Namespace:hostport-4522 PodName:e2e-host-exec ContainerName:e2e-host-exec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} +Mar 7 03:12:50.855: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 +Mar 7 03:12:50.855: INFO: ExecWithOptions: Clientset creation +Mar 7 03:12:50.855: INFO: ExecWithOptions: execute(POST https://10.96.0.1:443/api/v1/namespaces/hostport-4522/pods/e2e-host-exec/exec?command=%2Fbin%2Fsh&command=-c&command=echo+hostname+%7C+nc+-u+-w+5+192.168.1.101+54323&container=e2e-host-exec&container=e2e-host-exec&stderr=true&stdout=true) +[AfterEach] [sig-network] HostPort + test/e2e/framework/framework.go:187 +Mar 7 03:12:55.917: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "hostport-4522" for this suite. 03/07/23 03:12:55.92 +{"msg":"PASSED [sig-network] HostPort validates that there is no conflict between pods with same hostPort but different hostIP and protocol [LinuxOnly] [Conformance]","completed":94,"skipped":1695,"failed":0} +------------------------------ +• [SLOW TEST] [15.301 seconds] +[sig-network] HostPort +test/e2e/network/common/framework.go:23 + validates that there is no conflict between pods with same hostPort but different hostIP and protocol [LinuxOnly] [Conformance] + test/e2e/network/hostport.go:63 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-network] HostPort + test/e2e/framework/framework.go:186 + STEP: Creating a kubernetes client 03/07/23 03:12:40.624 + Mar 7 03:12:40.624: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 + STEP: Building a namespace api object, basename hostport 03/07/23 03:12:40.625 + STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 03:12:40.657 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 03:12:40.659 + [BeforeEach] [sig-network] HostPort + test/e2e/network/hostport.go:49 + [It] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [LinuxOnly] [Conformance] + test/e2e/network/hostport.go:63 + STEP: Trying to create a pod(pod1) with hostport 54323 and hostIP 127.0.0.1 and expect scheduled 03/07/23 03:12:40.664 + Mar 7 03:12:40.671: INFO: Waiting up to 5m0s for pod "pod1" in namespace "hostport-4522" to be "running and ready" + Mar 7 03:12:40.673: INFO: Pod "pod1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.657511ms + Mar 7 03:12:40.673: INFO: The phase of Pod pod1 is Pending, waiting for it to be Running (with Ready = true) + Mar 7 03:12:42.676: INFO: Pod "pod1": Phase="Running", Reason="", readiness=true. 
Elapsed: 2.005837578s + Mar 7 03:12:42.676: INFO: The phase of Pod pod1 is Running (Ready = true) + Mar 7 03:12:42.676: INFO: Pod "pod1" satisfied condition "running and ready" + STEP: Trying to create another pod(pod2) with hostport 54323 but hostIP 192.168.1.101 on the node which pod1 resides and expect scheduled 03/07/23 03:12:42.676 + Mar 7 03:12:42.681: INFO: Waiting up to 5m0s for pod "pod2" in namespace "hostport-4522" to be "running and ready" + Mar 7 03:12:42.683: INFO: Pod "pod2": Phase="Pending", Reason="", readiness=false. Elapsed: 1.859366ms + Mar 7 03:12:42.683: INFO: The phase of Pod pod2 is Pending, waiting for it to be Running (with Ready = true) + Mar 7 03:12:44.686: INFO: Pod "pod2": Phase="Running", Reason="", readiness=false. Elapsed: 2.00455574s + Mar 7 03:12:44.686: INFO: The phase of Pod pod2 is Running (Ready = false) + Mar 7 03:12:46.688: INFO: Pod "pod2": Phase="Running", Reason="", readiness=true. Elapsed: 4.006415419s + Mar 7 03:12:46.688: INFO: The phase of Pod pod2 is Running (Ready = true) + Mar 7 03:12:46.688: INFO: Pod "pod2" satisfied condition "running and ready" + STEP: Trying to create a third pod(pod3) with hostport 54323, hostIP 192.168.1.101 but use UDP protocol on the node which pod2 resides 03/07/23 03:12:46.688 + Mar 7 03:12:46.711: INFO: Waiting up to 5m0s for pod "pod3" in namespace "hostport-4522" to be "running and ready" + Mar 7 03:12:46.714: INFO: Pod "pod3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.494636ms + Mar 7 03:12:46.714: INFO: The phase of Pod pod3 is Pending, waiting for it to be Running (with Ready = true) + Mar 7 03:12:48.718: INFO: Pod "pod3": Phase="Running", Reason="", readiness=true. Elapsed: 2.006496524s + Mar 7 03:12:48.718: INFO: The phase of Pod pod3 is Running (Ready = true) + Mar 7 03:12:48.718: INFO: Pod "pod3" satisfied condition "running and ready" + Mar 7 03:12:48.722: INFO: Waiting up to 5m0s for pod "e2e-host-exec" in namespace "hostport-4522" to be "running and ready" + Mar 7 03:12:48.724: INFO: Pod "e2e-host-exec": Phase="Pending", Reason="", readiness=false. Elapsed: 2.069141ms + Mar 7 03:12:48.724: INFO: The phase of Pod e2e-host-exec is Pending, waiting for it to be Running (with Ready = true) + Mar 7 03:12:50.728: INFO: Pod "e2e-host-exec": Phase="Running", Reason="", readiness=true. 
Elapsed: 2.005835715s + Mar 7 03:12:50.728: INFO: The phase of Pod e2e-host-exec is Running (Ready = true) + Mar 7 03:12:50.728: INFO: Pod "e2e-host-exec" satisfied condition "running and ready" + STEP: checking connectivity from pod e2e-host-exec to serverIP: 127.0.0.1, port: 54323 03/07/23 03:12:50.73 + Mar 7 03:12:50.730: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g --connect-timeout 5 --interface 192.168.1.101 http://127.0.0.1:54323/hostname] Namespace:hostport-4522 PodName:e2e-host-exec ContainerName:e2e-host-exec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} + Mar 7 03:12:50.731: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 + Mar 7 03:12:50.731: INFO: ExecWithOptions: Clientset creation + Mar 7 03:12:50.731: INFO: ExecWithOptions: execute(POST https://10.96.0.1:443/api/v1/namespaces/hostport-4522/pods/e2e-host-exec/exec?command=%2Fbin%2Fsh&command=-c&command=curl+-g+--connect-timeout+5+--interface+192.168.1.101+http%3A%2F%2F127.0.0.1%3A54323%2Fhostname&container=e2e-host-exec&container=e2e-host-exec&stderr=true&stdout=true) + STEP: checking connectivity from pod e2e-host-exec to serverIP: 192.168.1.101, port: 54323 03/07/23 03:12:50.797 + Mar 7 03:12:50.797: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g --connect-timeout 5 http://192.168.1.101:54323/hostname] Namespace:hostport-4522 PodName:e2e-host-exec ContainerName:e2e-host-exec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} + Mar 7 03:12:50.797: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 + Mar 7 03:12:50.798: INFO: ExecWithOptions: Clientset creation + Mar 7 03:12:50.798: INFO: ExecWithOptions: execute(POST https://10.96.0.1:443/api/v1/namespaces/hostport-4522/pods/e2e-host-exec/exec?command=%2Fbin%2Fsh&command=-c&command=curl+-g+--connect-timeout+5+http%3A%2F%2F192.168.1.101%3A54323%2Fhostname&container=e2e-host-exec&container=e2e-host-exec&stderr=true&stdout=true) + STEP: checking connectivity from pod e2e-host-exec to serverIP: 192.168.1.101, port: 54323 UDP 03/07/23 03:12:50.855 + Mar 7 03:12:50.855: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostname | nc -u -w 5 192.168.1.101 54323] Namespace:hostport-4522 PodName:e2e-host-exec ContainerName:e2e-host-exec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} + Mar 7 03:12:50.855: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 + Mar 7 03:12:50.855: INFO: ExecWithOptions: Clientset creation + Mar 7 03:12:50.855: INFO: ExecWithOptions: execute(POST https://10.96.0.1:443/api/v1/namespaces/hostport-4522/pods/e2e-host-exec/exec?command=%2Fbin%2Fsh&command=-c&command=echo+hostname+%7C+nc+-u+-w+5+192.168.1.101+54323&container=e2e-host-exec&container=e2e-host-exec&stderr=true&stdout=true) + [AfterEach] [sig-network] HostPort + test/e2e/framework/framework.go:187 + Mar 7 03:12:55.917: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + STEP: Destroying namespace "hostport-4522" for this suite. 
03/07/23 03:12:55.92 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSS +------------------------------ +[sig-node] Variable Expansion + should verify that a failing subpath expansion can be modified during the lifecycle of a container [Slow] [Conformance] + test/e2e/common/node/expansion.go:224 +[BeforeEach] [sig-node] Variable Expansion + test/e2e/framework/framework.go:186 +STEP: Creating a kubernetes client 03/07/23 03:12:55.925 +Mar 7 03:12:55.925: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 +STEP: Building a namespace api object, basename var-expansion 03/07/23 03:12:55.926 +STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 03:12:55.938 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 03:12:55.939 +[It] should verify that a failing subpath expansion can be modified during the lifecycle of a container [Slow] [Conformance] + test/e2e/common/node/expansion.go:224 +STEP: creating the pod with failed condition 03/07/23 03:12:55.941 +Mar 7 03:12:55.954: INFO: Waiting up to 2m0s for pod "var-expansion-10ecde5a-34f3-4bb4-87bf-758bb51f5b32" in namespace "var-expansion-860" to be "running" +Mar 7 03:12:55.957: INFO: Pod "var-expansion-10ecde5a-34f3-4bb4-87bf-758bb51f5b32": Phase="Pending", Reason="", readiness=false. Elapsed: 2.682776ms +Mar 7 03:12:57.983: INFO: Pod "var-expansion-10ecde5a-34f3-4bb4-87bf-758bb51f5b32": Phase="Pending", Reason="", readiness=false. Elapsed: 2.028589723s +Mar 7 03:12:59.960: INFO: Pod "var-expansion-10ecde5a-34f3-4bb4-87bf-758bb51f5b32": Phase="Pending", Reason="", readiness=false. Elapsed: 4.005883665s +Mar 7 03:13:01.961: INFO: Pod "var-expansion-10ecde5a-34f3-4bb4-87bf-758bb51f5b32": Phase="Pending", Reason="", readiness=false. Elapsed: 6.00701734s +Mar 7 03:13:03.960: INFO: Pod "var-expansion-10ecde5a-34f3-4bb4-87bf-758bb51f5b32": Phase="Pending", Reason="", readiness=false. Elapsed: 8.006410198s +Mar 7 03:13:05.960: INFO: Pod "var-expansion-10ecde5a-34f3-4bb4-87bf-758bb51f5b32": Phase="Pending", Reason="", readiness=false. Elapsed: 10.006201767s +Mar 7 03:13:07.960: INFO: Pod "var-expansion-10ecde5a-34f3-4bb4-87bf-758bb51f5b32": Phase="Pending", Reason="", readiness=false. Elapsed: 12.00579057s +Mar 7 03:13:09.960: INFO: Pod "var-expansion-10ecde5a-34f3-4bb4-87bf-758bb51f5b32": Phase="Pending", Reason="", readiness=false. Elapsed: 14.005910973s +Mar 7 03:13:11.960: INFO: Pod "var-expansion-10ecde5a-34f3-4bb4-87bf-758bb51f5b32": Phase="Pending", Reason="", readiness=false. Elapsed: 16.005887774s +Mar 7 03:13:13.960: INFO: Pod "var-expansion-10ecde5a-34f3-4bb4-87bf-758bb51f5b32": Phase="Pending", Reason="", readiness=false. Elapsed: 18.006129477s +Mar 7 03:13:15.960: INFO: Pod "var-expansion-10ecde5a-34f3-4bb4-87bf-758bb51f5b32": Phase="Pending", Reason="", readiness=false. Elapsed: 20.006225818s +Mar 7 03:13:17.960: INFO: Pod "var-expansion-10ecde5a-34f3-4bb4-87bf-758bb51f5b32": Phase="Pending", Reason="", readiness=false. Elapsed: 22.006101928s +Mar 7 03:13:19.960: INFO: Pod "var-expansion-10ecde5a-34f3-4bb4-87bf-758bb51f5b32": Phase="Pending", Reason="", readiness=false. Elapsed: 24.006280309s +Mar 7 03:13:21.960: INFO: Pod "var-expansion-10ecde5a-34f3-4bb4-87bf-758bb51f5b32": Phase="Pending", Reason="", readiness=false. Elapsed: 26.006197281s +Mar 7 03:13:23.968: INFO: Pod "var-expansion-10ecde5a-34f3-4bb4-87bf-758bb51f5b32": Phase="Pending", Reason="", readiness=false. 
Elapsed: 28.013755407s +Mar 7 03:13:25.962: INFO: Pod "var-expansion-10ecde5a-34f3-4bb4-87bf-758bb51f5b32": Phase="Pending", Reason="", readiness=false. Elapsed: 30.007812748s +Mar 7 03:13:27.961: INFO: Pod "var-expansion-10ecde5a-34f3-4bb4-87bf-758bb51f5b32": Phase="Pending", Reason="", readiness=false. Elapsed: 32.007272404s +Mar 7 03:13:29.963: INFO: Pod "var-expansion-10ecde5a-34f3-4bb4-87bf-758bb51f5b32": Phase="Pending", Reason="", readiness=false. Elapsed: 34.009237046s +Mar 7 03:13:31.960: INFO: Pod "var-expansion-10ecde5a-34f3-4bb4-87bf-758bb51f5b32": Phase="Pending", Reason="", readiness=false. Elapsed: 36.006017525s +Mar 7 03:13:33.960: INFO: Pod "var-expansion-10ecde5a-34f3-4bb4-87bf-758bb51f5b32": Phase="Pending", Reason="", readiness=false. Elapsed: 38.006028881s +Mar 7 03:13:35.961: INFO: Pod "var-expansion-10ecde5a-34f3-4bb4-87bf-758bb51f5b32": Phase="Pending", Reason="", readiness=false. Elapsed: 40.006639113s +Mar 7 03:13:37.970: INFO: Pod "var-expansion-10ecde5a-34f3-4bb4-87bf-758bb51f5b32": Phase="Pending", Reason="", readiness=false. Elapsed: 42.016388847s +Mar 7 03:13:39.965: INFO: Pod "var-expansion-10ecde5a-34f3-4bb4-87bf-758bb51f5b32": Phase="Pending", Reason="", readiness=false. Elapsed: 44.010824146s +Mar 7 03:13:41.962: INFO: Pod "var-expansion-10ecde5a-34f3-4bb4-87bf-758bb51f5b32": Phase="Pending", Reason="", readiness=false. Elapsed: 46.007623959s +Mar 7 03:13:43.969: INFO: Pod "var-expansion-10ecde5a-34f3-4bb4-87bf-758bb51f5b32": Phase="Pending", Reason="", readiness=false. Elapsed: 48.014441864s +Mar 7 03:13:45.962: INFO: Pod "var-expansion-10ecde5a-34f3-4bb4-87bf-758bb51f5b32": Phase="Pending", Reason="", readiness=false. Elapsed: 50.007619884s +Mar 7 03:13:47.962: INFO: Pod "var-expansion-10ecde5a-34f3-4bb4-87bf-758bb51f5b32": Phase="Pending", Reason="", readiness=false. Elapsed: 52.007673276s +Mar 7 03:13:49.961: INFO: Pod "var-expansion-10ecde5a-34f3-4bb4-87bf-758bb51f5b32": Phase="Pending", Reason="", readiness=false. Elapsed: 54.007221417s +Mar 7 03:13:51.960: INFO: Pod "var-expansion-10ecde5a-34f3-4bb4-87bf-758bb51f5b32": Phase="Pending", Reason="", readiness=false. Elapsed: 56.006037109s +Mar 7 03:13:53.961: INFO: Pod "var-expansion-10ecde5a-34f3-4bb4-87bf-758bb51f5b32": Phase="Pending", Reason="", readiness=false. Elapsed: 58.006785678s +Mar 7 03:13:55.962: INFO: Pod "var-expansion-10ecde5a-34f3-4bb4-87bf-758bb51f5b32": Phase="Pending", Reason="", readiness=false. Elapsed: 1m0.007824363s +Mar 7 03:13:57.961: INFO: Pod "var-expansion-10ecde5a-34f3-4bb4-87bf-758bb51f5b32": Phase="Pending", Reason="", readiness=false. Elapsed: 1m2.006815143s +Mar 7 03:13:59.960: INFO: Pod "var-expansion-10ecde5a-34f3-4bb4-87bf-758bb51f5b32": Phase="Pending", Reason="", readiness=false. Elapsed: 1m4.005579184s +Mar 7 03:14:01.966: INFO: Pod "var-expansion-10ecde5a-34f3-4bb4-87bf-758bb51f5b32": Phase="Pending", Reason="", readiness=false. Elapsed: 1m6.011999913s +Mar 7 03:14:03.962: INFO: Pod "var-expansion-10ecde5a-34f3-4bb4-87bf-758bb51f5b32": Phase="Pending", Reason="", readiness=false. Elapsed: 1m8.007778887s +Mar 7 03:14:05.960: INFO: Pod "var-expansion-10ecde5a-34f3-4bb4-87bf-758bb51f5b32": Phase="Pending", Reason="", readiness=false. Elapsed: 1m10.006013794s +Mar 7 03:14:07.962: INFO: Pod "var-expansion-10ecde5a-34f3-4bb4-87bf-758bb51f5b32": Phase="Pending", Reason="", readiness=false. Elapsed: 1m12.008132939s +Mar 7 03:14:09.960: INFO: Pod "var-expansion-10ecde5a-34f3-4bb4-87bf-758bb51f5b32": Phase="Pending", Reason="", readiness=false. 
Elapsed: 1m14.006003575s +Mar 7 03:14:11.961: INFO: Pod "var-expansion-10ecde5a-34f3-4bb4-87bf-758bb51f5b32": Phase="Pending", Reason="", readiness=false. Elapsed: 1m16.006669793s +Mar 7 03:14:13.960: INFO: Pod "var-expansion-10ecde5a-34f3-4bb4-87bf-758bb51f5b32": Phase="Pending", Reason="", readiness=false. Elapsed: 1m18.005841186s +Mar 7 03:14:15.960: INFO: Pod "var-expansion-10ecde5a-34f3-4bb4-87bf-758bb51f5b32": Phase="Pending", Reason="", readiness=false. Elapsed: 1m20.006127968s +Mar 7 03:14:17.960: INFO: Pod "var-expansion-10ecde5a-34f3-4bb4-87bf-758bb51f5b32": Phase="Pending", Reason="", readiness=false. Elapsed: 1m22.006371549s +Mar 7 03:14:19.960: INFO: Pod "var-expansion-10ecde5a-34f3-4bb4-87bf-758bb51f5b32": Phase="Pending", Reason="", readiness=false. Elapsed: 1m24.006080827s +Mar 7 03:14:21.960: INFO: Pod "var-expansion-10ecde5a-34f3-4bb4-87bf-758bb51f5b32": Phase="Pending", Reason="", readiness=false. Elapsed: 1m26.006031934s +Mar 7 03:14:23.960: INFO: Pod "var-expansion-10ecde5a-34f3-4bb4-87bf-758bb51f5b32": Phase="Pending", Reason="", readiness=false. Elapsed: 1m28.006398176s +Mar 7 03:14:25.960: INFO: Pod "var-expansion-10ecde5a-34f3-4bb4-87bf-758bb51f5b32": Phase="Pending", Reason="", readiness=false. Elapsed: 1m30.005965321s +Mar 7 03:14:27.961: INFO: Pod "var-expansion-10ecde5a-34f3-4bb4-87bf-758bb51f5b32": Phase="Pending", Reason="", readiness=false. Elapsed: 1m32.006774787s +Mar 7 03:14:29.960: INFO: Pod "var-expansion-10ecde5a-34f3-4bb4-87bf-758bb51f5b32": Phase="Pending", Reason="", readiness=false. Elapsed: 1m34.005926108s +Mar 7 03:14:31.961: INFO: Pod "var-expansion-10ecde5a-34f3-4bb4-87bf-758bb51f5b32": Phase="Pending", Reason="", readiness=false. Elapsed: 1m36.006885757s +Mar 7 03:14:33.961: INFO: Pod "var-expansion-10ecde5a-34f3-4bb4-87bf-758bb51f5b32": Phase="Pending", Reason="", readiness=false. Elapsed: 1m38.006487465s +Mar 7 03:14:35.961: INFO: Pod "var-expansion-10ecde5a-34f3-4bb4-87bf-758bb51f5b32": Phase="Pending", Reason="", readiness=false. Elapsed: 1m40.007345835s +Mar 7 03:14:37.960: INFO: Pod "var-expansion-10ecde5a-34f3-4bb4-87bf-758bb51f5b32": Phase="Pending", Reason="", readiness=false. Elapsed: 1m42.005843871s +Mar 7 03:14:39.960: INFO: Pod "var-expansion-10ecde5a-34f3-4bb4-87bf-758bb51f5b32": Phase="Pending", Reason="", readiness=false. Elapsed: 1m44.006411805s +Mar 7 03:14:41.961: INFO: Pod "var-expansion-10ecde5a-34f3-4bb4-87bf-758bb51f5b32": Phase="Pending", Reason="", readiness=false. Elapsed: 1m46.007181691s +Mar 7 03:14:43.970: INFO: Pod "var-expansion-10ecde5a-34f3-4bb4-87bf-758bb51f5b32": Phase="Pending", Reason="", readiness=false. Elapsed: 1m48.016029169s +Mar 7 03:14:45.961: INFO: Pod "var-expansion-10ecde5a-34f3-4bb4-87bf-758bb51f5b32": Phase="Pending", Reason="", readiness=false. Elapsed: 1m50.007430753s +Mar 7 03:14:47.961: INFO: Pod "var-expansion-10ecde5a-34f3-4bb4-87bf-758bb51f5b32": Phase="Pending", Reason="", readiness=false. Elapsed: 1m52.007150128s +Mar 7 03:14:49.960: INFO: Pod "var-expansion-10ecde5a-34f3-4bb4-87bf-758bb51f5b32": Phase="Pending", Reason="", readiness=false. Elapsed: 1m54.006372353s +Mar 7 03:14:51.960: INFO: Pod "var-expansion-10ecde5a-34f3-4bb4-87bf-758bb51f5b32": Phase="Pending", Reason="", readiness=false. Elapsed: 1m56.00614224s +Mar 7 03:14:53.960: INFO: Pod "var-expansion-10ecde5a-34f3-4bb4-87bf-758bb51f5b32": Phase="Pending", Reason="", readiness=false. 
Elapsed: 1m58.006009267s +Mar 7 03:14:55.959: INFO: Pod "var-expansion-10ecde5a-34f3-4bb4-87bf-758bb51f5b32": Phase="Pending", Reason="", readiness=false. Elapsed: 2m0.005208128s +Mar 7 03:14:55.961: INFO: Pod "var-expansion-10ecde5a-34f3-4bb4-87bf-758bb51f5b32": Phase="Pending", Reason="", readiness=false. Elapsed: 2m0.00734769s +STEP: updating the pod 03/07/23 03:14:55.961 +Mar 7 03:14:56.487: INFO: Successfully updated pod "var-expansion-10ecde5a-34f3-4bb4-87bf-758bb51f5b32" +STEP: waiting for pod running 03/07/23 03:14:56.487 +Mar 7 03:14:56.488: INFO: Waiting up to 2m0s for pod "var-expansion-10ecde5a-34f3-4bb4-87bf-758bb51f5b32" in namespace "var-expansion-860" to be "running" +Mar 7 03:14:56.493: INFO: Pod "var-expansion-10ecde5a-34f3-4bb4-87bf-758bb51f5b32": Phase="Pending", Reason="", readiness=false. Elapsed: 5.872572ms +Mar 7 03:14:58.497: INFO: Pod "var-expansion-10ecde5a-34f3-4bb4-87bf-758bb51f5b32": Phase="Running", Reason="", readiness=true. Elapsed: 2.009887296s +Mar 7 03:14:58.497: INFO: Pod "var-expansion-10ecde5a-34f3-4bb4-87bf-758bb51f5b32" satisfied condition "running" +STEP: deleting the pod gracefully 03/07/23 03:14:58.497 +Mar 7 03:14:58.498: INFO: Deleting pod "var-expansion-10ecde5a-34f3-4bb4-87bf-758bb51f5b32" in namespace "var-expansion-860" +Mar 7 03:14:58.540: INFO: Wait up to 5m0s for pod "var-expansion-10ecde5a-34f3-4bb4-87bf-758bb51f5b32" to be fully deleted +[AfterEach] [sig-node] Variable Expansion + test/e2e/framework/framework.go:187 +Mar 7 03:15:30.550: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "var-expansion-860" for this suite. 03/07/23 03:15:30.553 +{"msg":"PASSED [sig-node] Variable Expansion should verify that a failing subpath expansion can be modified during the lifecycle of a container [Slow] [Conformance]","completed":95,"skipped":1703,"failed":0} +------------------------------ +• [SLOW TEST] [154.634 seconds] +[sig-node] Variable Expansion +test/e2e/common/node/framework.go:23 + should verify that a failing subpath expansion can be modified during the lifecycle of a container [Slow] [Conformance] + test/e2e/common/node/expansion.go:224 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-node] Variable Expansion + test/e2e/framework/framework.go:186 + STEP: Creating a kubernetes client 03/07/23 03:12:55.925 + Mar 7 03:12:55.925: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 + STEP: Building a namespace api object, basename var-expansion 03/07/23 03:12:55.926 + STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 03:12:55.938 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 03:12:55.939 + [It] should verify that a failing subpath expansion can be modified during the lifecycle of a container [Slow] [Conformance] + test/e2e/common/node/expansion.go:224 + STEP: creating the pod with failed condition 03/07/23 03:12:55.941 + Mar 7 03:12:55.954: INFO: Waiting up to 2m0s for pod "var-expansion-10ecde5a-34f3-4bb4-87bf-758bb51f5b32" in namespace "var-expansion-860" to be "running" + Mar 7 03:12:55.957: INFO: Pod "var-expansion-10ecde5a-34f3-4bb4-87bf-758bb51f5b32": Phase="Pending", Reason="", readiness=false. Elapsed: 2.682776ms + Mar 7 03:12:57.983: INFO: Pod "var-expansion-10ecde5a-34f3-4bb4-87bf-758bb51f5b32": Phase="Pending", Reason="", readiness=false. Elapsed: 2.028589723s + Mar 7 03:12:59.960: INFO: Pod "var-expansion-10ecde5a-34f3-4bb4-87bf-758bb51f5b32": Phase="Pending", Reason="", readiness=false. 
Elapsed: 4.005883665s + Mar 7 03:13:01.961: INFO: Pod "var-expansion-10ecde5a-34f3-4bb4-87bf-758bb51f5b32": Phase="Pending", Reason="", readiness=false. Elapsed: 6.00701734s + Mar 7 03:13:03.960: INFO: Pod "var-expansion-10ecde5a-34f3-4bb4-87bf-758bb51f5b32": Phase="Pending", Reason="", readiness=false. Elapsed: 8.006410198s + Mar 7 03:13:05.960: INFO: Pod "var-expansion-10ecde5a-34f3-4bb4-87bf-758bb51f5b32": Phase="Pending", Reason="", readiness=false. Elapsed: 10.006201767s + Mar 7 03:13:07.960: INFO: Pod "var-expansion-10ecde5a-34f3-4bb4-87bf-758bb51f5b32": Phase="Pending", Reason="", readiness=false. Elapsed: 12.00579057s + Mar 7 03:13:09.960: INFO: Pod "var-expansion-10ecde5a-34f3-4bb4-87bf-758bb51f5b32": Phase="Pending", Reason="", readiness=false. Elapsed: 14.005910973s + Mar 7 03:13:11.960: INFO: Pod "var-expansion-10ecde5a-34f3-4bb4-87bf-758bb51f5b32": Phase="Pending", Reason="", readiness=false. Elapsed: 16.005887774s + Mar 7 03:13:13.960: INFO: Pod "var-expansion-10ecde5a-34f3-4bb4-87bf-758bb51f5b32": Phase="Pending", Reason="", readiness=false. Elapsed: 18.006129477s + Mar 7 03:13:15.960: INFO: Pod "var-expansion-10ecde5a-34f3-4bb4-87bf-758bb51f5b32": Phase="Pending", Reason="", readiness=false. Elapsed: 20.006225818s + Mar 7 03:13:17.960: INFO: Pod "var-expansion-10ecde5a-34f3-4bb4-87bf-758bb51f5b32": Phase="Pending", Reason="", readiness=false. Elapsed: 22.006101928s + Mar 7 03:13:19.960: INFO: Pod "var-expansion-10ecde5a-34f3-4bb4-87bf-758bb51f5b32": Phase="Pending", Reason="", readiness=false. Elapsed: 24.006280309s + Mar 7 03:13:21.960: INFO: Pod "var-expansion-10ecde5a-34f3-4bb4-87bf-758bb51f5b32": Phase="Pending", Reason="", readiness=false. Elapsed: 26.006197281s + Mar 7 03:13:23.968: INFO: Pod "var-expansion-10ecde5a-34f3-4bb4-87bf-758bb51f5b32": Phase="Pending", Reason="", readiness=false. Elapsed: 28.013755407s + Mar 7 03:13:25.962: INFO: Pod "var-expansion-10ecde5a-34f3-4bb4-87bf-758bb51f5b32": Phase="Pending", Reason="", readiness=false. Elapsed: 30.007812748s + Mar 7 03:13:27.961: INFO: Pod "var-expansion-10ecde5a-34f3-4bb4-87bf-758bb51f5b32": Phase="Pending", Reason="", readiness=false. Elapsed: 32.007272404s + Mar 7 03:13:29.963: INFO: Pod "var-expansion-10ecde5a-34f3-4bb4-87bf-758bb51f5b32": Phase="Pending", Reason="", readiness=false. Elapsed: 34.009237046s + Mar 7 03:13:31.960: INFO: Pod "var-expansion-10ecde5a-34f3-4bb4-87bf-758bb51f5b32": Phase="Pending", Reason="", readiness=false. Elapsed: 36.006017525s + Mar 7 03:13:33.960: INFO: Pod "var-expansion-10ecde5a-34f3-4bb4-87bf-758bb51f5b32": Phase="Pending", Reason="", readiness=false. Elapsed: 38.006028881s + Mar 7 03:13:35.961: INFO: Pod "var-expansion-10ecde5a-34f3-4bb4-87bf-758bb51f5b32": Phase="Pending", Reason="", readiness=false. Elapsed: 40.006639113s + Mar 7 03:13:37.970: INFO: Pod "var-expansion-10ecde5a-34f3-4bb4-87bf-758bb51f5b32": Phase="Pending", Reason="", readiness=false. Elapsed: 42.016388847s + Mar 7 03:13:39.965: INFO: Pod "var-expansion-10ecde5a-34f3-4bb4-87bf-758bb51f5b32": Phase="Pending", Reason="", readiness=false. Elapsed: 44.010824146s + Mar 7 03:13:41.962: INFO: Pod "var-expansion-10ecde5a-34f3-4bb4-87bf-758bb51f5b32": Phase="Pending", Reason="", readiness=false. Elapsed: 46.007623959s + Mar 7 03:13:43.969: INFO: Pod "var-expansion-10ecde5a-34f3-4bb4-87bf-758bb51f5b32": Phase="Pending", Reason="", readiness=false. Elapsed: 48.014441864s + Mar 7 03:13:45.962: INFO: Pod "var-expansion-10ecde5a-34f3-4bb4-87bf-758bb51f5b32": Phase="Pending", Reason="", readiness=false. 
Elapsed: 50.007619884s + Mar 7 03:13:47.962: INFO: Pod "var-expansion-10ecde5a-34f3-4bb4-87bf-758bb51f5b32": Phase="Pending", Reason="", readiness=false. Elapsed: 52.007673276s + Mar 7 03:13:49.961: INFO: Pod "var-expansion-10ecde5a-34f3-4bb4-87bf-758bb51f5b32": Phase="Pending", Reason="", readiness=false. Elapsed: 54.007221417s + Mar 7 03:13:51.960: INFO: Pod "var-expansion-10ecde5a-34f3-4bb4-87bf-758bb51f5b32": Phase="Pending", Reason="", readiness=false. Elapsed: 56.006037109s + Mar 7 03:13:53.961: INFO: Pod "var-expansion-10ecde5a-34f3-4bb4-87bf-758bb51f5b32": Phase="Pending", Reason="", readiness=false. Elapsed: 58.006785678s + Mar 7 03:13:55.962: INFO: Pod "var-expansion-10ecde5a-34f3-4bb4-87bf-758bb51f5b32": Phase="Pending", Reason="", readiness=false. Elapsed: 1m0.007824363s + Mar 7 03:13:57.961: INFO: Pod "var-expansion-10ecde5a-34f3-4bb4-87bf-758bb51f5b32": Phase="Pending", Reason="", readiness=false. Elapsed: 1m2.006815143s + Mar 7 03:13:59.960: INFO: Pod "var-expansion-10ecde5a-34f3-4bb4-87bf-758bb51f5b32": Phase="Pending", Reason="", readiness=false. Elapsed: 1m4.005579184s + Mar 7 03:14:01.966: INFO: Pod "var-expansion-10ecde5a-34f3-4bb4-87bf-758bb51f5b32": Phase="Pending", Reason="", readiness=false. Elapsed: 1m6.011999913s + Mar 7 03:14:03.962: INFO: Pod "var-expansion-10ecde5a-34f3-4bb4-87bf-758bb51f5b32": Phase="Pending", Reason="", readiness=false. Elapsed: 1m8.007778887s + Mar 7 03:14:05.960: INFO: Pod "var-expansion-10ecde5a-34f3-4bb4-87bf-758bb51f5b32": Phase="Pending", Reason="", readiness=false. Elapsed: 1m10.006013794s + Mar 7 03:14:07.962: INFO: Pod "var-expansion-10ecde5a-34f3-4bb4-87bf-758bb51f5b32": Phase="Pending", Reason="", readiness=false. Elapsed: 1m12.008132939s + Mar 7 03:14:09.960: INFO: Pod "var-expansion-10ecde5a-34f3-4bb4-87bf-758bb51f5b32": Phase="Pending", Reason="", readiness=false. Elapsed: 1m14.006003575s + Mar 7 03:14:11.961: INFO: Pod "var-expansion-10ecde5a-34f3-4bb4-87bf-758bb51f5b32": Phase="Pending", Reason="", readiness=false. Elapsed: 1m16.006669793s + Mar 7 03:14:13.960: INFO: Pod "var-expansion-10ecde5a-34f3-4bb4-87bf-758bb51f5b32": Phase="Pending", Reason="", readiness=false. Elapsed: 1m18.005841186s + Mar 7 03:14:15.960: INFO: Pod "var-expansion-10ecde5a-34f3-4bb4-87bf-758bb51f5b32": Phase="Pending", Reason="", readiness=false. Elapsed: 1m20.006127968s + Mar 7 03:14:17.960: INFO: Pod "var-expansion-10ecde5a-34f3-4bb4-87bf-758bb51f5b32": Phase="Pending", Reason="", readiness=false. Elapsed: 1m22.006371549s + Mar 7 03:14:19.960: INFO: Pod "var-expansion-10ecde5a-34f3-4bb4-87bf-758bb51f5b32": Phase="Pending", Reason="", readiness=false. Elapsed: 1m24.006080827s + Mar 7 03:14:21.960: INFO: Pod "var-expansion-10ecde5a-34f3-4bb4-87bf-758bb51f5b32": Phase="Pending", Reason="", readiness=false. Elapsed: 1m26.006031934s + Mar 7 03:14:23.960: INFO: Pod "var-expansion-10ecde5a-34f3-4bb4-87bf-758bb51f5b32": Phase="Pending", Reason="", readiness=false. Elapsed: 1m28.006398176s + Mar 7 03:14:25.960: INFO: Pod "var-expansion-10ecde5a-34f3-4bb4-87bf-758bb51f5b32": Phase="Pending", Reason="", readiness=false. Elapsed: 1m30.005965321s + Mar 7 03:14:27.961: INFO: Pod "var-expansion-10ecde5a-34f3-4bb4-87bf-758bb51f5b32": Phase="Pending", Reason="", readiness=false. Elapsed: 1m32.006774787s + Mar 7 03:14:29.960: INFO: Pod "var-expansion-10ecde5a-34f3-4bb4-87bf-758bb51f5b32": Phase="Pending", Reason="", readiness=false. 
Elapsed: 1m34.005926108s + Mar 7 03:14:31.961: INFO: Pod "var-expansion-10ecde5a-34f3-4bb4-87bf-758bb51f5b32": Phase="Pending", Reason="", readiness=false. Elapsed: 1m36.006885757s + Mar 7 03:14:33.961: INFO: Pod "var-expansion-10ecde5a-34f3-4bb4-87bf-758bb51f5b32": Phase="Pending", Reason="", readiness=false. Elapsed: 1m38.006487465s + Mar 7 03:14:35.961: INFO: Pod "var-expansion-10ecde5a-34f3-4bb4-87bf-758bb51f5b32": Phase="Pending", Reason="", readiness=false. Elapsed: 1m40.007345835s + Mar 7 03:14:37.960: INFO: Pod "var-expansion-10ecde5a-34f3-4bb4-87bf-758bb51f5b32": Phase="Pending", Reason="", readiness=false. Elapsed: 1m42.005843871s + Mar 7 03:14:39.960: INFO: Pod "var-expansion-10ecde5a-34f3-4bb4-87bf-758bb51f5b32": Phase="Pending", Reason="", readiness=false. Elapsed: 1m44.006411805s + Mar 7 03:14:41.961: INFO: Pod "var-expansion-10ecde5a-34f3-4bb4-87bf-758bb51f5b32": Phase="Pending", Reason="", readiness=false. Elapsed: 1m46.007181691s + Mar 7 03:14:43.970: INFO: Pod "var-expansion-10ecde5a-34f3-4bb4-87bf-758bb51f5b32": Phase="Pending", Reason="", readiness=false. Elapsed: 1m48.016029169s + Mar 7 03:14:45.961: INFO: Pod "var-expansion-10ecde5a-34f3-4bb4-87bf-758bb51f5b32": Phase="Pending", Reason="", readiness=false. Elapsed: 1m50.007430753s + Mar 7 03:14:47.961: INFO: Pod "var-expansion-10ecde5a-34f3-4bb4-87bf-758bb51f5b32": Phase="Pending", Reason="", readiness=false. Elapsed: 1m52.007150128s + Mar 7 03:14:49.960: INFO: Pod "var-expansion-10ecde5a-34f3-4bb4-87bf-758bb51f5b32": Phase="Pending", Reason="", readiness=false. Elapsed: 1m54.006372353s + Mar 7 03:14:51.960: INFO: Pod "var-expansion-10ecde5a-34f3-4bb4-87bf-758bb51f5b32": Phase="Pending", Reason="", readiness=false. Elapsed: 1m56.00614224s + Mar 7 03:14:53.960: INFO: Pod "var-expansion-10ecde5a-34f3-4bb4-87bf-758bb51f5b32": Phase="Pending", Reason="", readiness=false. Elapsed: 1m58.006009267s + Mar 7 03:14:55.959: INFO: Pod "var-expansion-10ecde5a-34f3-4bb4-87bf-758bb51f5b32": Phase="Pending", Reason="", readiness=false. Elapsed: 2m0.005208128s + Mar 7 03:14:55.961: INFO: Pod "var-expansion-10ecde5a-34f3-4bb4-87bf-758bb51f5b32": Phase="Pending", Reason="", readiness=false. Elapsed: 2m0.00734769s + STEP: updating the pod 03/07/23 03:14:55.961 + Mar 7 03:14:56.487: INFO: Successfully updated pod "var-expansion-10ecde5a-34f3-4bb4-87bf-758bb51f5b32" + STEP: waiting for pod running 03/07/23 03:14:56.487 + Mar 7 03:14:56.488: INFO: Waiting up to 2m0s for pod "var-expansion-10ecde5a-34f3-4bb4-87bf-758bb51f5b32" in namespace "var-expansion-860" to be "running" + Mar 7 03:14:56.493: INFO: Pod "var-expansion-10ecde5a-34f3-4bb4-87bf-758bb51f5b32": Phase="Pending", Reason="", readiness=false. Elapsed: 5.872572ms + Mar 7 03:14:58.497: INFO: Pod "var-expansion-10ecde5a-34f3-4bb4-87bf-758bb51f5b32": Phase="Running", Reason="", readiness=true. Elapsed: 2.009887296s + Mar 7 03:14:58.497: INFO: Pod "var-expansion-10ecde5a-34f3-4bb4-87bf-758bb51f5b32" satisfied condition "running" + STEP: deleting the pod gracefully 03/07/23 03:14:58.497 + Mar 7 03:14:58.498: INFO: Deleting pod "var-expansion-10ecde5a-34f3-4bb4-87bf-758bb51f5b32" in namespace "var-expansion-860" + Mar 7 03:14:58.540: INFO: Wait up to 5m0s for pod "var-expansion-10ecde5a-34f3-4bb4-87bf-758bb51f5b32" to be fully deleted + [AfterEach] [sig-node] Variable Expansion + test/e2e/framework/framework.go:187 + Mar 7 03:15:30.550: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + STEP: Destroying namespace "var-expansion-860" for this suite. 
03/07/23 03:15:30.553 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-storage] ConfigMap + should be consumable from pods in volume as non-root [NodeConformance] [Conformance] + test/e2e/common/storage/configmap_volume.go:73 +[BeforeEach] [sig-storage] ConfigMap + test/e2e/framework/framework.go:186 +STEP: Creating a kubernetes client 03/07/23 03:15:30.561 +Mar 7 03:15:30.561: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 +STEP: Building a namespace api object, basename configmap 03/07/23 03:15:30.562 +STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 03:15:30.586 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 03:15:30.588 +[It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance] + test/e2e/common/storage/configmap_volume.go:73 +STEP: Creating configMap with name configmap-test-volume-67bb43a0-c325-4622-9933-03559b5b35d4 03/07/23 03:15:30.59 +STEP: Creating a pod to test consume configMaps 03/07/23 03:15:30.593 +Mar 7 03:15:30.601: INFO: Waiting up to 5m0s for pod "pod-configmaps-d3a6f636-45f3-4d85-819b-0630a885906b" in namespace "configmap-1818" to be "Succeeded or Failed" +Mar 7 03:15:30.610: INFO: Pod "pod-configmaps-d3a6f636-45f3-4d85-819b-0630a885906b": Phase="Pending", Reason="", readiness=false. Elapsed: 8.864241ms +Mar 7 03:15:32.614: INFO: Pod "pod-configmaps-d3a6f636-45f3-4d85-819b-0630a885906b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.012780857s +Mar 7 03:15:34.613: INFO: Pod "pod-configmaps-d3a6f636-45f3-4d85-819b-0630a885906b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012006127s +STEP: Saw pod success 03/07/23 03:15:34.613 +Mar 7 03:15:34.614: INFO: Pod "pod-configmaps-d3a6f636-45f3-4d85-819b-0630a885906b" satisfied condition "Succeeded or Failed" +Mar 7 03:15:34.616: INFO: Trying to get logs from node node-2 pod pod-configmaps-d3a6f636-45f3-4d85-819b-0630a885906b container agnhost-container: +STEP: delete the pod 03/07/23 03:15:34.627 +Mar 7 03:15:34.635: INFO: Waiting for pod pod-configmaps-d3a6f636-45f3-4d85-819b-0630a885906b to disappear +Mar 7 03:15:34.637: INFO: Pod pod-configmaps-d3a6f636-45f3-4d85-819b-0630a885906b no longer exists +[AfterEach] [sig-storage] ConfigMap + test/e2e/framework/framework.go:187 +Mar 7 03:15:34.637: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "configmap-1818" for this suite. 
03/07/23 03:15:34.64 +{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]","completed":96,"skipped":1725,"failed":0} +------------------------------ +• [4.083 seconds] +[sig-storage] ConfigMap +test/e2e/common/storage/framework.go:23 + should be consumable from pods in volume as non-root [NodeConformance] [Conformance] + test/e2e/common/storage/configmap_volume.go:73 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-storage] ConfigMap + test/e2e/framework/framework.go:186 + STEP: Creating a kubernetes client 03/07/23 03:15:30.561 + Mar 7 03:15:30.561: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 + STEP: Building a namespace api object, basename configmap 03/07/23 03:15:30.562 + STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 03:15:30.586 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 03:15:30.588 + [It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance] + test/e2e/common/storage/configmap_volume.go:73 + STEP: Creating configMap with name configmap-test-volume-67bb43a0-c325-4622-9933-03559b5b35d4 03/07/23 03:15:30.59 + STEP: Creating a pod to test consume configMaps 03/07/23 03:15:30.593 + Mar 7 03:15:30.601: INFO: Waiting up to 5m0s for pod "pod-configmaps-d3a6f636-45f3-4d85-819b-0630a885906b" in namespace "configmap-1818" to be "Succeeded or Failed" + Mar 7 03:15:30.610: INFO: Pod "pod-configmaps-d3a6f636-45f3-4d85-819b-0630a885906b": Phase="Pending", Reason="", readiness=false. Elapsed: 8.864241ms + Mar 7 03:15:32.614: INFO: Pod "pod-configmaps-d3a6f636-45f3-4d85-819b-0630a885906b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.012780857s + Mar 7 03:15:34.613: INFO: Pod "pod-configmaps-d3a6f636-45f3-4d85-819b-0630a885906b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012006127s + STEP: Saw pod success 03/07/23 03:15:34.613 + Mar 7 03:15:34.614: INFO: Pod "pod-configmaps-d3a6f636-45f3-4d85-819b-0630a885906b" satisfied condition "Succeeded or Failed" + Mar 7 03:15:34.616: INFO: Trying to get logs from node node-2 pod pod-configmaps-d3a6f636-45f3-4d85-819b-0630a885906b container agnhost-container: + STEP: delete the pod 03/07/23 03:15:34.627 + Mar 7 03:15:34.635: INFO: Waiting for pod pod-configmaps-d3a6f636-45f3-4d85-819b-0630a885906b to disappear + Mar 7 03:15:34.637: INFO: Pod pod-configmaps-d3a6f636-45f3-4d85-819b-0630a885906b no longer exists + [AfterEach] [sig-storage] ConfigMap + test/e2e/framework/framework.go:187 + Mar 7 03:15:34.637: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + STEP: Destroying namespace "configmap-1818" for this suite. 
03/07/23 03:15:34.64 + << End Captured GinkgoWriter Output +------------------------------ +S +------------------------------ +[sig-network] DNS + should provide DNS for the cluster [Conformance] + test/e2e/network/dns.go:50 +[BeforeEach] [sig-network] DNS + test/e2e/framework/framework.go:186 +STEP: Creating a kubernetes client 03/07/23 03:15:34.644 +Mar 7 03:15:34.644: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 +STEP: Building a namespace api object, basename dns 03/07/23 03:15:34.645 +STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 03:15:34.657 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 03:15:34.658 +[It] should provide DNS for the cluster [Conformance] + test/e2e/network/dns.go:50 +STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc.cluster.local;sleep 1; done + 03/07/23 03:15:34.66 +STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc.cluster.local;sleep 1; done + 03/07/23 03:15:34.66 +STEP: creating a pod to probe DNS 03/07/23 03:15:34.66 +STEP: submitting the pod to kubernetes 03/07/23 03:15:34.66 +Mar 7 03:15:34.667: INFO: Waiting up to 15m0s for pod "dns-test-5a93f894-6c6f-4921-b87f-747b0eeaf126" in namespace "dns-6375" to be "running" +Mar 7 03:15:34.669: INFO: Pod "dns-test-5a93f894-6c6f-4921-b87f-747b0eeaf126": Phase="Pending", Reason="", readiness=false. Elapsed: 2.432353ms +Mar 7 03:15:36.674: INFO: Pod "dns-test-5a93f894-6c6f-4921-b87f-747b0eeaf126": Phase="Running", Reason="", readiness=true. Elapsed: 2.006696464s +Mar 7 03:15:36.674: INFO: Pod "dns-test-5a93f894-6c6f-4921-b87f-747b0eeaf126" satisfied condition "running" +STEP: retrieving the pod 03/07/23 03:15:36.674 +STEP: looking for the results for each expected name from probers 03/07/23 03:15:36.676 +Mar 7 03:15:36.686: INFO: DNS probes using dns-6375/dns-test-5a93f894-6c6f-4921-b87f-747b0eeaf126 succeeded + +STEP: deleting the pod 03/07/23 03:15:36.686 +[AfterEach] [sig-network] DNS + test/e2e/framework/framework.go:187 +Mar 7 03:15:36.697: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "dns-6375" for this suite. 
03/07/23 03:15:36.707 +{"msg":"PASSED [sig-network] DNS should provide DNS for the cluster [Conformance]","completed":97,"skipped":1726,"failed":0} +------------------------------ +• [2.070 seconds] +[sig-network] DNS +test/e2e/network/common/framework.go:23 + should provide DNS for the cluster [Conformance] + test/e2e/network/dns.go:50 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-network] DNS + test/e2e/framework/framework.go:186 + STEP: Creating a kubernetes client 03/07/23 03:15:34.644 + Mar 7 03:15:34.644: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 + STEP: Building a namespace api object, basename dns 03/07/23 03:15:34.645 + STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 03:15:34.657 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 03:15:34.658 + [It] should provide DNS for the cluster [Conformance] + test/e2e/network/dns.go:50 + STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc.cluster.local;sleep 1; done + 03/07/23 03:15:34.66 + STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc.cluster.local;sleep 1; done + 03/07/23 03:15:34.66 + STEP: creating a pod to probe DNS 03/07/23 03:15:34.66 + STEP: submitting the pod to kubernetes 03/07/23 03:15:34.66 + Mar 7 03:15:34.667: INFO: Waiting up to 15m0s for pod "dns-test-5a93f894-6c6f-4921-b87f-747b0eeaf126" in namespace "dns-6375" to be "running" + Mar 7 03:15:34.669: INFO: Pod "dns-test-5a93f894-6c6f-4921-b87f-747b0eeaf126": Phase="Pending", Reason="", readiness=false. Elapsed: 2.432353ms + Mar 7 03:15:36.674: INFO: Pod "dns-test-5a93f894-6c6f-4921-b87f-747b0eeaf126": Phase="Running", Reason="", readiness=true. Elapsed: 2.006696464s + Mar 7 03:15:36.674: INFO: Pod "dns-test-5a93f894-6c6f-4921-b87f-747b0eeaf126" satisfied condition "running" + STEP: retrieving the pod 03/07/23 03:15:36.674 + STEP: looking for the results for each expected name from probers 03/07/23 03:15:36.676 + Mar 7 03:15:36.686: INFO: DNS probes using dns-6375/dns-test-5a93f894-6c6f-4921-b87f-747b0eeaf126 succeeded + + STEP: deleting the pod 03/07/23 03:15:36.686 + [AfterEach] [sig-network] DNS + test/e2e/framework/framework.go:187 + Mar 7 03:15:36.697: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + STEP: Destroying namespace "dns-6375" for this suite. 
03/07/23 03:15:36.707 + << End Captured GinkgoWriter Output +------------------------------ +SSSSS +------------------------------ +[sig-storage] EmptyDir volumes + should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/common/storage/empty_dir.go:186 +[BeforeEach] [sig-storage] EmptyDir volumes + test/e2e/framework/framework.go:186 +STEP: Creating a kubernetes client 03/07/23 03:15:36.714 +Mar 7 03:15:36.714: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 +STEP: Building a namespace api object, basename emptydir 03/07/23 03:15:36.715 +STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 03:15:36.73 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 03:15:36.732 +[It] should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/common/storage/empty_dir.go:186 +STEP: Creating a pod to test emptydir 0777 on node default medium 03/07/23 03:15:36.733 +Mar 7 03:15:36.739: INFO: Waiting up to 5m0s for pod "pod-b11f7114-5f81-4874-b2c7-b50bce820bcd" in namespace "emptydir-7617" to be "Succeeded or Failed" +Mar 7 03:15:36.741: INFO: Pod "pod-b11f7114-5f81-4874-b2c7-b50bce820bcd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.355329ms +Mar 7 03:15:38.745: INFO: Pod "pod-b11f7114-5f81-4874-b2c7-b50bce820bcd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006334742s +Mar 7 03:15:40.747: INFO: Pod "pod-b11f7114-5f81-4874-b2c7-b50bce820bcd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.008247153s +STEP: Saw pod success 03/07/23 03:15:40.747 +Mar 7 03:15:40.747: INFO: Pod "pod-b11f7114-5f81-4874-b2c7-b50bce820bcd" satisfied condition "Succeeded or Failed" +Mar 7 03:15:40.750: INFO: Trying to get logs from node node-2 pod pod-b11f7114-5f81-4874-b2c7-b50bce820bcd container test-container: +STEP: delete the pod 03/07/23 03:15:40.756 +Mar 7 03:15:40.769: INFO: Waiting for pod pod-b11f7114-5f81-4874-b2c7-b50bce820bcd to disappear +Mar 7 03:15:40.772: INFO: Pod pod-b11f7114-5f81-4874-b2c7-b50bce820bcd no longer exists +[AfterEach] [sig-storage] EmptyDir volumes + test/e2e/framework/framework.go:187 +Mar 7 03:15:40.772: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "emptydir-7617" for this suite. 
03/07/23 03:15:40.776 +{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","completed":98,"skipped":1731,"failed":0} +------------------------------ +• [4.067 seconds] +[sig-storage] EmptyDir volumes +test/e2e/common/storage/framework.go:23 + should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/common/storage/empty_dir.go:186 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-storage] EmptyDir volumes + test/e2e/framework/framework.go:186 + STEP: Creating a kubernetes client 03/07/23 03:15:36.714 + Mar 7 03:15:36.714: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 + STEP: Building a namespace api object, basename emptydir 03/07/23 03:15:36.715 + STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 03:15:36.73 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 03:15:36.732 + [It] should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/common/storage/empty_dir.go:186 + STEP: Creating a pod to test emptydir 0777 on node default medium 03/07/23 03:15:36.733 + Mar 7 03:15:36.739: INFO: Waiting up to 5m0s for pod "pod-b11f7114-5f81-4874-b2c7-b50bce820bcd" in namespace "emptydir-7617" to be "Succeeded or Failed" + Mar 7 03:15:36.741: INFO: Pod "pod-b11f7114-5f81-4874-b2c7-b50bce820bcd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.355329ms + Mar 7 03:15:38.745: INFO: Pod "pod-b11f7114-5f81-4874-b2c7-b50bce820bcd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006334742s + Mar 7 03:15:40.747: INFO: Pod "pod-b11f7114-5f81-4874-b2c7-b50bce820bcd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.008247153s + STEP: Saw pod success 03/07/23 03:15:40.747 + Mar 7 03:15:40.747: INFO: Pod "pod-b11f7114-5f81-4874-b2c7-b50bce820bcd" satisfied condition "Succeeded or Failed" + Mar 7 03:15:40.750: INFO: Trying to get logs from node node-2 pod pod-b11f7114-5f81-4874-b2c7-b50bce820bcd container test-container: + STEP: delete the pod 03/07/23 03:15:40.756 + Mar 7 03:15:40.769: INFO: Waiting for pod pod-b11f7114-5f81-4874-b2c7-b50bce820bcd to disappear + Mar 7 03:15:40.772: INFO: Pod pod-b11f7114-5f81-4874-b2c7-b50bce820bcd no longer exists + [AfterEach] [sig-storage] EmptyDir volumes + test/e2e/framework/framework.go:187 + Mar 7 03:15:40.772: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + STEP: Destroying namespace "emptydir-7617" for this suite. 
03/07/23 03:15:40.776 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSSSSSSSS +------------------------------ +[sig-api-machinery] Garbage collector + should delete pods created by rc when not orphaning [Conformance] + test/e2e/apimachinery/garbage_collector.go:312 +[BeforeEach] [sig-api-machinery] Garbage collector + test/e2e/framework/framework.go:186 +STEP: Creating a kubernetes client 03/07/23 03:15:40.782 +Mar 7 03:15:40.783: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 +STEP: Building a namespace api object, basename gc 03/07/23 03:15:40.783 +STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 03:15:40.794 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 03:15:40.796 +[It] should delete pods created by rc when not orphaning [Conformance] + test/e2e/apimachinery/garbage_collector.go:312 +STEP: create the rc 03/07/23 03:15:40.797 +STEP: delete the rc 03/07/23 03:15:45.805 +STEP: wait for all pods to be garbage collected 03/07/23 03:15:45.831 +STEP: Gathering metrics 03/07/23 03:15:50.836 +Mar 7 03:15:50.852: INFO: Waiting up to 5m0s for pod "kube-controller-manager-node-2" in namespace "kube-system" to be "running and ready" +Mar 7 03:15:50.854: INFO: Pod "kube-controller-manager-node-2": Phase="Running", Reason="", readiness=true. Elapsed: 2.532279ms +Mar 7 03:15:50.854: INFO: The phase of Pod kube-controller-manager-node-2 is Running (Ready = true) +Mar 7 03:15:50.855: INFO: Pod "kube-controller-manager-node-2" satisfied condition "running and ready" +E0307 03:15:52.926496 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:15:52.926496 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:15:53.947323 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:15:53.947323 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 
[::1]:10257: connect: connection refused " +E0307 03:15:54.995570 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:15:54.995570 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:15:57.047676 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:15:57.047676 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:15:58.069220 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:15:58.069220 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:15:59.091816 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: 
connect: connection refused " +E0307 03:15:59.091816 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:16:00.115090 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:16:00.115090 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:16:01.139922 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:16:01.139922 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:16:02.177046 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:16:02.177046 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection 
refused " +E0307 03:16:03.198881 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:16:03.198881 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:16:05.245255 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:16:05.245255 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:16:06.267716 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:16:06.267716 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:16:07.289224 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 
03:16:07.289224 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:16:09.332445 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:16:09.332445 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:16:12.406525 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:16:12.406525 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:16:16.226527 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:16:16.226527 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:16:17.249683 22 
dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:16:17.249683 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:16:19.293911 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:16:19.293911 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:16:20.315597 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:16:20.315597 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:16:22.360385 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:16:22.360385 22 dial.go:124] "an 
error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:16:23.385676 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:16:23.385676 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:16:24.165390 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:16:24.165390 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:16:25.194224 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:16:25.194224 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:16:26.213985 22 dial.go:124] "an error occurred 
connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:16:26.213985 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:16:28.259744 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:16:28.259744 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:16:29.279928 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:16:29.279928 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:16:30.301107 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:16:30.301107 22 dial.go:124] "an error occurred connecting to the 
remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:16:31.321716 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:16:31.321716 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:16:32.341600 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:16:32.341600 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:16:33.362609 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:16:33.362609 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:16:34.384687 22 dial.go:124] "an error occurred connecting to the remote port" 
err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:16:34.384687 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:16:35.164531 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:16:35.164531 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:16:36.186009 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:16:36.186009 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:16:37.210508 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:16:37.210508 22 dial.go:124] "an error occurred connecting to the remote port" err="error 
forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:16:38.233007 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:16:38.233007 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:16:42.326742 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:16:42.326742 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:16:43.354906 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:16:43.354906 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:16:44.402182 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 
10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:16:44.402182 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:16:45.422063 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:16:45.422063 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:16:46.163682 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:16:46.163682 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:16:47.191227 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:16:47.191227 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 
3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:16:48.214324 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:16:48.214324 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:16:50.258295 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:16:50.258295 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:16:51.322699 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:16:51.322699 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:16:53.364283 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 
3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:16:53.364283 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:16:54.389179 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:16:54.389179 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:16:55.408591 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:16:55.408591 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:16:56.432539 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:16:56.432539 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 
3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:16:57.160895 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:16:57.160895 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:16:58.179934 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:16:58.179934 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:17:01.251731 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:17:01.251731 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:17:02.273891 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 
3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:17:02.273891 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:17:03.301889 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:17:03.301889 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:17:05.346266 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:17:05.346266 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:17:07.446951 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:17:07.446951 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 
3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:17:07.464432 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:17:07.464432 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:17:08.499540 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:17:08.499540 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:17:09.532718 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:17:09.532718 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:17:10.556558 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 
3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:17:10.556558 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:17:12.598913 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:17:12.598913 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:17:14.640006 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:17:14.640006 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:17:15.662326 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:17:15.662326 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 
3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:17:16.683756 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:17:16.683756 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +Mar 7 03:17:17.704: INFO: MetricsGrabber failed grab metrics. Skipping metrics gathering. +[AfterEach] [sig-api-machinery] Garbage collector + test/e2e/framework/framework.go:187 +Mar 7 03:17:17.704: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "gc-1617" for this suite. 03/07/23 03:17:17.708 +{"msg":"PASSED [sig-api-machinery] Garbage collector should delete pods created by rc when not orphaning [Conformance]","completed":99,"skipped":1748,"failed":0} +------------------------------ +• [SLOW TEST] [96.930 seconds] +[sig-api-machinery] Garbage collector +test/e2e/apimachinery/framework.go:23 + should delete pods created by rc when not orphaning [Conformance] + test/e2e/apimachinery/garbage_collector.go:312 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-api-machinery] Garbage collector + test/e2e/framework/framework.go:186 + STEP: Creating a kubernetes client 03/07/23 03:15:40.782 + Mar 7 03:15:40.783: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 + STEP: Building a namespace api object, basename gc 03/07/23 03:15:40.783 + STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 03:15:40.794 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 03:15:40.796 + [It] should delete pods created by rc when not orphaning [Conformance] + test/e2e/apimachinery/garbage_collector.go:312 + STEP: create the rc 03/07/23 03:15:40.797 + STEP: delete the rc 03/07/23 03:15:45.805 + STEP: wait for all pods to be garbage collected 03/07/23 03:15:45.831 + STEP: Gathering metrics 03/07/23 03:15:50.836 + Mar 7 03:15:50.852: INFO: Waiting up to 5m0s for pod "kube-controller-manager-node-2" in namespace "kube-system" to be "running and ready" + Mar 7 03:15:50.854: INFO: Pod "kube-controller-manager-node-2": Phase="Running", Reason="", readiness=true. 
Elapsed: 2.532279ms + Mar 7 03:15:50.854: INFO: The phase of Pod kube-controller-manager-node-2 is Running (Ready = true) + Mar 7 03:15:50.855: INFO: Pod "kube-controller-manager-node-2" satisfied condition "running and ready" + E0307 03:15:52.926496 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " + E0307 03:15:53.947323 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " + E0307 03:15:54.995570 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " + E0307 03:15:57.047676 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " + E0307 03:15:58.069220 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " + E0307 03:15:59.091816 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " + E0307 03:16:00.115090 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to 
localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " + E0307 03:16:01.139922 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " + E0307 03:16:02.177046 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " + E0307 03:16:03.198881 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " + E0307 03:16:05.245255 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " + E0307 03:16:06.267716 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " + E0307 03:16:07.289224 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " + E0307 03:16:09.332445 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to 
localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " + E0307 03:16:12.406525 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " + E0307 03:16:16.226527 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " + E0307 03:16:17.249683 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " + E0307 03:16:19.293911 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " + E0307 03:16:20.315597 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " + E0307 03:16:22.360385 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " + E0307 03:16:23.385676 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to 
localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " + E0307 03:16:24.165390 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " + E0307 03:16:25.194224 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " + E0307 03:16:26.213985 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " + E0307 03:16:28.259744 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " + E0307 03:16:29.279928 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " + E0307 03:16:30.301107 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " + E0307 03:16:31.321716 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to 
localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " + E0307 03:16:32.341600 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " + E0307 03:16:33.362609 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " + E0307 03:16:34.384687 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " + E0307 03:16:35.164531 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " + E0307 03:16:36.186009 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " + E0307 03:16:37.210508 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " + E0307 03:16:38.233007 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to 
localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " + E0307 03:16:42.326742 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " + E0307 03:16:43.354906 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " + E0307 03:16:44.402182 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " + E0307 03:16:45.422063 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " + E0307 03:16:46.163682 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " + E0307 03:16:47.191227 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " + E0307 03:16:48.214324 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to 
localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " + E0307 03:16:50.258295 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " + E0307 03:16:51.322699 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " + E0307 03:16:53.364283 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " + E0307 03:16:54.389179 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " + E0307 03:16:55.408591 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " + E0307 03:16:56.432539 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " + E0307 03:16:57.160895 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to 
localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " + E0307 03:16:58.179934 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " + E0307 03:17:01.251731 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " + E0307 03:17:02.273891 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " + E0307 03:17:03.301889 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " + E0307 03:17:05.346266 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " + E0307 03:17:07.446951 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " + E0307 03:17:07.464432 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to 
localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " + E0307 03:17:08.499540 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " + E0307 03:17:09.532718 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " + E0307 03:17:10.556558 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " + E0307 03:17:12.598913 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " + E0307 03:17:14.640006 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " + E0307 03:17:15.662326 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " + E0307 03:17:16.683756 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to 
localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " + Mar 7 03:17:17.704: INFO: MetricsGrabber failed grab metrics. Skipping metrics gathering. + [AfterEach] [sig-api-machinery] Garbage collector + test/e2e/framework/framework.go:187 + Mar 7 03:17:17.704: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + STEP: Destroying namespace "gc-1617" for this suite. 03/07/23 03:17:17.708 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-storage] Projected secret + should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/common/storage/projected_secret.go:66 +[BeforeEach] [sig-storage] Projected secret + test/e2e/framework/framework.go:186 +STEP: Creating a kubernetes client 03/07/23 03:17:17.713 +Mar 7 03:17:17.714: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 +STEP: Building a namespace api object, basename projected 03/07/23 03:17:17.715 +STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 03:17:17.727 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 03:17:17.729 +[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/common/storage/projected_secret.go:66 +STEP: Creating projection with secret that has name projected-secret-test-bd5d75ae-3dd1-4772-9680-a0d2e6b91e63 03/07/23 03:17:17.731 +STEP: Creating a pod to test consume secrets 03/07/23 03:17:17.735 +Mar 7 03:17:17.746: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-cb0dd1d9-3aa1-49fd-a59c-7daa8433afee" in namespace "projected-7880" to be "Succeeded or Failed" +Mar 7 03:17:17.755: INFO: Pod "pod-projected-secrets-cb0dd1d9-3aa1-49fd-a59c-7daa8433afee": Phase="Pending", Reason="", readiness=false. Elapsed: 9.076568ms +Mar 7 03:17:19.758: INFO: Pod "pod-projected-secrets-cb0dd1d9-3aa1-49fd-a59c-7daa8433afee": Phase="Pending", Reason="", readiness=false. Elapsed: 2.012698684s +Mar 7 03:17:21.758: INFO: Pod "pod-projected-secrets-cb0dd1d9-3aa1-49fd-a59c-7daa8433afee": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012191177s +STEP: Saw pod success 03/07/23 03:17:21.758 +Mar 7 03:17:21.758: INFO: Pod "pod-projected-secrets-cb0dd1d9-3aa1-49fd-a59c-7daa8433afee" satisfied condition "Succeeded or Failed" +Mar 7 03:17:21.760: INFO: Trying to get logs from node node-2 pod pod-projected-secrets-cb0dd1d9-3aa1-49fd-a59c-7daa8433afee container projected-secret-volume-test: +STEP: delete the pod 03/07/23 03:17:21.771 +Mar 7 03:17:21.789: INFO: Waiting for pod pod-projected-secrets-cb0dd1d9-3aa1-49fd-a59c-7daa8433afee to disappear +Mar 7 03:17:21.792: INFO: Pod pod-projected-secrets-cb0dd1d9-3aa1-49fd-a59c-7daa8433afee no longer exists +[AfterEach] [sig-storage] Projected secret + test/e2e/framework/framework.go:187 +Mar 7 03:17:21.792: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "projected-7880" for this suite. 
03/07/23 03:17:21.795 +{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","completed":100,"skipped":1774,"failed":0} +------------------------------ +• [4.087 seconds] +[sig-storage] Projected secret +test/e2e/common/storage/framework.go:23 + should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/common/storage/projected_secret.go:66 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-storage] Projected secret + test/e2e/framework/framework.go:186 + STEP: Creating a kubernetes client 03/07/23 03:17:17.713 + Mar 7 03:17:17.714: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 + STEP: Building a namespace api object, basename projected 03/07/23 03:17:17.715 + STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 03:17:17.727 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 03:17:17.729 + [It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/common/storage/projected_secret.go:66 + STEP: Creating projection with secret that has name projected-secret-test-bd5d75ae-3dd1-4772-9680-a0d2e6b91e63 03/07/23 03:17:17.731 + STEP: Creating a pod to test consume secrets 03/07/23 03:17:17.735 + Mar 7 03:17:17.746: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-cb0dd1d9-3aa1-49fd-a59c-7daa8433afee" in namespace "projected-7880" to be "Succeeded or Failed" + Mar 7 03:17:17.755: INFO: Pod "pod-projected-secrets-cb0dd1d9-3aa1-49fd-a59c-7daa8433afee": Phase="Pending", Reason="", readiness=false. Elapsed: 9.076568ms + Mar 7 03:17:19.758: INFO: Pod "pod-projected-secrets-cb0dd1d9-3aa1-49fd-a59c-7daa8433afee": Phase="Pending", Reason="", readiness=false. Elapsed: 2.012698684s + Mar 7 03:17:21.758: INFO: Pod "pod-projected-secrets-cb0dd1d9-3aa1-49fd-a59c-7daa8433afee": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012191177s + STEP: Saw pod success 03/07/23 03:17:21.758 + Mar 7 03:17:21.758: INFO: Pod "pod-projected-secrets-cb0dd1d9-3aa1-49fd-a59c-7daa8433afee" satisfied condition "Succeeded or Failed" + Mar 7 03:17:21.760: INFO: Trying to get logs from node node-2 pod pod-projected-secrets-cb0dd1d9-3aa1-49fd-a59c-7daa8433afee container projected-secret-volume-test: + STEP: delete the pod 03/07/23 03:17:21.771 + Mar 7 03:17:21.789: INFO: Waiting for pod pod-projected-secrets-cb0dd1d9-3aa1-49fd-a59c-7daa8433afee to disappear + Mar 7 03:17:21.792: INFO: Pod pod-projected-secrets-cb0dd1d9-3aa1-49fd-a59c-7daa8433afee no longer exists + [AfterEach] [sig-storage] Projected secret + test/e2e/framework/framework.go:187 + Mar 7 03:17:21.792: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + STEP: Destroying namespace "projected-7880" for this suite. 
03/07/23 03:17:21.795 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSS +------------------------------ +[sig-network] Services + should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance] + test/e2e/network/service.go:2157 +[BeforeEach] [sig-network] Services + test/e2e/framework/framework.go:186 +STEP: Creating a kubernetes client 03/07/23 03:17:21.801 +Mar 7 03:17:21.801: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 +STEP: Building a namespace api object, basename services 03/07/23 03:17:21.802 +STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 03:17:21.814 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 03:17:21.816 +[BeforeEach] [sig-network] Services + test/e2e/network/service.go:758 +[It] should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance] + test/e2e/network/service.go:2157 +STEP: creating service in namespace services-2979 03/07/23 03:17:21.818 +STEP: creating service affinity-clusterip in namespace services-2979 03/07/23 03:17:21.818 +STEP: creating replication controller affinity-clusterip in namespace services-2979 03/07/23 03:17:21.831 +I0307 03:17:21.843631 22 runners.go:193] Created replication controller with name: affinity-clusterip, namespace: services-2979, replica count: 3 +I0307 03:17:24.894439 22 runners.go:193] affinity-clusterip Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady +Mar 7 03:17:24.899: INFO: Creating new exec pod +Mar 7 03:17:24.915: INFO: Waiting up to 5m0s for pod "execpod-affinityghd5b" in namespace "services-2979" to be "running" +Mar 7 03:17:24.918: INFO: Pod "execpod-affinityghd5b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.518071ms +Mar 7 03:17:26.920: INFO: Pod "execpod-affinityghd5b": Phase="Running", Reason="", readiness=true. 
Elapsed: 2.005185858s +Mar 7 03:17:26.920: INFO: Pod "execpod-affinityghd5b" satisfied condition "running" +Mar 7 03:17:27.921: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1093879902 --namespace=services-2979 exec execpod-affinityghd5b -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80' +Mar 7 03:17:28.114: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 affinity-clusterip 80\nConnection to affinity-clusterip 80 port [tcp/http] succeeded!\n" +Mar 7 03:17:28.114: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request" +Mar 7 03:17:28.114: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1093879902 --namespace=services-2979 exec execpod-affinityghd5b -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.106.94.230 80' +Mar 7 03:17:28.287: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 10.106.94.230 80\nConnection to 10.106.94.230 80 port [tcp/http] succeeded!\n" +Mar 7 03:17:28.287: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request" +Mar 7 03:17:28.287: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1093879902 --namespace=services-2979 exec execpod-affinityghd5b -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://10.106.94.230:80/ ; done' +Mar 7 03:17:28.532: INFO: stderr: "+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.106.94.230:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.106.94.230:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.106.94.230:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.106.94.230:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.106.94.230:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.106.94.230:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.106.94.230:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.106.94.230:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.106.94.230:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.106.94.230:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.106.94.230:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.106.94.230:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.106.94.230:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.106.94.230:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.106.94.230:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.106.94.230:80/\n" +Mar 7 03:17:28.532: INFO: stdout: "\naffinity-clusterip-qrr5v\naffinity-clusterip-qrr5v\naffinity-clusterip-qrr5v\naffinity-clusterip-qrr5v\naffinity-clusterip-qrr5v\naffinity-clusterip-qrr5v\naffinity-clusterip-qrr5v\naffinity-clusterip-qrr5v\naffinity-clusterip-qrr5v\naffinity-clusterip-qrr5v\naffinity-clusterip-qrr5v\naffinity-clusterip-qrr5v\naffinity-clusterip-qrr5v\naffinity-clusterip-qrr5v\naffinity-clusterip-qrr5v\naffinity-clusterip-qrr5v" +Mar 7 03:17:28.532: INFO: Received response from host: affinity-clusterip-qrr5v +Mar 7 03:17:28.532: INFO: Received response from host: affinity-clusterip-qrr5v +Mar 7 03:17:28.532: INFO: Received response from host: affinity-clusterip-qrr5v +Mar 7 03:17:28.532: INFO: Received response from host: affinity-clusterip-qrr5v +Mar 7 03:17:28.532: INFO: Received response from host: affinity-clusterip-qrr5v +Mar 7 03:17:28.532: INFO: Received response from host: affinity-clusterip-qrr5v +Mar 7 03:17:28.532: INFO: Received response from host: 
affinity-clusterip-qrr5v +Mar 7 03:17:28.532: INFO: Received response from host: affinity-clusterip-qrr5v +Mar 7 03:17:28.532: INFO: Received response from host: affinity-clusterip-qrr5v +Mar 7 03:17:28.532: INFO: Received response from host: affinity-clusterip-qrr5v +Mar 7 03:17:28.532: INFO: Received response from host: affinity-clusterip-qrr5v +Mar 7 03:17:28.532: INFO: Received response from host: affinity-clusterip-qrr5v +Mar 7 03:17:28.532: INFO: Received response from host: affinity-clusterip-qrr5v +Mar 7 03:17:28.532: INFO: Received response from host: affinity-clusterip-qrr5v +Mar 7 03:17:28.532: INFO: Received response from host: affinity-clusterip-qrr5v +Mar 7 03:17:28.532: INFO: Received response from host: affinity-clusterip-qrr5v +Mar 7 03:17:28.532: INFO: Cleaning up the exec pod +STEP: deleting ReplicationController affinity-clusterip in namespace services-2979, will wait for the garbage collector to delete the pods 03/07/23 03:17:28.542 +Mar 7 03:17:28.602: INFO: Deleting ReplicationController affinity-clusterip took: 6.934741ms +Mar 7 03:17:28.703: INFO: Terminating ReplicationController affinity-clusterip pods took: 100.783027ms +[AfterEach] [sig-network] Services + test/e2e/framework/framework.go:187 +Mar 7 03:17:31.021: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "services-2979" for this suite. 03/07/23 03:17:31.025 +[AfterEach] [sig-network] Services + test/e2e/network/service.go:762 +{"msg":"PASSED [sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","completed":101,"skipped":1781,"failed":0} +------------------------------ +• [SLOW TEST] [9.230 seconds] +[sig-network] Services +test/e2e/network/common/framework.go:23 + should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance] + test/e2e/network/service.go:2157 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-network] Services + test/e2e/framework/framework.go:186 + STEP: Creating a kubernetes client 03/07/23 03:17:21.801 + Mar 7 03:17:21.801: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 + STEP: Building a namespace api object, basename services 03/07/23 03:17:21.802 + STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 03:17:21.814 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 03:17:21.816 + [BeforeEach] [sig-network] Services + test/e2e/network/service.go:758 + [It] should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance] + test/e2e/network/service.go:2157 + STEP: creating service in namespace services-2979 03/07/23 03:17:21.818 + STEP: creating service affinity-clusterip in namespace services-2979 03/07/23 03:17:21.818 + STEP: creating replication controller affinity-clusterip in namespace services-2979 03/07/23 03:17:21.831 + I0307 03:17:21.843631 22 runners.go:193] Created replication controller with name: affinity-clusterip, namespace: services-2979, replica count: 3 + I0307 03:17:24.894439 22 runners.go:193] affinity-clusterip Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady + Mar 7 03:17:24.899: INFO: Creating new exec pod + Mar 7 03:17:24.915: INFO: Waiting up to 5m0s for pod "execpod-affinityghd5b" in namespace "services-2979" to be "running" + Mar 7 03:17:24.918: INFO: Pod "execpod-affinityghd5b": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.518071ms + Mar 7 03:17:26.920: INFO: Pod "execpod-affinityghd5b": Phase="Running", Reason="", readiness=true. Elapsed: 2.005185858s + Mar 7 03:17:26.920: INFO: Pod "execpod-affinityghd5b" satisfied condition "running" + Mar 7 03:17:27.921: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1093879902 --namespace=services-2979 exec execpod-affinityghd5b -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80' + Mar 7 03:17:28.114: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 affinity-clusterip 80\nConnection to affinity-clusterip 80 port [tcp/http] succeeded!\n" + Mar 7 03:17:28.114: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request" + Mar 7 03:17:28.114: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1093879902 --namespace=services-2979 exec execpod-affinityghd5b -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.106.94.230 80' + Mar 7 03:17:28.287: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 10.106.94.230 80\nConnection to 10.106.94.230 80 port [tcp/http] succeeded!\n" + Mar 7 03:17:28.287: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request" + Mar 7 03:17:28.287: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1093879902 --namespace=services-2979 exec execpod-affinityghd5b -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://10.106.94.230:80/ ; done' + Mar 7 03:17:28.532: INFO: stderr: "+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.106.94.230:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.106.94.230:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.106.94.230:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.106.94.230:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.106.94.230:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.106.94.230:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.106.94.230:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.106.94.230:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.106.94.230:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.106.94.230:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.106.94.230:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.106.94.230:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.106.94.230:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.106.94.230:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.106.94.230:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.106.94.230:80/\n" + Mar 7 03:17:28.532: INFO: stdout: "\naffinity-clusterip-qrr5v\naffinity-clusterip-qrr5v\naffinity-clusterip-qrr5v\naffinity-clusterip-qrr5v\naffinity-clusterip-qrr5v\naffinity-clusterip-qrr5v\naffinity-clusterip-qrr5v\naffinity-clusterip-qrr5v\naffinity-clusterip-qrr5v\naffinity-clusterip-qrr5v\naffinity-clusterip-qrr5v\naffinity-clusterip-qrr5v\naffinity-clusterip-qrr5v\naffinity-clusterip-qrr5v\naffinity-clusterip-qrr5v\naffinity-clusterip-qrr5v" + Mar 7 03:17:28.532: INFO: Received response from host: affinity-clusterip-qrr5v + Mar 7 03:17:28.532: INFO: Received response from host: affinity-clusterip-qrr5v + Mar 7 03:17:28.532: INFO: Received response from host: affinity-clusterip-qrr5v + Mar 7 03:17:28.532: INFO: Received response from host: affinity-clusterip-qrr5v + Mar 7 03:17:28.532: INFO: Received response from host: affinity-clusterip-qrr5v + Mar 7 03:17:28.532: 
INFO: Received response from host: affinity-clusterip-qrr5v + Mar 7 03:17:28.532: INFO: Received response from host: affinity-clusterip-qrr5v + Mar 7 03:17:28.532: INFO: Received response from host: affinity-clusterip-qrr5v + Mar 7 03:17:28.532: INFO: Received response from host: affinity-clusterip-qrr5v + Mar 7 03:17:28.532: INFO: Received response from host: affinity-clusterip-qrr5v + Mar 7 03:17:28.532: INFO: Received response from host: affinity-clusterip-qrr5v + Mar 7 03:17:28.532: INFO: Received response from host: affinity-clusterip-qrr5v + Mar 7 03:17:28.532: INFO: Received response from host: affinity-clusterip-qrr5v + Mar 7 03:17:28.532: INFO: Received response from host: affinity-clusterip-qrr5v + Mar 7 03:17:28.532: INFO: Received response from host: affinity-clusterip-qrr5v + Mar 7 03:17:28.532: INFO: Received response from host: affinity-clusterip-qrr5v + Mar 7 03:17:28.532: INFO: Cleaning up the exec pod + STEP: deleting ReplicationController affinity-clusterip in namespace services-2979, will wait for the garbage collector to delete the pods 03/07/23 03:17:28.542 + Mar 7 03:17:28.602: INFO: Deleting ReplicationController affinity-clusterip took: 6.934741ms + Mar 7 03:17:28.703: INFO: Terminating ReplicationController affinity-clusterip pods took: 100.783027ms + [AfterEach] [sig-network] Services + test/e2e/framework/framework.go:187 + Mar 7 03:17:31.021: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + STEP: Destroying namespace "services-2979" for this suite. 03/07/23 03:17:31.025 + [AfterEach] [sig-network] Services + test/e2e/network/service.go:762 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSSSS +------------------------------ +[sig-node] Pods + should be updated [NodeConformance] [Conformance] + test/e2e/common/node/pods.go:343 +[BeforeEach] [sig-node] Pods + test/e2e/framework/framework.go:186 +STEP: Creating a kubernetes client 03/07/23 03:17:31.032 +Mar 7 03:17:31.032: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 +STEP: Building a namespace api object, basename pods 03/07/23 03:17:31.032 +STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 03:17:31.044 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 03:17:31.046 +[BeforeEach] [sig-node] Pods + test/e2e/common/node/pods.go:193 +[It] should be updated [NodeConformance] [Conformance] + test/e2e/common/node/pods.go:343 +STEP: creating the pod 03/07/23 03:17:31.048 +STEP: submitting the pod to kubernetes 03/07/23 03:17:31.048 +Mar 7 03:17:31.055: INFO: Waiting up to 5m0s for pod "pod-update-5677f531-d093-462d-9c2c-7c1e7b86be08" in namespace "pods-3164" to be "running and ready" +Mar 7 03:17:31.057: INFO: Pod "pod-update-5677f531-d093-462d-9c2c-7c1e7b86be08": Phase="Pending", Reason="", readiness=false. Elapsed: 2.068259ms +Mar 7 03:17:31.057: INFO: The phase of Pod pod-update-5677f531-d093-462d-9c2c-7c1e7b86be08 is Pending, waiting for it to be Running (with Ready = true) +Mar 7 03:17:33.061: INFO: Pod "pod-update-5677f531-d093-462d-9c2c-7c1e7b86be08": Phase="Running", Reason="", readiness=true. 
Elapsed: 2.006002337s +Mar 7 03:17:33.061: INFO: The phase of Pod pod-update-5677f531-d093-462d-9c2c-7c1e7b86be08 is Running (Ready = true) +Mar 7 03:17:33.061: INFO: Pod "pod-update-5677f531-d093-462d-9c2c-7c1e7b86be08" satisfied condition "running and ready" +STEP: verifying the pod is in kubernetes 03/07/23 03:17:33.064 +STEP: updating the pod 03/07/23 03:17:33.066 +Mar 7 03:17:33.609: INFO: Successfully updated pod "pod-update-5677f531-d093-462d-9c2c-7c1e7b86be08" +Mar 7 03:17:33.610: INFO: Waiting up to 5m0s for pod "pod-update-5677f531-d093-462d-9c2c-7c1e7b86be08" in namespace "pods-3164" to be "running" +Mar 7 03:17:33.618: INFO: Pod "pod-update-5677f531-d093-462d-9c2c-7c1e7b86be08": Phase="Running", Reason="", readiness=true. Elapsed: 8.637264ms +Mar 7 03:17:33.618: INFO: Pod "pod-update-5677f531-d093-462d-9c2c-7c1e7b86be08" satisfied condition "running" +STEP: verifying the updated pod is in kubernetes 03/07/23 03:17:33.618 +Mar 7 03:17:33.624: INFO: Pod update OK +[AfterEach] [sig-node] Pods + test/e2e/framework/framework.go:187 +Mar 7 03:17:33.624: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "pods-3164" for this suite. 03/07/23 03:17:33.627 +{"msg":"PASSED [sig-node] Pods should be updated [NodeConformance] [Conformance]","completed":102,"skipped":1794,"failed":0} +------------------------------ +• [2.602 seconds] +[sig-node] Pods +test/e2e/common/node/framework.go:23 + should be updated [NodeConformance] [Conformance] + test/e2e/common/node/pods.go:343 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-node] Pods + test/e2e/framework/framework.go:186 + STEP: Creating a kubernetes client 03/07/23 03:17:31.032 + Mar 7 03:17:31.032: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 + STEP: Building a namespace api object, basename pods 03/07/23 03:17:31.032 + STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 03:17:31.044 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 03:17:31.046 + [BeforeEach] [sig-node] Pods + test/e2e/common/node/pods.go:193 + [It] should be updated [NodeConformance] [Conformance] + test/e2e/common/node/pods.go:343 + STEP: creating the pod 03/07/23 03:17:31.048 + STEP: submitting the pod to kubernetes 03/07/23 03:17:31.048 + Mar 7 03:17:31.055: INFO: Waiting up to 5m0s for pod "pod-update-5677f531-d093-462d-9c2c-7c1e7b86be08" in namespace "pods-3164" to be "running and ready" + Mar 7 03:17:31.057: INFO: Pod "pod-update-5677f531-d093-462d-9c2c-7c1e7b86be08": Phase="Pending", Reason="", readiness=false. Elapsed: 2.068259ms + Mar 7 03:17:31.057: INFO: The phase of Pod pod-update-5677f531-d093-462d-9c2c-7c1e7b86be08 is Pending, waiting for it to be Running (with Ready = true) + Mar 7 03:17:33.061: INFO: Pod "pod-update-5677f531-d093-462d-9c2c-7c1e7b86be08": Phase="Running", Reason="", readiness=true. 
Elapsed: 2.006002337s + Mar 7 03:17:33.061: INFO: The phase of Pod pod-update-5677f531-d093-462d-9c2c-7c1e7b86be08 is Running (Ready = true) + Mar 7 03:17:33.061: INFO: Pod "pod-update-5677f531-d093-462d-9c2c-7c1e7b86be08" satisfied condition "running and ready" + STEP: verifying the pod is in kubernetes 03/07/23 03:17:33.064 + STEP: updating the pod 03/07/23 03:17:33.066 + Mar 7 03:17:33.609: INFO: Successfully updated pod "pod-update-5677f531-d093-462d-9c2c-7c1e7b86be08" + Mar 7 03:17:33.610: INFO: Waiting up to 5m0s for pod "pod-update-5677f531-d093-462d-9c2c-7c1e7b86be08" in namespace "pods-3164" to be "running" + Mar 7 03:17:33.618: INFO: Pod "pod-update-5677f531-d093-462d-9c2c-7c1e7b86be08": Phase="Running", Reason="", readiness=true. Elapsed: 8.637264ms + Mar 7 03:17:33.618: INFO: Pod "pod-update-5677f531-d093-462d-9c2c-7c1e7b86be08" satisfied condition "running" + STEP: verifying the updated pod is in kubernetes 03/07/23 03:17:33.618 + Mar 7 03:17:33.624: INFO: Pod update OK + [AfterEach] [sig-node] Pods + test/e2e/framework/framework.go:187 + Mar 7 03:17:33.624: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + STEP: Destroying namespace "pods-3164" for this suite. 03/07/23 03:17:33.627 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSS +------------------------------ +[sig-apps] DisruptionController Listing PodDisruptionBudgets for all namespaces + should list and delete a collection of PodDisruptionBudgets [Conformance] + test/e2e/apps/disruption.go:86 +[BeforeEach] [sig-apps] DisruptionController + test/e2e/framework/framework.go:186 +STEP: Creating a kubernetes client 03/07/23 03:17:33.634 +Mar 7 03:17:33.634: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 +STEP: Building a namespace api object, basename disruption 03/07/23 03:17:33.635 +STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 03:17:33.654 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 03:17:33.656 +[BeforeEach] [sig-apps] DisruptionController + test/e2e/apps/disruption.go:71 +[BeforeEach] Listing PodDisruptionBudgets for all namespaces + test/e2e/framework/framework.go:186 +STEP: Creating a kubernetes client 03/07/23 03:17:33.658 +Mar 7 03:17:33.658: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 +STEP: Building a namespace api object, basename disruption-2 03/07/23 03:17:33.659 +STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 03:17:33.673 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 03:17:33.675 +[It] should list and delete a collection of PodDisruptionBudgets [Conformance] + test/e2e/apps/disruption.go:86 +STEP: Waiting for the pdb to be processed 03/07/23 03:17:33.681 +STEP: Waiting for the pdb to be processed 03/07/23 03:17:35.69 +STEP: Waiting for the pdb to be processed 03/07/23 03:17:37.7 +STEP: listing a collection of PDBs across all namespaces 03/07/23 03:17:39.728 +STEP: listing a collection of PDBs in namespace disruption-6358 03/07/23 03:17:39.731 +STEP: deleting a collection of PDBs 03/07/23 03:17:39.734 +STEP: Waiting for the PDB collection to be deleted 03/07/23 03:17:39.743 +[AfterEach] Listing PodDisruptionBudgets for all namespaces + test/e2e/framework/framework.go:187 +Mar 7 03:17:39.747: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "disruption-2-7461" for this suite. 
03/07/23 03:17:39.75 +[AfterEach] [sig-apps] DisruptionController + test/e2e/framework/framework.go:187 +Mar 7 03:17:39.756: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "disruption-6358" for this suite. 03/07/23 03:17:39.759 +{"msg":"PASSED [sig-apps] DisruptionController Listing PodDisruptionBudgets for all namespaces should list and delete a collection of PodDisruptionBudgets [Conformance]","completed":103,"skipped":1805,"failed":0} +------------------------------ +• [SLOW TEST] [6.130 seconds] +[sig-apps] DisruptionController +test/e2e/apps/framework.go:23 + Listing PodDisruptionBudgets for all namespaces + test/e2e/apps/disruption.go:77 + should list and delete a collection of PodDisruptionBudgets [Conformance] + test/e2e/apps/disruption.go:86 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-apps] DisruptionController + test/e2e/framework/framework.go:186 + STEP: Creating a kubernetes client 03/07/23 03:17:33.634 + Mar 7 03:17:33.634: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 + STEP: Building a namespace api object, basename disruption 03/07/23 03:17:33.635 + STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 03:17:33.654 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 03:17:33.656 + [BeforeEach] [sig-apps] DisruptionController + test/e2e/apps/disruption.go:71 + [BeforeEach] Listing PodDisruptionBudgets for all namespaces + test/e2e/framework/framework.go:186 + STEP: Creating a kubernetes client 03/07/23 03:17:33.658 + Mar 7 03:17:33.658: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 + STEP: Building a namespace api object, basename disruption-2 03/07/23 03:17:33.659 + STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 03:17:33.673 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 03:17:33.675 + [It] should list and delete a collection of PodDisruptionBudgets [Conformance] + test/e2e/apps/disruption.go:86 + STEP: Waiting for the pdb to be processed 03/07/23 03:17:33.681 + STEP: Waiting for the pdb to be processed 03/07/23 03:17:35.69 + STEP: Waiting for the pdb to be processed 03/07/23 03:17:37.7 + STEP: listing a collection of PDBs across all namespaces 03/07/23 03:17:39.728 + STEP: listing a collection of PDBs in namespace disruption-6358 03/07/23 03:17:39.731 + STEP: deleting a collection of PDBs 03/07/23 03:17:39.734 + STEP: Waiting for the PDB collection to be deleted 03/07/23 03:17:39.743 + [AfterEach] Listing PodDisruptionBudgets for all namespaces + test/e2e/framework/framework.go:187 + Mar 7 03:17:39.747: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + STEP: Destroying namespace "disruption-2-7461" for this suite. 03/07/23 03:17:39.75 + [AfterEach] [sig-apps] DisruptionController + test/e2e/framework/framework.go:187 + Mar 7 03:17:39.756: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + STEP: Destroying namespace "disruption-6358" for this suite. 
03/07/23 03:17:39.759 + << End Captured GinkgoWriter Output +------------------------------ +S +------------------------------ +[sig-scheduling] SchedulerPreemption [Serial] PreemptionExecutionPath + runs ReplicaSets to verify preemption running path [Conformance] + test/e2e/scheduling/preemption.go:543 +[BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] + test/e2e/framework/framework.go:186 +STEP: Creating a kubernetes client 03/07/23 03:17:39.764 +Mar 7 03:17:39.765: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 +STEP: Building a namespace api object, basename sched-preemption 03/07/23 03:17:39.765 +STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 03:17:39.778 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 03:17:39.78 +[BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] + test/e2e/scheduling/preemption.go:92 +Mar 7 03:17:39.791: INFO: Waiting up to 1m0s for all nodes to be ready +Mar 7 03:18:39.871: INFO: Waiting for terminating namespaces to be deleted... +[BeforeEach] PreemptionExecutionPath + test/e2e/framework/framework.go:186 +STEP: Creating a kubernetes client 03/07/23 03:18:39.873 +Mar 7 03:18:39.874: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 +STEP: Building a namespace api object, basename sched-preemption-path 03/07/23 03:18:39.874 +STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 03:18:39.921 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 03:18:39.923 +[BeforeEach] PreemptionExecutionPath + test/e2e/scheduling/preemption.go:496 +STEP: Finding an available node 03/07/23 03:18:39.925 +STEP: Trying to launch a pod without a label to get a node which can launch it. 03/07/23 03:18:39.925 +Mar 7 03:18:39.931: INFO: Waiting up to 1m0s for pod "without-label" in namespace "sched-preemption-path-2463" to be "running" +Mar 7 03:18:39.933: INFO: Pod "without-label": Phase="Pending", Reason="", readiness=false. Elapsed: 1.815598ms +Mar 7 03:18:41.936: INFO: Pod "without-label": Phase="Running", Reason="", readiness=true. Elapsed: 2.005004246s +Mar 7 03:18:41.936: INFO: Pod "without-label" satisfied condition "running" +STEP: Explicitly delete pod here to free the resource it takes. 03/07/23 03:18:41.938 +Mar 7 03:18:41.945: INFO: found a healthy node: node-2 +[It] runs ReplicaSets to verify preemption running path [Conformance] + test/e2e/scheduling/preemption.go:543 +Mar 7 03:18:54.005: INFO: pods created so far: [1 1 1] +Mar 7 03:18:54.005: INFO: length of pods created so far: 3 +Mar 7 03:18:58.014: INFO: pods created so far: [2 2 1] +[AfterEach] PreemptionExecutionPath + test/e2e/framework/framework.go:187 +Mar 7 03:19:05.015: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "sched-preemption-path-2463" for this suite. 03/07/23 03:19:05.019 +[AfterEach] PreemptionExecutionPath + test/e2e/scheduling/preemption.go:470 +[AfterEach] [sig-scheduling] SchedulerPreemption [Serial] + test/e2e/framework/framework.go:187 +Mar 7 03:19:05.070: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "sched-preemption-3905" for this suite. 
03/07/23 03:19:05.074 +[AfterEach] [sig-scheduling] SchedulerPreemption [Serial] + test/e2e/scheduling/preemption.go:80 +{"msg":"PASSED [sig-scheduling] SchedulerPreemption [Serial] PreemptionExecutionPath runs ReplicaSets to verify preemption running path [Conformance]","completed":104,"skipped":1806,"failed":0} +------------------------------ +• [SLOW TEST] [85.358 seconds] +[sig-scheduling] SchedulerPreemption [Serial] +test/e2e/scheduling/framework.go:40 + PreemptionExecutionPath + test/e2e/scheduling/preemption.go:458 + runs ReplicaSets to verify preemption running path [Conformance] + test/e2e/scheduling/preemption.go:543 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] + test/e2e/framework/framework.go:186 + STEP: Creating a kubernetes client 03/07/23 03:17:39.764 + Mar 7 03:17:39.765: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 + STEP: Building a namespace api object, basename sched-preemption 03/07/23 03:17:39.765 + STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 03:17:39.778 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 03:17:39.78 + [BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] + test/e2e/scheduling/preemption.go:92 + Mar 7 03:17:39.791: INFO: Waiting up to 1m0s for all nodes to be ready + Mar 7 03:18:39.871: INFO: Waiting for terminating namespaces to be deleted... + [BeforeEach] PreemptionExecutionPath + test/e2e/framework/framework.go:186 + STEP: Creating a kubernetes client 03/07/23 03:18:39.873 + Mar 7 03:18:39.874: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 + STEP: Building a namespace api object, basename sched-preemption-path 03/07/23 03:18:39.874 + STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 03:18:39.921 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 03:18:39.923 + [BeforeEach] PreemptionExecutionPath + test/e2e/scheduling/preemption.go:496 + STEP: Finding an available node 03/07/23 03:18:39.925 + STEP: Trying to launch a pod without a label to get a node which can launch it. 03/07/23 03:18:39.925 + Mar 7 03:18:39.931: INFO: Waiting up to 1m0s for pod "without-label" in namespace "sched-preemption-path-2463" to be "running" + Mar 7 03:18:39.933: INFO: Pod "without-label": Phase="Pending", Reason="", readiness=false. Elapsed: 1.815598ms + Mar 7 03:18:41.936: INFO: Pod "without-label": Phase="Running", Reason="", readiness=true. Elapsed: 2.005004246s + Mar 7 03:18:41.936: INFO: Pod "without-label" satisfied condition "running" + STEP: Explicitly delete pod here to free the resource it takes. 03/07/23 03:18:41.938 + Mar 7 03:18:41.945: INFO: found a healthy node: node-2 + [It] runs ReplicaSets to verify preemption running path [Conformance] + test/e2e/scheduling/preemption.go:543 + Mar 7 03:18:54.005: INFO: pods created so far: [1 1 1] + Mar 7 03:18:54.005: INFO: length of pods created so far: 3 + Mar 7 03:18:58.014: INFO: pods created so far: [2 2 1] + [AfterEach] PreemptionExecutionPath + test/e2e/framework/framework.go:187 + Mar 7 03:19:05.015: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + STEP: Destroying namespace "sched-preemption-path-2463" for this suite. 
03/07/23 03:19:05.019 + [AfterEach] PreemptionExecutionPath + test/e2e/scheduling/preemption.go:470 + [AfterEach] [sig-scheduling] SchedulerPreemption [Serial] + test/e2e/framework/framework.go:187 + Mar 7 03:19:05.070: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + STEP: Destroying namespace "sched-preemption-3905" for this suite. 03/07/23 03:19:05.074 + [AfterEach] [sig-scheduling] SchedulerPreemption [Serial] + test/e2e/scheduling/preemption.go:80 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSSSSS +------------------------------ +[sig-apps] Job + should delete a job [Conformance] + test/e2e/apps/job.go:309 +[BeforeEach] [sig-apps] Job + test/e2e/framework/framework.go:186 +STEP: Creating a kubernetes client 03/07/23 03:19:05.123 +Mar 7 03:19:05.124: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 +STEP: Building a namespace api object, basename job 03/07/23 03:19:05.125 +STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 03:19:05.139 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 03:19:05.141 +[It] should delete a job [Conformance] + test/e2e/apps/job.go:309 +STEP: Creating a job 03/07/23 03:19:05.142 +STEP: Ensuring active pods == parallelism 03/07/23 03:19:05.147 +STEP: delete a job 03/07/23 03:19:07.15 +STEP: deleting Job.batch foo in namespace job-8145, will wait for the garbage collector to delete the pods 03/07/23 03:19:07.15 +Mar 7 03:19:07.231: INFO: Deleting Job.batch foo took: 27.381265ms +Mar 7 03:19:07.332: INFO: Terminating Job.batch foo pods took: 100.919918ms +STEP: Ensuring job was deleted 03/07/23 03:19:39.432 +[AfterEach] [sig-apps] Job + test/e2e/framework/framework.go:187 +Mar 7 03:19:39.435: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "job-8145" for this suite. 03/07/23 03:19:39.438 +{"msg":"PASSED [sig-apps] Job should delete a job [Conformance]","completed":105,"skipped":1820,"failed":0} +------------------------------ +• [SLOW TEST] [34.320 seconds] +[sig-apps] Job +test/e2e/apps/framework.go:23 + should delete a job [Conformance] + test/e2e/apps/job.go:309 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-apps] Job + test/e2e/framework/framework.go:186 + STEP: Creating a kubernetes client 03/07/23 03:19:05.123 + Mar 7 03:19:05.124: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 + STEP: Building a namespace api object, basename job 03/07/23 03:19:05.125 + STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 03:19:05.139 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 03:19:05.141 + [It] should delete a job [Conformance] + test/e2e/apps/job.go:309 + STEP: Creating a job 03/07/23 03:19:05.142 + STEP: Ensuring active pods == parallelism 03/07/23 03:19:05.147 + STEP: delete a job 03/07/23 03:19:07.15 + STEP: deleting Job.batch foo in namespace job-8145, will wait for the garbage collector to delete the pods 03/07/23 03:19:07.15 + Mar 7 03:19:07.231: INFO: Deleting Job.batch foo took: 27.381265ms + Mar 7 03:19:07.332: INFO: Terminating Job.batch foo pods took: 100.919918ms + STEP: Ensuring job was deleted 03/07/23 03:19:39.432 + [AfterEach] [sig-apps] Job + test/e2e/framework/framework.go:187 + Mar 7 03:19:39.435: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + STEP: Destroying namespace "job-8145" for this suite. 
03/07/23 03:19:39.438 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSSSS +------------------------------ +[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + works for multiple CRDs of same group and version but different kinds [Conformance] + test/e2e/apimachinery/crd_publish_openapi.go:356 +[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + test/e2e/framework/framework.go:186 +STEP: Creating a kubernetes client 03/07/23 03:19:39.444 +Mar 7 03:19:39.444: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 +STEP: Building a namespace api object, basename crd-publish-openapi 03/07/23 03:19:39.445 +STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 03:19:39.456 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 03:19:39.458 +[It] works for multiple CRDs of same group and version but different kinds [Conformance] + test/e2e/apimachinery/crd_publish_openapi.go:356 +STEP: CRs in the same group and version but different kinds (two CRDs) show up in OpenAPI documentation 03/07/23 03:19:39.46 +Mar 7 03:19:39.460: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 +Mar 7 03:19:45.118: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 +[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + test/e2e/framework/framework.go:187 +Mar 7 03:20:04.187: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "crd-publish-openapi-8208" for this suite. 03/07/23 03:20:04.194 +{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group and version but different kinds [Conformance]","completed":106,"skipped":1833,"failed":0} +------------------------------ +• [SLOW TEST] [24.774 seconds] +[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] +test/e2e/apimachinery/framework.go:23 + works for multiple CRDs of same group and version but different kinds [Conformance] + test/e2e/apimachinery/crd_publish_openapi.go:356 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + test/e2e/framework/framework.go:186 + STEP: Creating a kubernetes client 03/07/23 03:19:39.444 + Mar 7 03:19:39.444: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 + STEP: Building a namespace api object, basename crd-publish-openapi 03/07/23 03:19:39.445 + STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 03:19:39.456 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 03:19:39.458 + [It] works for multiple CRDs of same group and version but different kinds [Conformance] + test/e2e/apimachinery/crd_publish_openapi.go:356 + STEP: CRs in the same group and version but different kinds (two CRDs) show up in OpenAPI documentation 03/07/23 03:19:39.46 + Mar 7 03:19:39.460: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 + Mar 7 03:19:45.118: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 + [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + test/e2e/framework/framework.go:187 + Mar 7 03:20:04.187: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + STEP: Destroying namespace "crd-publish-openapi-8208" for this suite. 
03/07/23 03:20:04.194 + << End Captured GinkgoWriter Output +------------------------------ +[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + works for CRD preserving unknown fields in an embedded object [Conformance] + test/e2e/apimachinery/crd_publish_openapi.go:235 +[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + test/e2e/framework/framework.go:186 +STEP: Creating a kubernetes client 03/07/23 03:20:04.219 +Mar 7 03:20:04.219: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 +STEP: Building a namespace api object, basename crd-publish-openapi 03/07/23 03:20:04.22 +STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 03:20:04.236 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 03:20:04.238 +[It] works for CRD preserving unknown fields in an embedded object [Conformance] + test/e2e/apimachinery/crd_publish_openapi.go:235 +Mar 7 03:20:04.240: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 +STEP: kubectl validation (kubectl create and apply) allows request with any unknown properties 03/07/23 03:20:09.355 +Mar 7 03:20:09.355: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1093879902 --namespace=crd-publish-openapi-6247 --namespace=crd-publish-openapi-6247 create -f -' +Mar 7 03:20:10.282: INFO: stderr: "" +Mar 7 03:20:10.282: INFO: stdout: "e2e-test-crd-publish-openapi-4888-crd.crd-publish-openapi-test-unknown-in-nested.example.com/test-cr created\n" +Mar 7 03:20:10.282: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1093879902 --namespace=crd-publish-openapi-6247 --namespace=crd-publish-openapi-6247 delete e2e-test-crd-publish-openapi-4888-crds test-cr' +Mar 7 03:20:10.499: INFO: stderr: "" +Mar 7 03:20:10.499: INFO: stdout: "e2e-test-crd-publish-openapi-4888-crd.crd-publish-openapi-test-unknown-in-nested.example.com \"test-cr\" deleted\n" +Mar 7 03:20:10.499: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1093879902 --namespace=crd-publish-openapi-6247 --namespace=crd-publish-openapi-6247 apply -f -' +Mar 7 03:20:11.265: INFO: stderr: "" +Mar 7 03:20:11.265: INFO: stdout: "e2e-test-crd-publish-openapi-4888-crd.crd-publish-openapi-test-unknown-in-nested.example.com/test-cr created\n" +Mar 7 03:20:11.265: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1093879902 --namespace=crd-publish-openapi-6247 --namespace=crd-publish-openapi-6247 delete e2e-test-crd-publish-openapi-4888-crds test-cr' +Mar 7 03:20:11.427: INFO: stderr: "" +Mar 7 03:20:11.427: INFO: stdout: "e2e-test-crd-publish-openapi-4888-crd.crd-publish-openapi-test-unknown-in-nested.example.com \"test-cr\" deleted\n" +STEP: kubectl explain works to explain CR 03/07/23 03:20:11.427 +Mar 7 03:20:11.427: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1093879902 --namespace=crd-publish-openapi-6247 explain e2e-test-crd-publish-openapi-4888-crds' +Mar 7 03:20:12.243: INFO: stderr: "" +Mar 7 03:20:12.243: INFO: stdout: "KIND: e2e-test-crd-publish-openapi-4888-crd\nVERSION: crd-publish-openapi-test-unknown-in-nested.example.com/v1\n\nDESCRIPTION:\n preserve-unknown-properties in nested field for Testing\n\nFIELDS:\n apiVersion\t\n APIVersion defines the versioned schema of this representation of an\n object. Servers should convert recognized schemas to the latest internal\n value, and may reject unrecognized values. 
More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\n\n kind\t\n Kind is a string value representing the REST resource this object\n represents. Servers may infer this from the endpoint the client submits\n requests to. Cannot be updated. In CamelCase. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\n\n metadata\t\n Standard object's metadata. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n spec\t<>\n Specification of Waldo\n\n status\t\n Status of Waldo\n\n" +[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + test/e2e/framework/framework.go:187 +Mar 7 03:20:17.223: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "crd-publish-openapi-6247" for this suite. 03/07/23 03:20:17.23 +{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields in an embedded object [Conformance]","completed":107,"skipped":1833,"failed":0} +------------------------------ +• [SLOW TEST] [13.015 seconds] +[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] +test/e2e/apimachinery/framework.go:23 + works for CRD preserving unknown fields in an embedded object [Conformance] + test/e2e/apimachinery/crd_publish_openapi.go:235 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + test/e2e/framework/framework.go:186 + STEP: Creating a kubernetes client 03/07/23 03:20:04.219 + Mar 7 03:20:04.219: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 + STEP: Building a namespace api object, basename crd-publish-openapi 03/07/23 03:20:04.22 + STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 03:20:04.236 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 03:20:04.238 + [It] works for CRD preserving unknown fields in an embedded object [Conformance] + test/e2e/apimachinery/crd_publish_openapi.go:235 + Mar 7 03:20:04.240: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 + STEP: kubectl validation (kubectl create and apply) allows request with any unknown properties 03/07/23 03:20:09.355 + Mar 7 03:20:09.355: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1093879902 --namespace=crd-publish-openapi-6247 --namespace=crd-publish-openapi-6247 create -f -' + Mar 7 03:20:10.282: INFO: stderr: "" + Mar 7 03:20:10.282: INFO: stdout: "e2e-test-crd-publish-openapi-4888-crd.crd-publish-openapi-test-unknown-in-nested.example.com/test-cr created\n" + Mar 7 03:20:10.282: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1093879902 --namespace=crd-publish-openapi-6247 --namespace=crd-publish-openapi-6247 delete e2e-test-crd-publish-openapi-4888-crds test-cr' + Mar 7 03:20:10.499: INFO: stderr: "" + Mar 7 03:20:10.499: INFO: stdout: "e2e-test-crd-publish-openapi-4888-crd.crd-publish-openapi-test-unknown-in-nested.example.com \"test-cr\" deleted\n" + Mar 7 03:20:10.499: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1093879902 --namespace=crd-publish-openapi-6247 --namespace=crd-publish-openapi-6247 apply -f -' + Mar 7 03:20:11.265: INFO: stderr: "" + Mar 7 03:20:11.265: INFO: stdout: "e2e-test-crd-publish-openapi-4888-crd.crd-publish-openapi-test-unknown-in-nested.example.com/test-cr created\n" + Mar 7 03:20:11.265: 
INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1093879902 --namespace=crd-publish-openapi-6247 --namespace=crd-publish-openapi-6247 delete e2e-test-crd-publish-openapi-4888-crds test-cr' + Mar 7 03:20:11.427: INFO: stderr: "" + Mar 7 03:20:11.427: INFO: stdout: "e2e-test-crd-publish-openapi-4888-crd.crd-publish-openapi-test-unknown-in-nested.example.com \"test-cr\" deleted\n" + STEP: kubectl explain works to explain CR 03/07/23 03:20:11.427 + Mar 7 03:20:11.427: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1093879902 --namespace=crd-publish-openapi-6247 explain e2e-test-crd-publish-openapi-4888-crds' + Mar 7 03:20:12.243: INFO: stderr: "" + Mar 7 03:20:12.243: INFO: stdout: "KIND: e2e-test-crd-publish-openapi-4888-crd\nVERSION: crd-publish-openapi-test-unknown-in-nested.example.com/v1\n\nDESCRIPTION:\n preserve-unknown-properties in nested field for Testing\n\nFIELDS:\n apiVersion\t\n APIVersion defines the versioned schema of this representation of an\n object. Servers should convert recognized schemas to the latest internal\n value, and may reject unrecognized values. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\n\n kind\t\n Kind is a string value representing the REST resource this object\n represents. Servers may infer this from the endpoint the client submits\n requests to. Cannot be updated. In CamelCase. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\n\n metadata\t\n Standard object's metadata. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n spec\t<>\n Specification of Waldo\n\n status\t\n Status of Waldo\n\n" + [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + test/e2e/framework/framework.go:187 + Mar 7 03:20:17.223: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + STEP: Destroying namespace "crd-publish-openapi-6247" for this suite. 
03/07/23 03:20:17.23 + << End Captured GinkgoWriter Output +------------------------------ +SSSS +------------------------------ +[sig-cli] Kubectl client Kubectl api-versions + should check if v1 is in available api versions [Conformance] + test/e2e/kubectl/kubectl.go:822 +[BeforeEach] [sig-cli] Kubectl client + test/e2e/framework/framework.go:186 +STEP: Creating a kubernetes client 03/07/23 03:20:17.235 +Mar 7 03:20:17.235: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 +STEP: Building a namespace api object, basename kubectl 03/07/23 03:20:17.236 +STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 03:20:17.249 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 03:20:17.252 +[BeforeEach] [sig-cli] Kubectl client + test/e2e/kubectl/kubectl.go:272 +[It] should check if v1 is in available api versions [Conformance] + test/e2e/kubectl/kubectl.go:822 +STEP: validating api versions 03/07/23 03:20:17.254 +Mar 7 03:20:17.254: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1093879902 --namespace=kubectl-4959 api-versions' +Mar 7 03:20:17.303: INFO: stderr: "" +Mar 7 03:20:17.303: INFO: stdout: "admissionregistration.k8s.io/v1\napiextensions.k8s.io/v1\napiregistration.k8s.io/v1\napps/v1\nauthentication.k8s.io/v1\nauthorization.k8s.io/v1\nautoscaling/v1\nautoscaling/v2\nautoscaling/v2beta2\nbatch/v1\ncertificates.k8s.io/v1\ncoordination.k8s.io/v1\ncrd.projectcalico.org/v1\ncustom.metrics.k8s.io/v1beta1\ndex.coreos.com/v1\ndiscovery.k8s.io/v1\nevents.k8s.io/v1\nflowcontrol.apiserver.k8s.io/v1beta1\nflowcontrol.apiserver.k8s.io/v1beta2\nmetalk8s.scality.com/v1alpha1\nmetrics.k8s.io/v1beta1\nmonitoring.coreos.com/v1\nmonitoring.coreos.com/v1alpha1\nnetworking.k8s.io/v1\nnode.k8s.io/v1\npolicy/v1\nrbac.authorization.k8s.io/v1\nscheduling.k8s.io/v1\nstorage.k8s.io/v1\nstorage.k8s.io/v1beta1\nstorage.metalk8s.scality.com/v1alpha1\nv1\n" +[AfterEach] [sig-cli] Kubectl client + test/e2e/framework/framework.go:187 +Mar 7 03:20:17.303: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "kubectl-4959" for this suite. 
03/07/23 03:20:17.307 +{"msg":"PASSED [sig-cli] Kubectl client Kubectl api-versions should check if v1 is in available api versions [Conformance]","completed":108,"skipped":1837,"failed":0} +------------------------------ +• [0.078 seconds] +[sig-cli] Kubectl client +test/e2e/kubectl/framework.go:23 + Kubectl api-versions + test/e2e/kubectl/kubectl.go:816 + should check if v1 is in available api versions [Conformance] + test/e2e/kubectl/kubectl.go:822 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-cli] Kubectl client + test/e2e/framework/framework.go:186 + STEP: Creating a kubernetes client 03/07/23 03:20:17.235 + Mar 7 03:20:17.235: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 + STEP: Building a namespace api object, basename kubectl 03/07/23 03:20:17.236 + STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 03:20:17.249 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 03:20:17.252 + [BeforeEach] [sig-cli] Kubectl client + test/e2e/kubectl/kubectl.go:272 + [It] should check if v1 is in available api versions [Conformance] + test/e2e/kubectl/kubectl.go:822 + STEP: validating api versions 03/07/23 03:20:17.254 + Mar 7 03:20:17.254: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1093879902 --namespace=kubectl-4959 api-versions' + Mar 7 03:20:17.303: INFO: stderr: "" + Mar 7 03:20:17.303: INFO: stdout: "admissionregistration.k8s.io/v1\napiextensions.k8s.io/v1\napiregistration.k8s.io/v1\napps/v1\nauthentication.k8s.io/v1\nauthorization.k8s.io/v1\nautoscaling/v1\nautoscaling/v2\nautoscaling/v2beta2\nbatch/v1\ncertificates.k8s.io/v1\ncoordination.k8s.io/v1\ncrd.projectcalico.org/v1\ncustom.metrics.k8s.io/v1beta1\ndex.coreos.com/v1\ndiscovery.k8s.io/v1\nevents.k8s.io/v1\nflowcontrol.apiserver.k8s.io/v1beta1\nflowcontrol.apiserver.k8s.io/v1beta2\nmetalk8s.scality.com/v1alpha1\nmetrics.k8s.io/v1beta1\nmonitoring.coreos.com/v1\nmonitoring.coreos.com/v1alpha1\nnetworking.k8s.io/v1\nnode.k8s.io/v1\npolicy/v1\nrbac.authorization.k8s.io/v1\nscheduling.k8s.io/v1\nstorage.k8s.io/v1\nstorage.k8s.io/v1beta1\nstorage.metalk8s.scality.com/v1alpha1\nv1\n" + [AfterEach] [sig-cli] Kubectl client + test/e2e/framework/framework.go:187 + Mar 7 03:20:17.303: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + STEP: Destroying namespace "kubectl-4959" for this suite. 
03/07/23 03:20:17.307 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + should deny crd creation [Conformance] + test/e2e/apimachinery/webhook.go:307 +[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/framework/framework.go:186 +STEP: Creating a kubernetes client 03/07/23 03:20:17.315 +Mar 7 03:20:17.315: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 +STEP: Building a namespace api object, basename webhook 03/07/23 03:20:17.316 +STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 03:20:17.33 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 03:20:17.332 +[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/apimachinery/webhook.go:89 +STEP: Setting up server cert 03/07/23 03:20:17.348 +STEP: Create role binding to let webhook read extension-apiserver-authentication 03/07/23 03:20:17.994 +STEP: Deploying the webhook pod 03/07/23 03:20:18 +STEP: Wait for the deployment to be ready 03/07/23 03:20:18.01 +Mar 7 03:20:18.015: INFO: new replicaset for deployment "sample-webhook-deployment" is yet to be created +STEP: Deploying the webhook service 03/07/23 03:20:20.022 +STEP: Verifying the service has paired with the endpoint 03/07/23 03:20:20.035 +Mar 7 03:20:21.035: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 +[It] should deny crd creation [Conformance] + test/e2e/apimachinery/webhook.go:307 +STEP: Registering the crd webhook via the AdmissionRegistration API 03/07/23 03:20:21.038 +STEP: Creating a custom resource definition that should be denied by the webhook 03/07/23 03:20:21.078 +Mar 7 03:20:21.078: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 +[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/framework/framework.go:187 +Mar 7 03:20:21.091: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "webhook-844" for this suite. 03/07/23 03:20:21.094 +STEP: Destroying namespace "webhook-844-markers" for this suite. 
03/07/23 03:20:21.099 +[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/apimachinery/webhook.go:104 +{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","completed":109,"skipped":1883,"failed":0} +------------------------------ +• [3.843 seconds] +[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] +test/e2e/apimachinery/framework.go:23 + should deny crd creation [Conformance] + test/e2e/apimachinery/webhook.go:307 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/framework/framework.go:186 + STEP: Creating a kubernetes client 03/07/23 03:20:17.315 + Mar 7 03:20:17.315: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 + STEP: Building a namespace api object, basename webhook 03/07/23 03:20:17.316 + STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 03:20:17.33 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 03:20:17.332 + [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/apimachinery/webhook.go:89 + STEP: Setting up server cert 03/07/23 03:20:17.348 + STEP: Create role binding to let webhook read extension-apiserver-authentication 03/07/23 03:20:17.994 + STEP: Deploying the webhook pod 03/07/23 03:20:18 + STEP: Wait for the deployment to be ready 03/07/23 03:20:18.01 + Mar 7 03:20:18.015: INFO: new replicaset for deployment "sample-webhook-deployment" is yet to be created + STEP: Deploying the webhook service 03/07/23 03:20:20.022 + STEP: Verifying the service has paired with the endpoint 03/07/23 03:20:20.035 + Mar 7 03:20:21.035: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 + [It] should deny crd creation [Conformance] + test/e2e/apimachinery/webhook.go:307 + STEP: Registering the crd webhook via the AdmissionRegistration API 03/07/23 03:20:21.038 + STEP: Creating a custom resource definition that should be denied by the webhook 03/07/23 03:20:21.078 + Mar 7 03:20:21.078: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 + [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/framework/framework.go:187 + Mar 7 03:20:21.091: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + STEP: Destroying namespace "webhook-844" for this suite. 03/07/23 03:20:21.094 + STEP: Destroying namespace "webhook-844-markers" for this suite. 
03/07/23 03:20:21.099 + [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/apimachinery/webhook.go:104 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSSSSSSS +------------------------------ +[sig-storage] ConfigMap + should be consumable from pods in volume [NodeConformance] [Conformance] + test/e2e/common/storage/configmap_volume.go:46 +[BeforeEach] [sig-storage] ConfigMap + test/e2e/framework/framework.go:186 +STEP: Creating a kubernetes client 03/07/23 03:20:21.159 +Mar 7 03:20:21.159: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 +STEP: Building a namespace api object, basename configmap 03/07/23 03:20:21.16 +STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 03:20:21.193 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 03:20:21.196 +[It] should be consumable from pods in volume [NodeConformance] [Conformance] + test/e2e/common/storage/configmap_volume.go:46 +STEP: Creating configMap with name configmap-test-volume-cf2c1b3a-38a6-4f53-9444-1b1ff85701af 03/07/23 03:20:21.198 +STEP: Creating a pod to test consume configMaps 03/07/23 03:20:21.202 +Mar 7 03:20:21.214: INFO: Waiting up to 5m0s for pod "pod-configmaps-5cae2c8f-55fd-42a7-971f-e712717643d3" in namespace "configmap-4763" to be "Succeeded or Failed" +Mar 7 03:20:21.219: INFO: Pod "pod-configmaps-5cae2c8f-55fd-42a7-971f-e712717643d3": Phase="Pending", Reason="", readiness=false. Elapsed: 4.985773ms +Mar 7 03:20:23.222: INFO: Pod "pod-configmaps-5cae2c8f-55fd-42a7-971f-e712717643d3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008165006s +Mar 7 03:20:25.223: INFO: Pod "pod-configmaps-5cae2c8f-55fd-42a7-971f-e712717643d3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.008571218s +STEP: Saw pod success 03/07/23 03:20:25.223 +Mar 7 03:20:25.223: INFO: Pod "pod-configmaps-5cae2c8f-55fd-42a7-971f-e712717643d3" satisfied condition "Succeeded or Failed" +Mar 7 03:20:25.225: INFO: Trying to get logs from node node-2 pod pod-configmaps-5cae2c8f-55fd-42a7-971f-e712717643d3 container agnhost-container: +STEP: delete the pod 03/07/23 03:20:25.24 +Mar 7 03:20:25.251: INFO: Waiting for pod pod-configmaps-5cae2c8f-55fd-42a7-971f-e712717643d3 to disappear +Mar 7 03:20:25.253: INFO: Pod pod-configmaps-5cae2c8f-55fd-42a7-971f-e712717643d3 no longer exists +[AfterEach] [sig-storage] ConfigMap + test/e2e/framework/framework.go:187 +Mar 7 03:20:25.253: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "configmap-4763" for this suite. 
03/07/23 03:20:25.256 +{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume [NodeConformance] [Conformance]","completed":110,"skipped":1899,"failed":0} +------------------------------ +• [4.101 seconds] +[sig-storage] ConfigMap +test/e2e/common/storage/framework.go:23 + should be consumable from pods in volume [NodeConformance] [Conformance] + test/e2e/common/storage/configmap_volume.go:46 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-storage] ConfigMap + test/e2e/framework/framework.go:186 + STEP: Creating a kubernetes client 03/07/23 03:20:21.159 + Mar 7 03:20:21.159: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 + STEP: Building a namespace api object, basename configmap 03/07/23 03:20:21.16 + STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 03:20:21.193 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 03:20:21.196 + [It] should be consumable from pods in volume [NodeConformance] [Conformance] + test/e2e/common/storage/configmap_volume.go:46 + STEP: Creating configMap with name configmap-test-volume-cf2c1b3a-38a6-4f53-9444-1b1ff85701af 03/07/23 03:20:21.198 + STEP: Creating a pod to test consume configMaps 03/07/23 03:20:21.202 + Mar 7 03:20:21.214: INFO: Waiting up to 5m0s for pod "pod-configmaps-5cae2c8f-55fd-42a7-971f-e712717643d3" in namespace "configmap-4763" to be "Succeeded or Failed" + Mar 7 03:20:21.219: INFO: Pod "pod-configmaps-5cae2c8f-55fd-42a7-971f-e712717643d3": Phase="Pending", Reason="", readiness=false. Elapsed: 4.985773ms + Mar 7 03:20:23.222: INFO: Pod "pod-configmaps-5cae2c8f-55fd-42a7-971f-e712717643d3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008165006s + Mar 7 03:20:25.223: INFO: Pod "pod-configmaps-5cae2c8f-55fd-42a7-971f-e712717643d3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.008571218s + STEP: Saw pod success 03/07/23 03:20:25.223 + Mar 7 03:20:25.223: INFO: Pod "pod-configmaps-5cae2c8f-55fd-42a7-971f-e712717643d3" satisfied condition "Succeeded or Failed" + Mar 7 03:20:25.225: INFO: Trying to get logs from node node-2 pod pod-configmaps-5cae2c8f-55fd-42a7-971f-e712717643d3 container agnhost-container: + STEP: delete the pod 03/07/23 03:20:25.24 + Mar 7 03:20:25.251: INFO: Waiting for pod pod-configmaps-5cae2c8f-55fd-42a7-971f-e712717643d3 to disappear + Mar 7 03:20:25.253: INFO: Pod pod-configmaps-5cae2c8f-55fd-42a7-971f-e712717643d3 no longer exists + [AfterEach] [sig-storage] ConfigMap + test/e2e/framework/framework.go:187 + Mar 7 03:20:25.253: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + STEP: Destroying namespace "configmap-4763" for this suite. 
03/07/23 03:20:25.256 + << End Captured GinkgoWriter Output +------------------------------ +SS +------------------------------ +[sig-node] Container Runtime blackbox test on terminated container + should report termination message if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance] + test/e2e/common/node/runtime.go:194 +[BeforeEach] [sig-node] Container Runtime + test/e2e/framework/framework.go:186 +STEP: Creating a kubernetes client 03/07/23 03:20:25.261 +Mar 7 03:20:25.261: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 +STEP: Building a namespace api object, basename container-runtime 03/07/23 03:20:25.262 +STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 03:20:25.273 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 03:20:25.275 +[It] should report termination message if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance] + test/e2e/common/node/runtime.go:194 +STEP: create the container 03/07/23 03:20:25.277 +STEP: wait for the container to reach Succeeded 03/07/23 03:20:25.283 +STEP: get the container status 03/07/23 03:20:29.299 +STEP: the container should be terminated 03/07/23 03:20:29.322 +STEP: the termination message should be set 03/07/23 03:20:29.322 +Mar 7 03:20:29.322: INFO: Expected: &{DONE} to match Container's Termination Message: DONE -- +STEP: delete the container 03/07/23 03:20:29.322 +[AfterEach] [sig-node] Container Runtime + test/e2e/framework/framework.go:187 +Mar 7 03:20:29.333: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "container-runtime-513" for this suite. 03/07/23 03:20:29.336 +{"msg":"PASSED [sig-node] Container Runtime blackbox test on terminated container should report termination message if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]","completed":111,"skipped":1901,"failed":0} +------------------------------ +• [4.079 seconds] +[sig-node] Container Runtime +test/e2e/common/node/framework.go:23 + blackbox test + test/e2e/common/node/runtime.go:43 + on terminated container + test/e2e/common/node/runtime.go:136 + should report termination message if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance] + test/e2e/common/node/runtime.go:194 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-node] Container Runtime + test/e2e/framework/framework.go:186 + STEP: Creating a kubernetes client 03/07/23 03:20:25.261 + Mar 7 03:20:25.261: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 + STEP: Building a namespace api object, basename container-runtime 03/07/23 03:20:25.262 + STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 03:20:25.273 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 03:20:25.275 + [It] should report termination message if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance] + test/e2e/common/node/runtime.go:194 + STEP: create the container 03/07/23 03:20:25.277 + STEP: wait for the container to reach Succeeded 03/07/23 03:20:25.283 + STEP: get the container status 03/07/23 03:20:29.299 + STEP: the container should be terminated 03/07/23 03:20:29.322 + STEP: the termination message should be set 03/07/23 03:20:29.322 + Mar 7 03:20:29.322: INFO: Expected: &{DONE} to match Container's Termination 
Message: DONE -- + STEP: delete the container 03/07/23 03:20:29.322 + [AfterEach] [sig-node] Container Runtime + test/e2e/framework/framework.go:187 + Mar 7 03:20:29.333: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + STEP: Destroying namespace "container-runtime-513" for this suite. 03/07/23 03:20:29.336 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSS +------------------------------ +[sig-scheduling] SchedulerPredicates [Serial] + validates that NodeSelector is respected if matching [Conformance] + test/e2e/scheduling/predicates.go:461 +[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] + test/e2e/framework/framework.go:186 +STEP: Creating a kubernetes client 03/07/23 03:20:29.341 +Mar 7 03:20:29.341: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 +STEP: Building a namespace api object, basename sched-pred 03/07/23 03:20:29.342 +STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 03:20:29.359 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 03:20:29.362 +[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] + test/e2e/scheduling/predicates.go:92 +Mar 7 03:20:29.364: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready +Mar 7 03:20:29.376: INFO: Waiting for terminating namespaces to be deleted... +Mar 7 03:20:29.379: INFO: +Logging pods the apiserver thinks is on node bootstrap before test +Mar 7 03:20:29.392: INFO: apiserver-proxy-bootstrap from kube-system started at 2023-03-07 00:42:52 +0000 UTC (1 container statuses recorded) +Mar 7 03:20:29.392: INFO: Container nginx ready: true, restart count 0 +Mar 7 03:20:29.392: INFO: backup-747d8c577b-wdcvl from kube-system started at 2023-03-07 00:44:32 +0000 UTC (1 container statuses recorded) +Mar 7 03:20:29.392: INFO: Container backup ready: true, restart count 0 +Mar 7 03:20:29.392: INFO: backup-replication-wkdpp-lt4dt from kube-system started at 2023-03-07 00:47:50 +0000 UTC (1 container statuses recorded) +Mar 7 03:20:29.392: INFO: Container backup-replication ready: false, restart count 0 +Mar 7 03:20:29.392: INFO: calico-kube-controllers-59685599d8-pvn74 from kube-system started at 2023-03-07 00:44:32 +0000 UTC (1 container statuses recorded) +Mar 7 03:20:29.392: INFO: Container calico-kube-controllers ready: true, restart count 0 +Mar 7 03:20:29.392: INFO: calico-node-mlncm from kube-system started at 2023-03-07 02:23:53 +0000 UTC (1 container statuses recorded) +Mar 7 03:20:29.392: INFO: Container calico-node ready: true, restart count 0 +Mar 7 03:20:29.392: INFO: coredns-5d7b997fcf-2j4jw from kube-system started at 2023-03-07 02:57:39 +0000 UTC (1 container statuses recorded) +Mar 7 03:20:29.392: INFO: Container coredns ready: true, restart count 0 +Mar 7 03:20:29.392: INFO: etcd-bootstrap from kube-system started at 2023-03-07 00:43:13 +0000 UTC (1 container statuses recorded) +Mar 7 03:20:29.392: INFO: Container etcd ready: true, restart count 0 +Mar 7 03:20:29.392: INFO: kube-apiserver-bootstrap from kube-system started at 2023-03-07 00:43:25 +0000 UTC (1 container statuses recorded) +Mar 7 03:20:29.392: INFO: Container kube-apiserver ready: true, restart count 0 +Mar 7 03:20:29.392: INFO: kube-controller-manager-bootstrap from kube-system started at 2023-03-07 00:43:33 +0000 UTC (1 container statuses recorded) +Mar 7 03:20:29.392: INFO: Container kube-controller-manager ready: true, restart count 4 +Mar 7 03:20:29.392: INFO: kube-proxy-nlf5t from kube-system started at 2023-03-07 02:23:30 +0000 UTC (1 
container statuses recorded) +Mar 7 03:20:29.392: INFO: Container kube-proxy ready: true, restart count 0 +Mar 7 03:20:29.392: INFO: kube-scheduler-bootstrap from kube-system started at 2023-03-07 00:43:34 +0000 UTC (1 container statuses recorded) +Mar 7 03:20:29.392: INFO: Container kube-scheduler ready: true, restart count 3 +Mar 7 03:20:29.392: INFO: metalk8s-operator-controller-manager-7d4764b947-crj2f from kube-system started at 2023-03-07 00:44:32 +0000 UTC (1 container statuses recorded) +Mar 7 03:20:29.392: INFO: Container manager ready: true, restart count 5 +Mar 7 03:20:29.392: INFO: repositories-bootstrap from kube-system started at 2023-03-07 02:07:15 +0000 UTC (1 container statuses recorded) +Mar 7 03:20:29.392: INFO: Container repositories ready: true, restart count 1 +Mar 7 03:20:29.392: INFO: salt-master-bootstrap from kube-system started at 2023-03-07 00:42:29 +0000 UTC (2 container statuses recorded) +Mar 7 03:20:29.392: INFO: Container salt-api ready: true, restart count 0 +Mar 7 03:20:29.392: INFO: Container salt-master ready: true, restart count 0 +Mar 7 03:20:29.392: INFO: storage-operator-78f5dcc84f-jwnzl from kube-system started at 2023-03-07 00:45:28 +0000 UTC (1 container statuses recorded) +Mar 7 03:20:29.392: INFO: Container manager ready: true, restart count 4 +Mar 7 03:20:29.392: INFO: dex-57f9db7c4-hbrhr from metalk8s-auth started at 2023-03-07 02:13:23 +0000 UTC (1 container statuses recorded) +Mar 7 03:20:29.392: INFO: Container dex ready: true, restart count 0 +Mar 7 03:20:29.392: INFO: dex-57f9db7c4-z6gh6 from metalk8s-auth started at 2023-03-07 02:13:23 +0000 UTC (1 container statuses recorded) +Mar 7 03:20:29.392: INFO: Container dex ready: true, restart count 0 +Mar 7 03:20:29.392: INFO: ingress-control-plane-managed-vip-n2qb6 from metalk8s-ingress started at 2023-03-07 02:05:37 +0000 UTC (1 container statuses recorded) +Mar 7 03:20:29.392: INFO: Container keepalived ready: true, restart count 0 +Mar 7 03:20:29.392: INFO: ingress-nginx-control-plane-controller-j9hsf from metalk8s-ingress started at 2023-03-07 00:45:27 +0000 UTC (1 container statuses recorded) +Mar 7 03:20:29.392: INFO: Container controller ready: true, restart count 0 +Mar 7 03:20:29.392: INFO: ingress-nginx-controller-vjnvw from metalk8s-ingress started at 2023-03-07 02:10:07 +0000 UTC (1 container statuses recorded) +Mar 7 03:20:29.392: INFO: Container controller ready: true, restart count 0 +Mar 7 03:20:29.392: INFO: ingress-nginx-defaultbackend-75c64bd745-65gwj from metalk8s-ingress started at 2023-03-07 00:45:24 +0000 UTC (1 container statuses recorded) +Mar 7 03:20:29.392: INFO: Container ingress-nginx-default-backend ready: true, restart count 0 +Mar 7 03:20:29.392: INFO: fluent-bit-dzhms from metalk8s-logging started at 2023-03-07 00:45:38 +0000 UTC (1 container statuses recorded) +Mar 7 03:20:29.392: INFO: Container fluent-bit ready: true, restart count 0 +Mar 7 03:20:29.392: INFO: metalk8s-alert-logger-84f87c86d-hflm5 from metalk8s-monitoring started at 2023-03-07 00:45:09 +0000 UTC (1 container statuses recorded) +Mar 7 03:20:29.392: INFO: Container metalk8s-alert-logger ready: true, restart count 0 +Mar 7 03:20:29.392: INFO: prometheus-adapter-6696954b59-qrxtn from metalk8s-monitoring started at 2023-03-07 00:45:34 +0000 UTC (1 container statuses recorded) +Mar 7 03:20:29.392: INFO: Container prometheus-adapter ready: true, restart count 0 +Mar 7 03:20:29.392: INFO: prometheus-operator-kube-state-metrics-f7d5dc499-t4szw from metalk8s-monitoring started at 2023-03-07 
00:45:19 +0000 UTC (1 container statuses recorded) +Mar 7 03:20:29.392: INFO: Container kube-state-metrics ready: true, restart count 0 +Mar 7 03:20:29.392: INFO: prometheus-operator-operator-864bc5b5d-8m6lq from metalk8s-monitoring started at 2023-03-07 00:45:19 +0000 UTC (1 container statuses recorded) +Mar 7 03:20:29.392: INFO: Container prometheus-operator ready: true, restart count 0 +Mar 7 03:20:29.392: INFO: prometheus-operator-prometheus-node-exporter-sl4bq from metalk8s-monitoring started at 2023-03-07 00:45:18 +0000 UTC (1 container statuses recorded) +Mar 7 03:20:29.392: INFO: Container node-exporter ready: true, restart count 0 +Mar 7 03:20:29.392: INFO: thanos-query-6b9dc579dd-ctlrl from metalk8s-monitoring started at 2023-03-07 00:45:22 +0000 UTC (1 container statuses recorded) +Mar 7 03:20:29.392: INFO: Container thanos-query ready: true, restart count 0 +Mar 7 03:20:29.392: INFO: metalk8s-ui-766c8b96cd-8cxcs from metalk8s-ui started at 2023-03-07 00:45:30 +0000 UTC (1 container statuses recorded) +Mar 7 03:20:29.392: INFO: Container metalk8s-ui ready: true, restart count 0 +Mar 7 03:20:29.392: INFO: metalk8s-ui-766c8b96cd-tsx5v from metalk8s-ui started at 2023-03-07 00:45:30 +0000 UTC (1 container statuses recorded) +Mar 7 03:20:29.392: INFO: Container metalk8s-ui ready: true, restart count 0 +Mar 7 03:20:29.392: INFO: sonobuoy-systemd-logs-daemon-set-ca72222986ab487c-t8mkz from sonobuoy started at 2023-03-07 02:25:01 +0000 UTC (2 container statuses recorded) +Mar 7 03:20:29.392: INFO: Container sonobuoy-worker ready: true, restart count 0 +Mar 7 03:20:29.392: INFO: Container systemd-logs ready: true, restart count 0 +Mar 7 03:20:29.392: INFO: +Logging pods the apiserver thinks is on node node-1 before test +Mar 7 03:20:29.403: INFO: apiserver-proxy-node-1 from kube-system started at 2023-03-07 00:58:52 +0000 UTC (1 container statuses recorded) +Mar 7 03:20:29.403: INFO: Container nginx ready: true, restart count 0 +Mar 7 03:20:29.403: INFO: calico-node-fvlp2 from kube-system started at 2023-03-07 02:23:42 +0000 UTC (1 container statuses recorded) +Mar 7 03:20:29.403: INFO: Container calico-node ready: true, restart count 0 +Mar 7 03:20:29.403: INFO: coredns-5d7b997fcf-z25jb from kube-system started at 2023-03-07 02:09:04 +0000 UTC (1 container statuses recorded) +Mar 7 03:20:29.403: INFO: Container coredns ready: true, restart count 0 +Mar 7 03:20:29.403: INFO: etcd-node-1 from kube-system started at 2023-03-07 00:59:16 +0000 UTC (1 container statuses recorded) +Mar 7 03:20:29.403: INFO: Container etcd ready: true, restart count 1 +Mar 7 03:20:29.403: INFO: kube-apiserver-node-1 from kube-system started at 2023-03-07 01:00:05 +0000 UTC (1 container statuses recorded) +Mar 7 03:20:29.404: INFO: Container kube-apiserver ready: true, restart count 0 +Mar 7 03:20:29.404: INFO: kube-controller-manager-node-1 from kube-system started at 2023-03-07 01:00:17 +0000 UTC (1 container statuses recorded) +Mar 7 03:20:29.404: INFO: Container kube-controller-manager ready: true, restart count 2 +Mar 7 03:20:29.404: INFO: kube-proxy-vpgsc from kube-system started at 2023-03-07 02:23:27 +0000 UTC (1 container statuses recorded) +Mar 7 03:20:29.404: INFO: Container kube-proxy ready: true, restart count 0 +Mar 7 03:20:29.404: INFO: kube-scheduler-node-1 from kube-system started at 2023-03-07 01:00:18 +0000 UTC (1 container statuses recorded) +Mar 7 03:20:29.404: INFO: Container kube-scheduler ready: true, restart count 0 +Mar 7 03:20:29.404: INFO: ingress-control-plane-managed-vip-w2cb9 from 
metalk8s-ingress started at 2023-03-07 02:05:37 +0000 UTC (1 container statuses recorded) +Mar 7 03:20:29.404: INFO: Container keepalived ready: true, restart count 0 +Mar 7 03:20:29.404: INFO: ingress-nginx-control-plane-controller-ck4wk from metalk8s-ingress started at 2023-03-07 00:59:58 +0000 UTC (1 container statuses recorded) +Mar 7 03:20:29.404: INFO: Container controller ready: true, restart count 0 +Mar 7 03:20:29.404: INFO: ingress-nginx-controller-9b2bj from metalk8s-ingress started at 2023-03-07 02:10:40 +0000 UTC (1 container statuses recorded) +Mar 7 03:20:29.404: INFO: Container controller ready: true, restart count 0 +Mar 7 03:20:29.404: INFO: fluent-bit-4nw7s from metalk8s-logging started at 2023-03-07 00:59:58 +0000 UTC (1 container statuses recorded) +Mar 7 03:20:29.404: INFO: Container fluent-bit ready: true, restart count 0 +Mar 7 03:20:29.404: INFO: loki-0 from metalk8s-logging started at 2023-03-07 01:11:45 +0000 UTC (1 container statuses recorded) +Mar 7 03:20:29.404: INFO: Container single-binary ready: true, restart count 0 +Mar 7 03:20:29.404: INFO: alertmanager-prometheus-operator-alertmanager-0 from metalk8s-monitoring started at 2023-03-07 01:11:00 +0000 UTC (2 container statuses recorded) +Mar 7 03:20:29.404: INFO: Container alertmanager ready: true, restart count 0 +Mar 7 03:20:29.404: INFO: Container config-reloader ready: true, restart count 0 +Mar 7 03:20:29.404: INFO: prometheus-operator-grafana-74d86d5965-nj6pq from metalk8s-monitoring started at 2023-03-07 02:57:39 +0000 UTC (3 container statuses recorded) +Mar 7 03:20:29.404: INFO: Container grafana ready: true, restart count 0 +Mar 7 03:20:29.404: INFO: Container grafana-sc-dashboard ready: true, restart count 0 +Mar 7 03:20:29.404: INFO: Container grafana-sc-datasources ready: true, restart count 0 +Mar 7 03:20:29.404: INFO: prometheus-operator-prometheus-node-exporter-4plkr from metalk8s-monitoring started at 2023-03-07 00:58:56 +0000 UTC (1 container statuses recorded) +Mar 7 03:20:29.404: INFO: Container node-exporter ready: true, restart count 0 +Mar 7 03:20:29.404: INFO: prometheus-prometheus-operator-prometheus-0 from metalk8s-monitoring started at 2023-03-07 01:11:10 +0000 UTC (3 container statuses recorded) +Mar 7 03:20:29.404: INFO: Container config-reloader ready: true, restart count 0 +Mar 7 03:20:29.404: INFO: Container prometheus ready: true, restart count 0 +Mar 7 03:20:29.404: INFO: Container thanos-sidecar ready: true, restart count 0 +Mar 7 03:20:29.404: INFO: sonobuoy-systemd-logs-daemon-set-ca72222986ab487c-gktsb from sonobuoy started at 2023-03-07 02:25:01 +0000 UTC (2 container statuses recorded) +Mar 7 03:20:29.404: INFO: Container sonobuoy-worker ready: true, restart count 0 +Mar 7 03:20:29.404: INFO: Container systemd-logs ready: true, restart count 0 +Mar 7 03:20:29.404: INFO: +Logging pods the apiserver thinks is on node node-2 before test +Mar 7 03:20:29.416: INFO: apiserver-proxy-node-2 from kube-system started at 2023-03-07 01:07:13 +0000 UTC (1 container statuses recorded) +Mar 7 03:20:29.416: INFO: Container nginx ready: true, restart count 0 +Mar 7 03:20:29.416: INFO: calico-node-r7qqp from kube-system started at 2023-03-07 02:23:32 +0000 UTC (1 container statuses recorded) +Mar 7 03:20:29.416: INFO: Container calico-node ready: true, restart count 0 +Mar 7 03:20:29.416: INFO: etcd-node-2 from kube-system started at 2023-03-07 01:08:10 +0000 UTC (1 container statuses recorded) +Mar 7 03:20:29.416: INFO: Container etcd ready: true, restart count 2 +Mar 7 03:20:29.416: 
INFO: kube-apiserver-node-2 from kube-system started at 2023-03-07 01:09:12 +0000 UTC (1 container statuses recorded) +Mar 7 03:20:29.416: INFO: Container kube-apiserver ready: true, restart count 0 +Mar 7 03:20:29.416: INFO: kube-controller-manager-node-2 from kube-system started at 2023-03-07 01:09:23 +0000 UTC (1 container statuses recorded) +Mar 7 03:20:29.416: INFO: Container kube-controller-manager ready: true, restart count 1 +Mar 7 03:20:29.416: INFO: kube-proxy-wsc86 from kube-system started at 2023-03-07 02:23:33 +0000 UTC (1 container statuses recorded) +Mar 7 03:20:29.416: INFO: Container kube-proxy ready: true, restart count 0 +Mar 7 03:20:29.416: INFO: kube-scheduler-node-2 from kube-system started at 2023-03-07 01:09:23 +0000 UTC (1 container statuses recorded) +Mar 7 03:20:29.416: INFO: Container kube-scheduler ready: true, restart count 0 +Mar 7 03:20:29.416: INFO: ingress-control-plane-managed-vip-qxwrw from metalk8s-ingress started at 2023-03-07 02:58:05 +0000 UTC (1 container statuses recorded) +Mar 7 03:20:29.416: INFO: Container keepalived ready: true, restart count 0 +Mar 7 03:20:29.416: INFO: ingress-nginx-control-plane-controller-crbv2 from metalk8s-ingress started at 2023-03-07 02:58:05 +0000 UTC (1 container statuses recorded) +Mar 7 03:20:29.416: INFO: Container controller ready: true, restart count 0 +Mar 7 03:20:29.416: INFO: ingress-nginx-controller-bcd78 from metalk8s-ingress started at 2023-03-07 02:58:05 +0000 UTC (1 container statuses recorded) +Mar 7 03:20:29.416: INFO: Container controller ready: true, restart count 0 +Mar 7 03:20:29.416: INFO: fluent-bit-tn4sc from metalk8s-logging started at 2023-03-07 02:58:10 +0000 UTC (1 container statuses recorded) +Mar 7 03:20:29.416: INFO: Container fluent-bit ready: true, restart count 0 +Mar 7 03:20:29.416: INFO: prometheus-operator-prometheus-node-exporter-x9hfs from metalk8s-monitoring started at 2023-03-07 02:58:05 +0000 UTC (1 container statuses recorded) +Mar 7 03:20:29.416: INFO: Container node-exporter ready: true, restart count 0 +Mar 7 03:20:29.416: INFO: sonobuoy from sonobuoy started at 2023-03-07 02:24:57 +0000 UTC (1 container statuses recorded) +Mar 7 03:20:29.416: INFO: Container kube-sonobuoy ready: true, restart count 0 +Mar 7 03:20:29.416: INFO: sonobuoy-e2e-job-441ced38a9a5443b from sonobuoy started at 2023-03-07 02:25:01 +0000 UTC (2 container statuses recorded) +Mar 7 03:20:29.416: INFO: Container e2e ready: true, restart count 0 +Mar 7 03:20:29.416: INFO: Container sonobuoy-worker ready: true, restart count 0 +Mar 7 03:20:29.416: INFO: sonobuoy-systemd-logs-daemon-set-ca72222986ab487c-hbmvq from sonobuoy started at 2023-03-07 02:25:01 +0000 UTC (2 container statuses recorded) +Mar 7 03:20:29.416: INFO: Container sonobuoy-worker ready: true, restart count 0 +Mar 7 03:20:29.416: INFO: Container systemd-logs ready: true, restart count 0 +[It] validates that NodeSelector is respected if matching [Conformance] + test/e2e/scheduling/predicates.go:461 +STEP: Trying to launch a pod without a label to get a node which can launch it. 03/07/23 03:20:29.416 +Mar 7 03:20:29.423: INFO: Waiting up to 1m0s for pod "without-label" in namespace "sched-pred-6165" to be "running" +Mar 7 03:20:29.425: INFO: Pod "without-label": Phase="Pending", Reason="", readiness=false. Elapsed: 1.944298ms +Mar 7 03:20:31.428: INFO: Pod "without-label": Phase="Running", Reason="", readiness=true. 
Elapsed: 2.005359128s +Mar 7 03:20:31.428: INFO: Pod "without-label" satisfied condition "running" +STEP: Explicitly delete pod here to free the resource it takes. 03/07/23 03:20:31.43 +STEP: Trying to apply a random label on the found node. 03/07/23 03:20:31.449 +STEP: verifying the node has the label kubernetes.io/e2e-41989072-a3c8-4f15-86c6-16c696543ce3 42 03/07/23 03:20:31.463 +STEP: Trying to relaunch the pod, now with labels. 03/07/23 03:20:31.467 +Mar 7 03:20:31.474: INFO: Waiting up to 5m0s for pod "with-labels" in namespace "sched-pred-6165" to be "not pending" +Mar 7 03:20:31.478: INFO: Pod "with-labels": Phase="Pending", Reason="", readiness=false. Elapsed: 4.505038ms +Mar 7 03:20:33.482: INFO: Pod "with-labels": Phase="Running", Reason="", readiness=true. Elapsed: 2.008244221s +Mar 7 03:20:33.482: INFO: Pod "with-labels" satisfied condition "not pending" +STEP: removing the label kubernetes.io/e2e-41989072-a3c8-4f15-86c6-16c696543ce3 off the node node-2 03/07/23 03:20:33.484 +STEP: verifying the node doesn't have the label kubernetes.io/e2e-41989072-a3c8-4f15-86c6-16c696543ce3 03/07/23 03:20:33.494 +[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] + test/e2e/framework/framework.go:187 +Mar 7 03:20:33.497: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "sched-pred-6165" for this suite. 03/07/23 03:20:33.501 +[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] + test/e2e/scheduling/predicates.go:83 +{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance]","completed":112,"skipped":1911,"failed":0} +------------------------------ +• [4.167 seconds] +[sig-scheduling] SchedulerPredicates [Serial] +test/e2e/scheduling/framework.go:40 + validates that NodeSelector is respected if matching [Conformance] + test/e2e/scheduling/predicates.go:461 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] + test/e2e/framework/framework.go:186 + STEP: Creating a kubernetes client 03/07/23 03:20:29.341 + Mar 7 03:20:29.341: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 + STEP: Building a namespace api object, basename sched-pred 03/07/23 03:20:29.342 + STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 03:20:29.359 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 03:20:29.362 + [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] + test/e2e/scheduling/predicates.go:92 + Mar 7 03:20:29.364: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready + Mar 7 03:20:29.376: INFO: Waiting for terminating namespaces to be deleted... 
+ Mar 7 03:20:29.379: INFO: + Logging pods the apiserver thinks is on node bootstrap before test + Mar 7 03:20:29.392: INFO: apiserver-proxy-bootstrap from kube-system started at 2023-03-07 00:42:52 +0000 UTC (1 container statuses recorded) + Mar 7 03:20:29.392: INFO: Container nginx ready: true, restart count 0 + Mar 7 03:20:29.392: INFO: backup-747d8c577b-wdcvl from kube-system started at 2023-03-07 00:44:32 +0000 UTC (1 container statuses recorded) + Mar 7 03:20:29.392: INFO: Container backup ready: true, restart count 0 + Mar 7 03:20:29.392: INFO: backup-replication-wkdpp-lt4dt from kube-system started at 2023-03-07 00:47:50 +0000 UTC (1 container statuses recorded) + Mar 7 03:20:29.392: INFO: Container backup-replication ready: false, restart count 0 + Mar 7 03:20:29.392: INFO: calico-kube-controllers-59685599d8-pvn74 from kube-system started at 2023-03-07 00:44:32 +0000 UTC (1 container statuses recorded) + Mar 7 03:20:29.392: INFO: Container calico-kube-controllers ready: true, restart count 0 + Mar 7 03:20:29.392: INFO: calico-node-mlncm from kube-system started at 2023-03-07 02:23:53 +0000 UTC (1 container statuses recorded) + Mar 7 03:20:29.392: INFO: Container calico-node ready: true, restart count 0 + Mar 7 03:20:29.392: INFO: coredns-5d7b997fcf-2j4jw from kube-system started at 2023-03-07 02:57:39 +0000 UTC (1 container statuses recorded) + Mar 7 03:20:29.392: INFO: Container coredns ready: true, restart count 0 + Mar 7 03:20:29.392: INFO: etcd-bootstrap from kube-system started at 2023-03-07 00:43:13 +0000 UTC (1 container statuses recorded) + Mar 7 03:20:29.392: INFO: Container etcd ready: true, restart count 0 + Mar 7 03:20:29.392: INFO: kube-apiserver-bootstrap from kube-system started at 2023-03-07 00:43:25 +0000 UTC (1 container statuses recorded) + Mar 7 03:20:29.392: INFO: Container kube-apiserver ready: true, restart count 0 + Mar 7 03:20:29.392: INFO: kube-controller-manager-bootstrap from kube-system started at 2023-03-07 00:43:33 +0000 UTC (1 container statuses recorded) + Mar 7 03:20:29.392: INFO: Container kube-controller-manager ready: true, restart count 4 + Mar 7 03:20:29.392: INFO: kube-proxy-nlf5t from kube-system started at 2023-03-07 02:23:30 +0000 UTC (1 container statuses recorded) + Mar 7 03:20:29.392: INFO: Container kube-proxy ready: true, restart count 0 + Mar 7 03:20:29.392: INFO: kube-scheduler-bootstrap from kube-system started at 2023-03-07 00:43:34 +0000 UTC (1 container statuses recorded) + Mar 7 03:20:29.392: INFO: Container kube-scheduler ready: true, restart count 3 + Mar 7 03:20:29.392: INFO: metalk8s-operator-controller-manager-7d4764b947-crj2f from kube-system started at 2023-03-07 00:44:32 +0000 UTC (1 container statuses recorded) + Mar 7 03:20:29.392: INFO: Container manager ready: true, restart count 5 + Mar 7 03:20:29.392: INFO: repositories-bootstrap from kube-system started at 2023-03-07 02:07:15 +0000 UTC (1 container statuses recorded) + Mar 7 03:20:29.392: INFO: Container repositories ready: true, restart count 1 + Mar 7 03:20:29.392: INFO: salt-master-bootstrap from kube-system started at 2023-03-07 00:42:29 +0000 UTC (2 container statuses recorded) + Mar 7 03:20:29.392: INFO: Container salt-api ready: true, restart count 0 + Mar 7 03:20:29.392: INFO: Container salt-master ready: true, restart count 0 + Mar 7 03:20:29.392: INFO: storage-operator-78f5dcc84f-jwnzl from kube-system started at 2023-03-07 00:45:28 +0000 UTC (1 container statuses recorded) + Mar 7 03:20:29.392: INFO: Container manager ready: true, restart count 4 + 
Mar 7 03:20:29.392: INFO: dex-57f9db7c4-hbrhr from metalk8s-auth started at 2023-03-07 02:13:23 +0000 UTC (1 container statuses recorded) + Mar 7 03:20:29.392: INFO: Container dex ready: true, restart count 0 + Mar 7 03:20:29.392: INFO: dex-57f9db7c4-z6gh6 from metalk8s-auth started at 2023-03-07 02:13:23 +0000 UTC (1 container statuses recorded) + Mar 7 03:20:29.392: INFO: Container dex ready: true, restart count 0 + Mar 7 03:20:29.392: INFO: ingress-control-plane-managed-vip-n2qb6 from metalk8s-ingress started at 2023-03-07 02:05:37 +0000 UTC (1 container statuses recorded) + Mar 7 03:20:29.392: INFO: Container keepalived ready: true, restart count 0 + Mar 7 03:20:29.392: INFO: ingress-nginx-control-plane-controller-j9hsf from metalk8s-ingress started at 2023-03-07 00:45:27 +0000 UTC (1 container statuses recorded) + Mar 7 03:20:29.392: INFO: Container controller ready: true, restart count 0 + Mar 7 03:20:29.392: INFO: ingress-nginx-controller-vjnvw from metalk8s-ingress started at 2023-03-07 02:10:07 +0000 UTC (1 container statuses recorded) + Mar 7 03:20:29.392: INFO: Container controller ready: true, restart count 0 + Mar 7 03:20:29.392: INFO: ingress-nginx-defaultbackend-75c64bd745-65gwj from metalk8s-ingress started at 2023-03-07 00:45:24 +0000 UTC (1 container statuses recorded) + Mar 7 03:20:29.392: INFO: Container ingress-nginx-default-backend ready: true, restart count 0 + Mar 7 03:20:29.392: INFO: fluent-bit-dzhms from metalk8s-logging started at 2023-03-07 00:45:38 +0000 UTC (1 container statuses recorded) + Mar 7 03:20:29.392: INFO: Container fluent-bit ready: true, restart count 0 + Mar 7 03:20:29.392: INFO: metalk8s-alert-logger-84f87c86d-hflm5 from metalk8s-monitoring started at 2023-03-07 00:45:09 +0000 UTC (1 container statuses recorded) + Mar 7 03:20:29.392: INFO: Container metalk8s-alert-logger ready: true, restart count 0 + Mar 7 03:20:29.392: INFO: prometheus-adapter-6696954b59-qrxtn from metalk8s-monitoring started at 2023-03-07 00:45:34 +0000 UTC (1 container statuses recorded) + Mar 7 03:20:29.392: INFO: Container prometheus-adapter ready: true, restart count 0 + Mar 7 03:20:29.392: INFO: prometheus-operator-kube-state-metrics-f7d5dc499-t4szw from metalk8s-monitoring started at 2023-03-07 00:45:19 +0000 UTC (1 container statuses recorded) + Mar 7 03:20:29.392: INFO: Container kube-state-metrics ready: true, restart count 0 + Mar 7 03:20:29.392: INFO: prometheus-operator-operator-864bc5b5d-8m6lq from metalk8s-monitoring started at 2023-03-07 00:45:19 +0000 UTC (1 container statuses recorded) + Mar 7 03:20:29.392: INFO: Container prometheus-operator ready: true, restart count 0 + Mar 7 03:20:29.392: INFO: prometheus-operator-prometheus-node-exporter-sl4bq from metalk8s-monitoring started at 2023-03-07 00:45:18 +0000 UTC (1 container statuses recorded) + Mar 7 03:20:29.392: INFO: Container node-exporter ready: true, restart count 0 + Mar 7 03:20:29.392: INFO: thanos-query-6b9dc579dd-ctlrl from metalk8s-monitoring started at 2023-03-07 00:45:22 +0000 UTC (1 container statuses recorded) + Mar 7 03:20:29.392: INFO: Container thanos-query ready: true, restart count 0 + Mar 7 03:20:29.392: INFO: metalk8s-ui-766c8b96cd-8cxcs from metalk8s-ui started at 2023-03-07 00:45:30 +0000 UTC (1 container statuses recorded) + Mar 7 03:20:29.392: INFO: Container metalk8s-ui ready: true, restart count 0 + Mar 7 03:20:29.392: INFO: metalk8s-ui-766c8b96cd-tsx5v from metalk8s-ui started at 2023-03-07 00:45:30 +0000 UTC (1 container statuses recorded) + Mar 7 03:20:29.392: INFO: Container 
metalk8s-ui ready: true, restart count 0 + Mar 7 03:20:29.392: INFO: sonobuoy-systemd-logs-daemon-set-ca72222986ab487c-t8mkz from sonobuoy started at 2023-03-07 02:25:01 +0000 UTC (2 container statuses recorded) + Mar 7 03:20:29.392: INFO: Container sonobuoy-worker ready: true, restart count 0 + Mar 7 03:20:29.392: INFO: Container systemd-logs ready: true, restart count 0 + Mar 7 03:20:29.392: INFO: + Logging pods the apiserver thinks is on node node-1 before test + Mar 7 03:20:29.403: INFO: apiserver-proxy-node-1 from kube-system started at 2023-03-07 00:58:52 +0000 UTC (1 container statuses recorded) + Mar 7 03:20:29.403: INFO: Container nginx ready: true, restart count 0 + Mar 7 03:20:29.403: INFO: calico-node-fvlp2 from kube-system started at 2023-03-07 02:23:42 +0000 UTC (1 container statuses recorded) + Mar 7 03:20:29.403: INFO: Container calico-node ready: true, restart count 0 + Mar 7 03:20:29.403: INFO: coredns-5d7b997fcf-z25jb from kube-system started at 2023-03-07 02:09:04 +0000 UTC (1 container statuses recorded) + Mar 7 03:20:29.403: INFO: Container coredns ready: true, restart count 0 + Mar 7 03:20:29.403: INFO: etcd-node-1 from kube-system started at 2023-03-07 00:59:16 +0000 UTC (1 container statuses recorded) + Mar 7 03:20:29.403: INFO: Container etcd ready: true, restart count 1 + Mar 7 03:20:29.403: INFO: kube-apiserver-node-1 from kube-system started at 2023-03-07 01:00:05 +0000 UTC (1 container statuses recorded) + Mar 7 03:20:29.404: INFO: Container kube-apiserver ready: true, restart count 0 + Mar 7 03:20:29.404: INFO: kube-controller-manager-node-1 from kube-system started at 2023-03-07 01:00:17 +0000 UTC (1 container statuses recorded) + Mar 7 03:20:29.404: INFO: Container kube-controller-manager ready: true, restart count 2 + Mar 7 03:20:29.404: INFO: kube-proxy-vpgsc from kube-system started at 2023-03-07 02:23:27 +0000 UTC (1 container statuses recorded) + Mar 7 03:20:29.404: INFO: Container kube-proxy ready: true, restart count 0 + Mar 7 03:20:29.404: INFO: kube-scheduler-node-1 from kube-system started at 2023-03-07 01:00:18 +0000 UTC (1 container statuses recorded) + Mar 7 03:20:29.404: INFO: Container kube-scheduler ready: true, restart count 0 + Mar 7 03:20:29.404: INFO: ingress-control-plane-managed-vip-w2cb9 from metalk8s-ingress started at 2023-03-07 02:05:37 +0000 UTC (1 container statuses recorded) + Mar 7 03:20:29.404: INFO: Container keepalived ready: true, restart count 0 + Mar 7 03:20:29.404: INFO: ingress-nginx-control-plane-controller-ck4wk from metalk8s-ingress started at 2023-03-07 00:59:58 +0000 UTC (1 container statuses recorded) + Mar 7 03:20:29.404: INFO: Container controller ready: true, restart count 0 + Mar 7 03:20:29.404: INFO: ingress-nginx-controller-9b2bj from metalk8s-ingress started at 2023-03-07 02:10:40 +0000 UTC (1 container statuses recorded) + Mar 7 03:20:29.404: INFO: Container controller ready: true, restart count 0 + Mar 7 03:20:29.404: INFO: fluent-bit-4nw7s from metalk8s-logging started at 2023-03-07 00:59:58 +0000 UTC (1 container statuses recorded) + Mar 7 03:20:29.404: INFO: Container fluent-bit ready: true, restart count 0 + Mar 7 03:20:29.404: INFO: loki-0 from metalk8s-logging started at 2023-03-07 01:11:45 +0000 UTC (1 container statuses recorded) + Mar 7 03:20:29.404: INFO: Container single-binary ready: true, restart count 0 + Mar 7 03:20:29.404: INFO: alertmanager-prometheus-operator-alertmanager-0 from metalk8s-monitoring started at 2023-03-07 01:11:00 +0000 UTC (2 container statuses recorded) + Mar 7 
03:20:29.404: INFO: Container alertmanager ready: true, restart count 0 + Mar 7 03:20:29.404: INFO: Container config-reloader ready: true, restart count 0 + Mar 7 03:20:29.404: INFO: prometheus-operator-grafana-74d86d5965-nj6pq from metalk8s-monitoring started at 2023-03-07 02:57:39 +0000 UTC (3 container statuses recorded) + Mar 7 03:20:29.404: INFO: Container grafana ready: true, restart count 0 + Mar 7 03:20:29.404: INFO: Container grafana-sc-dashboard ready: true, restart count 0 + Mar 7 03:20:29.404: INFO: Container grafana-sc-datasources ready: true, restart count 0 + Mar 7 03:20:29.404: INFO: prometheus-operator-prometheus-node-exporter-4plkr from metalk8s-monitoring started at 2023-03-07 00:58:56 +0000 UTC (1 container statuses recorded) + Mar 7 03:20:29.404: INFO: Container node-exporter ready: true, restart count 0 + Mar 7 03:20:29.404: INFO: prometheus-prometheus-operator-prometheus-0 from metalk8s-monitoring started at 2023-03-07 01:11:10 +0000 UTC (3 container statuses recorded) + Mar 7 03:20:29.404: INFO: Container config-reloader ready: true, restart count 0 + Mar 7 03:20:29.404: INFO: Container prometheus ready: true, restart count 0 + Mar 7 03:20:29.404: INFO: Container thanos-sidecar ready: true, restart count 0 + Mar 7 03:20:29.404: INFO: sonobuoy-systemd-logs-daemon-set-ca72222986ab487c-gktsb from sonobuoy started at 2023-03-07 02:25:01 +0000 UTC (2 container statuses recorded) + Mar 7 03:20:29.404: INFO: Container sonobuoy-worker ready: true, restart count 0 + Mar 7 03:20:29.404: INFO: Container systemd-logs ready: true, restart count 0 + Mar 7 03:20:29.404: INFO: + Logging pods the apiserver thinks is on node node-2 before test + Mar 7 03:20:29.416: INFO: apiserver-proxy-node-2 from kube-system started at 2023-03-07 01:07:13 +0000 UTC (1 container statuses recorded) + Mar 7 03:20:29.416: INFO: Container nginx ready: true, restart count 0 + Mar 7 03:20:29.416: INFO: calico-node-r7qqp from kube-system started at 2023-03-07 02:23:32 +0000 UTC (1 container statuses recorded) + Mar 7 03:20:29.416: INFO: Container calico-node ready: true, restart count 0 + Mar 7 03:20:29.416: INFO: etcd-node-2 from kube-system started at 2023-03-07 01:08:10 +0000 UTC (1 container statuses recorded) + Mar 7 03:20:29.416: INFO: Container etcd ready: true, restart count 2 + Mar 7 03:20:29.416: INFO: kube-apiserver-node-2 from kube-system started at 2023-03-07 01:09:12 +0000 UTC (1 container statuses recorded) + Mar 7 03:20:29.416: INFO: Container kube-apiserver ready: true, restart count 0 + Mar 7 03:20:29.416: INFO: kube-controller-manager-node-2 from kube-system started at 2023-03-07 01:09:23 +0000 UTC (1 container statuses recorded) + Mar 7 03:20:29.416: INFO: Container kube-controller-manager ready: true, restart count 1 + Mar 7 03:20:29.416: INFO: kube-proxy-wsc86 from kube-system started at 2023-03-07 02:23:33 +0000 UTC (1 container statuses recorded) + Mar 7 03:20:29.416: INFO: Container kube-proxy ready: true, restart count 0 + Mar 7 03:20:29.416: INFO: kube-scheduler-node-2 from kube-system started at 2023-03-07 01:09:23 +0000 UTC (1 container statuses recorded) + Mar 7 03:20:29.416: INFO: Container kube-scheduler ready: true, restart count 0 + Mar 7 03:20:29.416: INFO: ingress-control-plane-managed-vip-qxwrw from metalk8s-ingress started at 2023-03-07 02:58:05 +0000 UTC (1 container statuses recorded) + Mar 7 03:20:29.416: INFO: Container keepalived ready: true, restart count 0 + Mar 7 03:20:29.416: INFO: ingress-nginx-control-plane-controller-crbv2 from metalk8s-ingress started at 
2023-03-07 02:58:05 +0000 UTC (1 container statuses recorded) + Mar 7 03:20:29.416: INFO: Container controller ready: true, restart count 0 + Mar 7 03:20:29.416: INFO: ingress-nginx-controller-bcd78 from metalk8s-ingress started at 2023-03-07 02:58:05 +0000 UTC (1 container statuses recorded) + Mar 7 03:20:29.416: INFO: Container controller ready: true, restart count 0 + Mar 7 03:20:29.416: INFO: fluent-bit-tn4sc from metalk8s-logging started at 2023-03-07 02:58:10 +0000 UTC (1 container statuses recorded) + Mar 7 03:20:29.416: INFO: Container fluent-bit ready: true, restart count 0 + Mar 7 03:20:29.416: INFO: prometheus-operator-prometheus-node-exporter-x9hfs from metalk8s-monitoring started at 2023-03-07 02:58:05 +0000 UTC (1 container statuses recorded) + Mar 7 03:20:29.416: INFO: Container node-exporter ready: true, restart count 0 + Mar 7 03:20:29.416: INFO: sonobuoy from sonobuoy started at 2023-03-07 02:24:57 +0000 UTC (1 container statuses recorded) + Mar 7 03:20:29.416: INFO: Container kube-sonobuoy ready: true, restart count 0 + Mar 7 03:20:29.416: INFO: sonobuoy-e2e-job-441ced38a9a5443b from sonobuoy started at 2023-03-07 02:25:01 +0000 UTC (2 container statuses recorded) + Mar 7 03:20:29.416: INFO: Container e2e ready: true, restart count 0 + Mar 7 03:20:29.416: INFO: Container sonobuoy-worker ready: true, restart count 0 + Mar 7 03:20:29.416: INFO: sonobuoy-systemd-logs-daemon-set-ca72222986ab487c-hbmvq from sonobuoy started at 2023-03-07 02:25:01 +0000 UTC (2 container statuses recorded) + Mar 7 03:20:29.416: INFO: Container sonobuoy-worker ready: true, restart count 0 + Mar 7 03:20:29.416: INFO: Container systemd-logs ready: true, restart count 0 + [It] validates that NodeSelector is respected if matching [Conformance] + test/e2e/scheduling/predicates.go:461 + STEP: Trying to launch a pod without a label to get a node which can launch it. 03/07/23 03:20:29.416 + Mar 7 03:20:29.423: INFO: Waiting up to 1m0s for pod "without-label" in namespace "sched-pred-6165" to be "running" + Mar 7 03:20:29.425: INFO: Pod "without-label": Phase="Pending", Reason="", readiness=false. Elapsed: 1.944298ms + Mar 7 03:20:31.428: INFO: Pod "without-label": Phase="Running", Reason="", readiness=true. Elapsed: 2.005359128s + Mar 7 03:20:31.428: INFO: Pod "without-label" satisfied condition "running" + STEP: Explicitly delete pod here to free the resource it takes. 03/07/23 03:20:31.43 + STEP: Trying to apply a random label on the found node. 03/07/23 03:20:31.449 + STEP: verifying the node has the label kubernetes.io/e2e-41989072-a3c8-4f15-86c6-16c696543ce3 42 03/07/23 03:20:31.463 + STEP: Trying to relaunch the pod, now with labels. 03/07/23 03:20:31.467 + Mar 7 03:20:31.474: INFO: Waiting up to 5m0s for pod "with-labels" in namespace "sched-pred-6165" to be "not pending" + Mar 7 03:20:31.478: INFO: Pod "with-labels": Phase="Pending", Reason="", readiness=false. Elapsed: 4.505038ms + Mar 7 03:20:33.482: INFO: Pod "with-labels": Phase="Running", Reason="", readiness=true. 
Elapsed: 2.008244221s + Mar 7 03:20:33.482: INFO: Pod "with-labels" satisfied condition "not pending" + STEP: removing the label kubernetes.io/e2e-41989072-a3c8-4f15-86c6-16c696543ce3 off the node node-2 03/07/23 03:20:33.484 + STEP: verifying the node doesn't have the label kubernetes.io/e2e-41989072-a3c8-4f15-86c6-16c696543ce3 03/07/23 03:20:33.494 + [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] + test/e2e/framework/framework.go:187 + Mar 7 03:20:33.497: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + STEP: Destroying namespace "sched-pred-6165" for this suite. 03/07/23 03:20:33.501 + [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] + test/e2e/scheduling/predicates.go:83 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-instrumentation] Events API + should delete a collection of events [Conformance] + test/e2e/instrumentation/events.go:207 +[BeforeEach] [sig-instrumentation] Events API + test/e2e/framework/framework.go:186 +STEP: Creating a kubernetes client 03/07/23 03:20:33.509 +Mar 7 03:20:33.509: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 +STEP: Building a namespace api object, basename events 03/07/23 03:20:33.51 +STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 03:20:33.528 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 03:20:33.53 +[BeforeEach] [sig-instrumentation] Events API + test/e2e/instrumentation/events.go:84 +[It] should delete a collection of events [Conformance] + test/e2e/instrumentation/events.go:207 +STEP: Create set of events 03/07/23 03:20:33.532 +STEP: get a list of Events with a label in the current namespace 03/07/23 03:20:33.546 +STEP: delete a list of events 03/07/23 03:20:33.549 +Mar 7 03:20:33.549: INFO: requesting DeleteCollection of events +STEP: check that the list of events matches the requested quantity 03/07/23 03:20:33.562 +[AfterEach] [sig-instrumentation] Events API + test/e2e/framework/framework.go:187 +Mar 7 03:20:33.564: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "events-2019" for this suite. 
03/07/23 03:20:33.567 +{"msg":"PASSED [sig-instrumentation] Events API should delete a collection of events [Conformance]","completed":113,"skipped":1958,"failed":0} +------------------------------ +• [0.062 seconds] +[sig-instrumentation] Events API +test/e2e/instrumentation/common/framework.go:23 + should delete a collection of events [Conformance] + test/e2e/instrumentation/events.go:207 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-instrumentation] Events API + test/e2e/framework/framework.go:186 + STEP: Creating a kubernetes client 03/07/23 03:20:33.509 + Mar 7 03:20:33.509: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 + STEP: Building a namespace api object, basename events 03/07/23 03:20:33.51 + STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 03:20:33.528 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 03:20:33.53 + [BeforeEach] [sig-instrumentation] Events API + test/e2e/instrumentation/events.go:84 + [It] should delete a collection of events [Conformance] + test/e2e/instrumentation/events.go:207 + STEP: Create set of events 03/07/23 03:20:33.532 + STEP: get a list of Events with a label in the current namespace 03/07/23 03:20:33.546 + STEP: delete a list of events 03/07/23 03:20:33.549 + Mar 7 03:20:33.549: INFO: requesting DeleteCollection of events + STEP: check that the list of events matches the requested quantity 03/07/23 03:20:33.562 + [AfterEach] [sig-instrumentation] Events API + test/e2e/framework/framework.go:187 + Mar 7 03:20:33.564: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + STEP: Destroying namespace "events-2019" for this suite. 03/07/23 03:20:33.567 + << End Captured GinkgoWriter Output +------------------------------ +SSSSS +------------------------------ +[sig-network] DNS + should provide /etc/hosts entries for the cluster [Conformance] + test/e2e/network/dns.go:117 +[BeforeEach] [sig-network] DNS + test/e2e/framework/framework.go:186 +STEP: Creating a kubernetes client 03/07/23 03:20:33.571 +Mar 7 03:20:33.572: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 +STEP: Building a namespace api object, basename dns 03/07/23 03:20:33.572 +STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 03:20:33.585 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 03:20:33.588 +[It] should provide /etc/hosts entries for the cluster [Conformance] + test/e2e/network/dns.go:117 +STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-636.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-1.dns-test-service.dns-636.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/wheezy_hosts@dns-querier-1;sleep 1; done + 03/07/23 03:20:33.589 +STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-636.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-1.dns-test-service.dns-636.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/jessie_hosts@dns-querier-1;sleep 1; done + 03/07/23 03:20:33.589 +STEP: creating a pod to probe /etc/hosts 03/07/23 03:20:33.59 +STEP: submitting the pod to kubernetes 03/07/23 03:20:33.59 +Mar 7 03:20:33.597: INFO: Waiting up to 15m0s for pod "dns-test-c120e7e2-cc76-4fdf-89c1-33ea934d10f6" in namespace "dns-636" to be "running" +Mar 7 03:20:33.599: INFO: Pod 
"dns-test-c120e7e2-cc76-4fdf-89c1-33ea934d10f6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.567939ms +Mar 7 03:20:35.602: INFO: Pod "dns-test-c120e7e2-cc76-4fdf-89c1-33ea934d10f6": Phase="Running", Reason="", readiness=true. Elapsed: 2.005564109s +Mar 7 03:20:35.602: INFO: Pod "dns-test-c120e7e2-cc76-4fdf-89c1-33ea934d10f6" satisfied condition "running" +STEP: retrieving the pod 03/07/23 03:20:35.602 +STEP: looking for the results for each expected name from probers 03/07/23 03:20:35.605 +Mar 7 03:20:35.615: INFO: DNS probes using dns-636/dns-test-c120e7e2-cc76-4fdf-89c1-33ea934d10f6 succeeded + +STEP: deleting the pod 03/07/23 03:20:35.615 +[AfterEach] [sig-network] DNS + test/e2e/framework/framework.go:187 +Mar 7 03:20:35.630: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "dns-636" for this suite. 03/07/23 03:20:35.634 +{"msg":"PASSED [sig-network] DNS should provide /etc/hosts entries for the cluster [Conformance]","completed":114,"skipped":1963,"failed":0} +------------------------------ +• [2.069 seconds] +[sig-network] DNS +test/e2e/network/common/framework.go:23 + should provide /etc/hosts entries for the cluster [Conformance] + test/e2e/network/dns.go:117 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-network] DNS + test/e2e/framework/framework.go:186 + STEP: Creating a kubernetes client 03/07/23 03:20:33.571 + Mar 7 03:20:33.572: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 + STEP: Building a namespace api object, basename dns 03/07/23 03:20:33.572 + STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 03:20:33.585 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 03:20:33.588 + [It] should provide /etc/hosts entries for the cluster [Conformance] + test/e2e/network/dns.go:117 + STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-636.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-1.dns-test-service.dns-636.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/wheezy_hosts@dns-querier-1;sleep 1; done + 03/07/23 03:20:33.589 + STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-636.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-1.dns-test-service.dns-636.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/jessie_hosts@dns-querier-1;sleep 1; done + 03/07/23 03:20:33.589 + STEP: creating a pod to probe /etc/hosts 03/07/23 03:20:33.59 + STEP: submitting the pod to kubernetes 03/07/23 03:20:33.59 + Mar 7 03:20:33.597: INFO: Waiting up to 15m0s for pod "dns-test-c120e7e2-cc76-4fdf-89c1-33ea934d10f6" in namespace "dns-636" to be "running" + Mar 7 03:20:33.599: INFO: Pod "dns-test-c120e7e2-cc76-4fdf-89c1-33ea934d10f6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.567939ms + Mar 7 03:20:35.602: INFO: Pod "dns-test-c120e7e2-cc76-4fdf-89c1-33ea934d10f6": Phase="Running", Reason="", readiness=true. 
Elapsed: 2.005564109s + Mar 7 03:20:35.602: INFO: Pod "dns-test-c120e7e2-cc76-4fdf-89c1-33ea934d10f6" satisfied condition "running" + STEP: retrieving the pod 03/07/23 03:20:35.602 + STEP: looking for the results for each expected name from probers 03/07/23 03:20:35.605 + Mar 7 03:20:35.615: INFO: DNS probes using dns-636/dns-test-c120e7e2-cc76-4fdf-89c1-33ea934d10f6 succeeded + + STEP: deleting the pod 03/07/23 03:20:35.615 + [AfterEach] [sig-network] DNS + test/e2e/framework/framework.go:187 + Mar 7 03:20:35.630: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + STEP: Destroying namespace "dns-636" for this suite. 03/07/23 03:20:35.634 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSS +------------------------------ +[sig-node] NoExecuteTaintManager Single Pod [Serial] + removing taint cancels eviction [Disruptive] [Conformance] + test/e2e/node/taints.go:289 +[BeforeEach] [sig-node] NoExecuteTaintManager Single Pod [Serial] + test/e2e/framework/framework.go:186 +STEP: Creating a kubernetes client 03/07/23 03:20:35.641 +Mar 7 03:20:35.641: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 +STEP: Building a namespace api object, basename taint-single-pod 03/07/23 03:20:35.642 +STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 03:20:35.659 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 03:20:35.661 +[BeforeEach] [sig-node] NoExecuteTaintManager Single Pod [Serial] + test/e2e/node/taints.go:166 +Mar 7 03:20:35.663: INFO: Waiting up to 1m0s for all nodes to be ready +Mar 7 03:21:35.693: INFO: Waiting for terminating namespaces to be deleted... +[It] removing taint cancels eviction [Disruptive] [Conformance] + test/e2e/node/taints.go:289 +Mar 7 03:21:35.696: INFO: Starting informer... +STEP: Starting pod... 03/07/23 03:21:35.696 +Mar 7 03:21:35.930: INFO: Pod is running on node-2. Tainting Node +STEP: Trying to apply a taint on the Node 03/07/23 03:21:35.93 +STEP: verifying the node has the taint kubernetes.io/e2e-evict-taint-key=evictTaintVal:NoExecute 03/07/23 03:21:35.94 +STEP: Waiting short time to make sure Pod is queued for deletion 03/07/23 03:21:35.948 +Mar 7 03:21:35.948: INFO: Pod wasn't evicted. Proceeding +Mar 7 03:21:35.948: INFO: Removing taint from Node +STEP: verifying the node doesn't have the taint kubernetes.io/e2e-evict-taint-key=evictTaintVal:NoExecute 03/07/23 03:21:35.974 +STEP: Waiting some time to make sure that toleration time passed. 03/07/23 03:21:35.991 +Mar 7 03:22:50.992: INFO: Pod wasn't evicted. Test successful +[AfterEach] [sig-node] NoExecuteTaintManager Single Pod [Serial] + test/e2e/framework/framework.go:187 +Mar 7 03:22:50.992: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "taint-single-pod-1402" for this suite. 
03/07/23 03:22:50.996 +{"msg":"PASSED [sig-node] NoExecuteTaintManager Single Pod [Serial] removing taint cancels eviction [Disruptive] [Conformance]","completed":115,"skipped":1972,"failed":0} +------------------------------ +• [SLOW TEST] [135.385 seconds] +[sig-node] NoExecuteTaintManager Single Pod [Serial] +test/e2e/node/framework.go:23 + removing taint cancels eviction [Disruptive] [Conformance] + test/e2e/node/taints.go:289 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-node] NoExecuteTaintManager Single Pod [Serial] + test/e2e/framework/framework.go:186 + STEP: Creating a kubernetes client 03/07/23 03:20:35.641 + Mar 7 03:20:35.641: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 + STEP: Building a namespace api object, basename taint-single-pod 03/07/23 03:20:35.642 + STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 03:20:35.659 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 03:20:35.661 + [BeforeEach] [sig-node] NoExecuteTaintManager Single Pod [Serial] + test/e2e/node/taints.go:166 + Mar 7 03:20:35.663: INFO: Waiting up to 1m0s for all nodes to be ready + Mar 7 03:21:35.693: INFO: Waiting for terminating namespaces to be deleted... + [It] removing taint cancels eviction [Disruptive] [Conformance] + test/e2e/node/taints.go:289 + Mar 7 03:21:35.696: INFO: Starting informer... + STEP: Starting pod... 03/07/23 03:21:35.696 + Mar 7 03:21:35.930: INFO: Pod is running on node-2. Tainting Node + STEP: Trying to apply a taint on the Node 03/07/23 03:21:35.93 + STEP: verifying the node has the taint kubernetes.io/e2e-evict-taint-key=evictTaintVal:NoExecute 03/07/23 03:21:35.94 + STEP: Waiting short time to make sure Pod is queued for deletion 03/07/23 03:21:35.948 + Mar 7 03:21:35.948: INFO: Pod wasn't evicted. Proceeding + Mar 7 03:21:35.948: INFO: Removing taint from Node + STEP: verifying the node doesn't have the taint kubernetes.io/e2e-evict-taint-key=evictTaintVal:NoExecute 03/07/23 03:21:35.974 + STEP: Waiting some time to make sure that toleration time passed. 03/07/23 03:21:35.991 + Mar 7 03:22:50.992: INFO: Pod wasn't evicted. Test successful + [AfterEach] [sig-node] NoExecuteTaintManager Single Pod [Serial] + test/e2e/framework/framework.go:187 + Mar 7 03:22:50.992: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + STEP: Destroying namespace "taint-single-pod-1402" for this suite. 
03/07/23 03:22:50.996 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSS +------------------------------ +[sig-storage] Projected downwardAPI + should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/common/storage/projected_downwardapi.go:83 +[BeforeEach] [sig-storage] Projected downwardAPI + test/e2e/framework/framework.go:186 +STEP: Creating a kubernetes client 03/07/23 03:22:51.027 +Mar 7 03:22:51.027: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 +STEP: Building a namespace api object, basename projected 03/07/23 03:22:51.028 +STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 03:22:51.042 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 03:22:51.046 +[BeforeEach] [sig-storage] Projected downwardAPI + test/e2e/common/storage/projected_downwardapi.go:43 +[It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/common/storage/projected_downwardapi.go:83 +STEP: Creating a pod to test downward API volume plugin 03/07/23 03:22:51.048 +Mar 7 03:22:51.054: INFO: Waiting up to 5m0s for pod "downwardapi-volume-70160ffa-198d-4f32-baee-37afe679124d" in namespace "projected-1439" to be "Succeeded or Failed" +Mar 7 03:22:51.056: INFO: Pod "downwardapi-volume-70160ffa-198d-4f32-baee-37afe679124d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.209317ms +Mar 7 03:22:53.060: INFO: Pod "downwardapi-volume-70160ffa-198d-4f32-baee-37afe679124d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006140137s +Mar 7 03:22:55.063: INFO: Pod "downwardapi-volume-70160ffa-198d-4f32-baee-37afe679124d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.009378525s +STEP: Saw pod success 03/07/23 03:22:55.063 +Mar 7 03:22:55.063: INFO: Pod "downwardapi-volume-70160ffa-198d-4f32-baee-37afe679124d" satisfied condition "Succeeded or Failed" +Mar 7 03:22:55.066: INFO: Trying to get logs from node node-2 pod downwardapi-volume-70160ffa-198d-4f32-baee-37afe679124d container client-container: +STEP: delete the pod 03/07/23 03:22:55.078 +Mar 7 03:22:55.087: INFO: Waiting for pod downwardapi-volume-70160ffa-198d-4f32-baee-37afe679124d to disappear +Mar 7 03:22:55.089: INFO: Pod downwardapi-volume-70160ffa-198d-4f32-baee-37afe679124d no longer exists +[AfterEach] [sig-storage] Projected downwardAPI + test/e2e/framework/framework.go:187 +Mar 7 03:22:55.089: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "projected-1439" for this suite. 
03/07/23 03:22:55.092 +{"msg":"PASSED [sig-storage] Projected downwardAPI should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","completed":116,"skipped":1983,"failed":0} +------------------------------ +• [4.069 seconds] +[sig-storage] Projected downwardAPI +test/e2e/common/storage/framework.go:23 + should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/common/storage/projected_downwardapi.go:83 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-storage] Projected downwardAPI + test/e2e/framework/framework.go:186 + STEP: Creating a kubernetes client 03/07/23 03:22:51.027 + Mar 7 03:22:51.027: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 + STEP: Building a namespace api object, basename projected 03/07/23 03:22:51.028 + STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 03:22:51.042 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 03:22:51.046 + [BeforeEach] [sig-storage] Projected downwardAPI + test/e2e/common/storage/projected_downwardapi.go:43 + [It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/common/storage/projected_downwardapi.go:83 + STEP: Creating a pod to test downward API volume plugin 03/07/23 03:22:51.048 + Mar 7 03:22:51.054: INFO: Waiting up to 5m0s for pod "downwardapi-volume-70160ffa-198d-4f32-baee-37afe679124d" in namespace "projected-1439" to be "Succeeded or Failed" + Mar 7 03:22:51.056: INFO: Pod "downwardapi-volume-70160ffa-198d-4f32-baee-37afe679124d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.209317ms + Mar 7 03:22:53.060: INFO: Pod "downwardapi-volume-70160ffa-198d-4f32-baee-37afe679124d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006140137s + Mar 7 03:22:55.063: INFO: Pod "downwardapi-volume-70160ffa-198d-4f32-baee-37afe679124d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.009378525s + STEP: Saw pod success 03/07/23 03:22:55.063 + Mar 7 03:22:55.063: INFO: Pod "downwardapi-volume-70160ffa-198d-4f32-baee-37afe679124d" satisfied condition "Succeeded or Failed" + Mar 7 03:22:55.066: INFO: Trying to get logs from node node-2 pod downwardapi-volume-70160ffa-198d-4f32-baee-37afe679124d container client-container: + STEP: delete the pod 03/07/23 03:22:55.078 + Mar 7 03:22:55.087: INFO: Waiting for pod downwardapi-volume-70160ffa-198d-4f32-baee-37afe679124d to disappear + Mar 7 03:22:55.089: INFO: Pod downwardapi-volume-70160ffa-198d-4f32-baee-37afe679124d no longer exists + [AfterEach] [sig-storage] Projected downwardAPI + test/e2e/framework/framework.go:187 + Mar 7 03:22:55.089: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + STEP: Destroying namespace "projected-1439" for this suite. 
03/07/23 03:22:55.092 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSS +------------------------------ +[sig-network] Services + should be able to change the type from ClusterIP to ExternalName [Conformance] + test/e2e/network/service.go:1481 +[BeforeEach] [sig-network] Services + test/e2e/framework/framework.go:186 +STEP: Creating a kubernetes client 03/07/23 03:22:55.097 +Mar 7 03:22:55.097: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 +STEP: Building a namespace api object, basename services 03/07/23 03:22:55.098 +STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 03:22:55.11 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 03:22:55.112 +[BeforeEach] [sig-network] Services + test/e2e/network/service.go:758 +[It] should be able to change the type from ClusterIP to ExternalName [Conformance] + test/e2e/network/service.go:1481 +STEP: creating a service clusterip-service with the type=ClusterIP in namespace services-65 03/07/23 03:22:55.114 +STEP: Creating active service to test reachability when its FQDN is referred as externalName for another service 03/07/23 03:22:55.126 +STEP: creating service externalsvc in namespace services-65 03/07/23 03:22:55.127 +STEP: creating replication controller externalsvc in namespace services-65 03/07/23 03:22:55.168 +I0307 03:22:55.174447 22 runners.go:193] Created replication controller with name: externalsvc, namespace: services-65, replica count: 2 +I0307 03:22:58.225381 22 runners.go:193] externalsvc Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady +STEP: changing the ClusterIP service to type=ExternalName 03/07/23 03:22:58.227 +Mar 7 03:22:58.259: INFO: Creating new exec pod +Mar 7 03:22:58.267: INFO: Waiting up to 5m0s for pod "execpodbctbs" in namespace "services-65" to be "running" +Mar 7 03:22:58.272: INFO: Pod "execpodbctbs": Phase="Pending", Reason="", readiness=false. Elapsed: 4.445906ms +Mar 7 03:23:00.275: INFO: Pod "execpodbctbs": Phase="Running", Reason="", readiness=true. Elapsed: 2.007823527s +Mar 7 03:23:00.275: INFO: Pod "execpodbctbs" satisfied condition "running" +Mar 7 03:23:00.275: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1093879902 --namespace=services-65 exec execpodbctbs -- /bin/sh -x -c nslookup clusterip-service.services-65.svc.cluster.local' +Mar 7 03:23:00.495: INFO: stderr: "+ nslookup clusterip-service.services-65.svc.cluster.local\n" +Mar 7 03:23:00.495: INFO: stdout: "Server:\t\t10.96.0.10\nAddress:\t10.96.0.10#53\n\nclusterip-service.services-65.svc.cluster.local\tcanonical name = externalsvc.services-65.svc.cluster.local.\nName:\texternalsvc.services-65.svc.cluster.local\nAddress: 10.97.173.121\n\n" +STEP: deleting ReplicationController externalsvc in namespace services-65, will wait for the garbage collector to delete the pods 03/07/23 03:23:00.495 +Mar 7 03:23:00.554: INFO: Deleting ReplicationController externalsvc took: 5.069991ms +Mar 7 03:23:00.655: INFO: Terminating ReplicationController externalsvc pods took: 101.020281ms +Mar 7 03:23:02.578: INFO: Cleaning up the ClusterIP to ExternalName test service +[AfterEach] [sig-network] Services + test/e2e/framework/framework.go:187 +Mar 7 03:23:02.591: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "services-65" for this suite. 
03/07/23 03:23:02.596 +[AfterEach] [sig-network] Services + test/e2e/network/service.go:762 +{"msg":"PASSED [sig-network] Services should be able to change the type from ClusterIP to ExternalName [Conformance]","completed":117,"skipped":1993,"failed":0} +------------------------------ +• [SLOW TEST] [7.504 seconds] +[sig-network] Services +test/e2e/network/common/framework.go:23 + should be able to change the type from ClusterIP to ExternalName [Conformance] + test/e2e/network/service.go:1481 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-network] Services + test/e2e/framework/framework.go:186 + STEP: Creating a kubernetes client 03/07/23 03:22:55.097 + Mar 7 03:22:55.097: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 + STEP: Building a namespace api object, basename services 03/07/23 03:22:55.098 + STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 03:22:55.11 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 03:22:55.112 + [BeforeEach] [sig-network] Services + test/e2e/network/service.go:758 + [It] should be able to change the type from ClusterIP to ExternalName [Conformance] + test/e2e/network/service.go:1481 + STEP: creating a service clusterip-service with the type=ClusterIP in namespace services-65 03/07/23 03:22:55.114 + STEP: Creating active service to test reachability when its FQDN is referred as externalName for another service 03/07/23 03:22:55.126 + STEP: creating service externalsvc in namespace services-65 03/07/23 03:22:55.127 + STEP: creating replication controller externalsvc in namespace services-65 03/07/23 03:22:55.168 + I0307 03:22:55.174447 22 runners.go:193] Created replication controller with name: externalsvc, namespace: services-65, replica count: 2 + I0307 03:22:58.225381 22 runners.go:193] externalsvc Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady + STEP: changing the ClusterIP service to type=ExternalName 03/07/23 03:22:58.227 + Mar 7 03:22:58.259: INFO: Creating new exec pod + Mar 7 03:22:58.267: INFO: Waiting up to 5m0s for pod "execpodbctbs" in namespace "services-65" to be "running" + Mar 7 03:22:58.272: INFO: Pod "execpodbctbs": Phase="Pending", Reason="", readiness=false. Elapsed: 4.445906ms + Mar 7 03:23:00.275: INFO: Pod "execpodbctbs": Phase="Running", Reason="", readiness=true. 
Elapsed: 2.007823527s + Mar 7 03:23:00.275: INFO: Pod "execpodbctbs" satisfied condition "running" + Mar 7 03:23:00.275: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1093879902 --namespace=services-65 exec execpodbctbs -- /bin/sh -x -c nslookup clusterip-service.services-65.svc.cluster.local' + Mar 7 03:23:00.495: INFO: stderr: "+ nslookup clusterip-service.services-65.svc.cluster.local\n" + Mar 7 03:23:00.495: INFO: stdout: "Server:\t\t10.96.0.10\nAddress:\t10.96.0.10#53\n\nclusterip-service.services-65.svc.cluster.local\tcanonical name = externalsvc.services-65.svc.cluster.local.\nName:\texternalsvc.services-65.svc.cluster.local\nAddress: 10.97.173.121\n\n" + STEP: deleting ReplicationController externalsvc in namespace services-65, will wait for the garbage collector to delete the pods 03/07/23 03:23:00.495 + Mar 7 03:23:00.554: INFO: Deleting ReplicationController externalsvc took: 5.069991ms + Mar 7 03:23:00.655: INFO: Terminating ReplicationController externalsvc pods took: 101.020281ms + Mar 7 03:23:02.578: INFO: Cleaning up the ClusterIP to ExternalName test service + [AfterEach] [sig-network] Services + test/e2e/framework/framework.go:187 + Mar 7 03:23:02.591: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + STEP: Destroying namespace "services-65" for this suite. 03/07/23 03:23:02.596 + [AfterEach] [sig-network] Services + test/e2e/network/service.go:762 + << End Captured GinkgoWriter Output +------------------------------ +SSSS +------------------------------ +[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + should include webhook resources in discovery documents [Conformance] + test/e2e/apimachinery/webhook.go:116 +[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/framework/framework.go:186 +STEP: Creating a kubernetes client 03/07/23 03:23:02.601 +Mar 7 03:23:02.601: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 +STEP: Building a namespace api object, basename webhook 03/07/23 03:23:02.602 +STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 03:23:02.616 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 03:23:02.619 +[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/apimachinery/webhook.go:89 +STEP: Setting up server cert 03/07/23 03:23:02.632 +STEP: Create role binding to let webhook read extension-apiserver-authentication 03/07/23 03:23:03.197 +STEP: Deploying the webhook pod 03/07/23 03:23:03.203 +STEP: Wait for the deployment to be ready 03/07/23 03:23:03.212 +Mar 7 03:23:03.217: INFO: new replicaset for deployment "sample-webhook-deployment" is yet to be created +STEP: Deploying the webhook service 03/07/23 03:23:05.226 +STEP: Verifying the service has paired with the endpoint 03/07/23 03:23:05.248 +Mar 7 03:23:06.249: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 +[It] should include webhook resources in discovery documents [Conformance] + test/e2e/apimachinery/webhook.go:116 +STEP: fetching the /apis discovery document 03/07/23 03:23:06.252 +STEP: finding the admissionregistration.k8s.io API group in the /apis discovery document 03/07/23 03:23:06.253 +STEP: finding the admissionregistration.k8s.io/v1 API group/version in the /apis discovery document 03/07/23 03:23:06.253 +STEP: fetching the /apis/admissionregistration.k8s.io discovery document 03/07/23 03:23:06.253 +STEP: finding the admissionregistration.k8s.io/v1 API group/version in the 
/apis/admissionregistration.k8s.io discovery document 03/07/23 03:23:06.254 +STEP: fetching the /apis/admissionregistration.k8s.io/v1 discovery document 03/07/23 03:23:06.254 +STEP: finding mutatingwebhookconfigurations and validatingwebhookconfigurations resources in the /apis/admissionregistration.k8s.io/v1 discovery document 03/07/23 03:23:06.254 +[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/framework/framework.go:187 +Mar 7 03:23:06.254: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "webhook-1807" for this suite. 03/07/23 03:23:06.258 +STEP: Destroying namespace "webhook-1807-markers" for this suite. 03/07/23 03:23:06.282 +[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/apimachinery/webhook.go:104 +{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should include webhook resources in discovery documents [Conformance]","completed":118,"skipped":1997,"failed":0} +------------------------------ +• [3.727 seconds] +[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] +test/e2e/apimachinery/framework.go:23 + should include webhook resources in discovery documents [Conformance] + test/e2e/apimachinery/webhook.go:116 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/framework/framework.go:186 + STEP: Creating a kubernetes client 03/07/23 03:23:02.601 + Mar 7 03:23:02.601: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 + STEP: Building a namespace api object, basename webhook 03/07/23 03:23:02.602 + STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 03:23:02.616 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 03:23:02.619 + [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/apimachinery/webhook.go:89 + STEP: Setting up server cert 03/07/23 03:23:02.632 + STEP: Create role binding to let webhook read extension-apiserver-authentication 03/07/23 03:23:03.197 + STEP: Deploying the webhook pod 03/07/23 03:23:03.203 + STEP: Wait for the deployment to be ready 03/07/23 03:23:03.212 + Mar 7 03:23:03.217: INFO: new replicaset for deployment "sample-webhook-deployment" is yet to be created + STEP: Deploying the webhook service 03/07/23 03:23:05.226 + STEP: Verifying the service has paired with the endpoint 03/07/23 03:23:05.248 + Mar 7 03:23:06.249: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 + [It] should include webhook resources in discovery documents [Conformance] + test/e2e/apimachinery/webhook.go:116 + STEP: fetching the /apis discovery document 03/07/23 03:23:06.252 + STEP: finding the admissionregistration.k8s.io API group in the /apis discovery document 03/07/23 03:23:06.253 + STEP: finding the admissionregistration.k8s.io/v1 API group/version in the /apis discovery document 03/07/23 03:23:06.253 + STEP: fetching the /apis/admissionregistration.k8s.io discovery document 03/07/23 03:23:06.253 + STEP: finding the admissionregistration.k8s.io/v1 API group/version in the /apis/admissionregistration.k8s.io discovery document 03/07/23 03:23:06.254 + STEP: fetching the /apis/admissionregistration.k8s.io/v1 discovery document 03/07/23 03:23:06.254 + STEP: finding mutatingwebhookconfigurations and validatingwebhookconfigurations resources in the /apis/admissionregistration.k8s.io/v1 discovery document 03/07/23 03:23:06.254 + [AfterEach] 
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/framework/framework.go:187 + Mar 7 03:23:06.254: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + STEP: Destroying namespace "webhook-1807" for this suite. 03/07/23 03:23:06.258 + STEP: Destroying namespace "webhook-1807-markers" for this suite. 03/07/23 03:23:06.282 + [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/apimachinery/webhook.go:104 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSS +------------------------------ +[sig-scheduling] SchedulerPreemption [Serial] + validates lower priority pod preemption by critical pod [Conformance] + test/e2e/scheduling/preemption.go:218 +[BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] + test/e2e/framework/framework.go:186 +STEP: Creating a kubernetes client 03/07/23 03:23:06.329 +Mar 7 03:23:06.329: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 +STEP: Building a namespace api object, basename sched-preemption 03/07/23 03:23:06.33 +STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 03:23:06.357 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 03:23:06.366 +[BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] + test/e2e/scheduling/preemption.go:92 +Mar 7 03:23:06.379: INFO: Waiting up to 1m0s for all nodes to be ready +Mar 7 03:24:06.428: INFO: Waiting for terminating namespaces to be deleted... +[It] validates lower priority pod preemption by critical pod [Conformance] + test/e2e/scheduling/preemption.go:218 +STEP: Create pods that use 4/5 of node resources. 03/07/23 03:24:06.433 +Mar 7 03:24:06.451: INFO: Created pod: pod0-0-sched-preemption-low-priority +Mar 7 03:24:06.456: INFO: Created pod: pod0-1-sched-preemption-medium-priority +Mar 7 03:24:06.482: INFO: Created pod: pod1-0-sched-preemption-medium-priority +Mar 7 03:24:06.489: INFO: Created pod: pod1-1-sched-preemption-medium-priority +Mar 7 03:24:06.508: INFO: Created pod: pod2-0-sched-preemption-medium-priority +Mar 7 03:24:06.514: INFO: Created pod: pod2-1-sched-preemption-medium-priority +STEP: Wait for pods to be scheduled. 03/07/23 03:24:06.514 +Mar 7 03:24:06.515: INFO: Waiting up to 5m0s for pod "pod0-0-sched-preemption-low-priority" in namespace "sched-preemption-5229" to be "running" +Mar 7 03:24:06.517: INFO: Pod "pod0-0-sched-preemption-low-priority": Phase="Pending", Reason="", readiness=false. Elapsed: 2.505535ms +Mar 7 03:24:08.521: INFO: Pod "pod0-0-sched-preemption-low-priority": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006411924s +Mar 7 03:24:10.522: INFO: Pod "pod0-0-sched-preemption-low-priority": Phase="Pending", Reason="", readiness=false. Elapsed: 4.007079225s +Mar 7 03:24:12.523: INFO: Pod "pod0-0-sched-preemption-low-priority": Phase="Pending", Reason="", readiness=false. Elapsed: 6.008689794s +Mar 7 03:24:14.521: INFO: Pod "pod0-0-sched-preemption-low-priority": Phase="Pending", Reason="", readiness=false. Elapsed: 8.006167784s +Mar 7 03:24:16.521: INFO: Pod "pod0-0-sched-preemption-low-priority": Phase="Pending", Reason="", readiness=false. Elapsed: 10.005892834s +Mar 7 03:24:18.521: INFO: Pod "pod0-0-sched-preemption-low-priority": Phase="Running", Reason="", readiness=true. 
Elapsed: 12.006637905s +Mar 7 03:24:18.521: INFO: Pod "pod0-0-sched-preemption-low-priority" satisfied condition "running" +Mar 7 03:24:18.521: INFO: Waiting up to 5m0s for pod "pod0-1-sched-preemption-medium-priority" in namespace "sched-preemption-5229" to be "running" +Mar 7 03:24:18.524: INFO: Pod "pod0-1-sched-preemption-medium-priority": Phase="Running", Reason="", readiness=true. Elapsed: 2.453393ms +Mar 7 03:24:18.524: INFO: Pod "pod0-1-sched-preemption-medium-priority" satisfied condition "running" +Mar 7 03:24:18.524: INFO: Waiting up to 5m0s for pod "pod1-0-sched-preemption-medium-priority" in namespace "sched-preemption-5229" to be "running" +Mar 7 03:24:18.526: INFO: Pod "pod1-0-sched-preemption-medium-priority": Phase="Running", Reason="", readiness=true. Elapsed: 2.205289ms +Mar 7 03:24:18.526: INFO: Pod "pod1-0-sched-preemption-medium-priority" satisfied condition "running" +Mar 7 03:24:18.526: INFO: Waiting up to 5m0s for pod "pod1-1-sched-preemption-medium-priority" in namespace "sched-preemption-5229" to be "running" +Mar 7 03:24:18.528: INFO: Pod "pod1-1-sched-preemption-medium-priority": Phase="Running", Reason="", readiness=true. Elapsed: 1.687571ms +Mar 7 03:24:18.528: INFO: Pod "pod1-1-sched-preemption-medium-priority" satisfied condition "running" +Mar 7 03:24:18.528: INFO: Waiting up to 5m0s for pod "pod2-0-sched-preemption-medium-priority" in namespace "sched-preemption-5229" to be "running" +Mar 7 03:24:18.530: INFO: Pod "pod2-0-sched-preemption-medium-priority": Phase="Running", Reason="", readiness=true. Elapsed: 1.990453ms +Mar 7 03:24:18.530: INFO: Pod "pod2-0-sched-preemption-medium-priority" satisfied condition "running" +Mar 7 03:24:18.530: INFO: Waiting up to 5m0s for pod "pod2-1-sched-preemption-medium-priority" in namespace "sched-preemption-5229" to be "running" +Mar 7 03:24:18.532: INFO: Pod "pod2-1-sched-preemption-medium-priority": Phase="Running", Reason="", readiness=true. Elapsed: 1.837465ms +Mar 7 03:24:18.532: INFO: Pod "pod2-1-sched-preemption-medium-priority" satisfied condition "running" +STEP: Run a critical pod that use same resources as that of a lower priority pod 03/07/23 03:24:18.532 +Mar 7 03:24:18.538: INFO: Waiting up to 2m0s for pod "critical-pod" in namespace "kube-system" to be "running" +Mar 7 03:24:18.540: INFO: Pod "critical-pod": Phase="Pending", Reason="", readiness=false. Elapsed: 1.990782ms +Mar 7 03:24:20.545: INFO: Pod "critical-pod": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006681424s +Mar 7 03:24:22.545: INFO: Pod "critical-pod": Phase="Running", Reason="", readiness=true. Elapsed: 4.006506943s +Mar 7 03:24:22.545: INFO: Pod "critical-pod" satisfied condition "running" +[AfterEach] [sig-scheduling] SchedulerPreemption [Serial] + test/e2e/framework/framework.go:187 +Mar 7 03:24:22.606: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "sched-preemption-5229" for this suite. 
03/07/23 03:24:22.611 +[AfterEach] [sig-scheduling] SchedulerPreemption [Serial] + test/e2e/scheduling/preemption.go:80 +{"msg":"PASSED [sig-scheduling] SchedulerPreemption [Serial] validates lower priority pod preemption by critical pod [Conformance]","completed":119,"skipped":2008,"failed":0} +------------------------------ +• [SLOW TEST] [76.327 seconds] +[sig-scheduling] SchedulerPreemption [Serial] +test/e2e/scheduling/framework.go:40 + validates lower priority pod preemption by critical pod [Conformance] + test/e2e/scheduling/preemption.go:218 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] + test/e2e/framework/framework.go:186 + STEP: Creating a kubernetes client 03/07/23 03:23:06.329 + Mar 7 03:23:06.329: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 + STEP: Building a namespace api object, basename sched-preemption 03/07/23 03:23:06.33 + STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 03:23:06.357 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 03:23:06.366 + [BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] + test/e2e/scheduling/preemption.go:92 + Mar 7 03:23:06.379: INFO: Waiting up to 1m0s for all nodes to be ready + Mar 7 03:24:06.428: INFO: Waiting for terminating namespaces to be deleted... + [It] validates lower priority pod preemption by critical pod [Conformance] + test/e2e/scheduling/preemption.go:218 + STEP: Create pods that use 4/5 of node resources. 03/07/23 03:24:06.433 + Mar 7 03:24:06.451: INFO: Created pod: pod0-0-sched-preemption-low-priority + Mar 7 03:24:06.456: INFO: Created pod: pod0-1-sched-preemption-medium-priority + Mar 7 03:24:06.482: INFO: Created pod: pod1-0-sched-preemption-medium-priority + Mar 7 03:24:06.489: INFO: Created pod: pod1-1-sched-preemption-medium-priority + Mar 7 03:24:06.508: INFO: Created pod: pod2-0-sched-preemption-medium-priority + Mar 7 03:24:06.514: INFO: Created pod: pod2-1-sched-preemption-medium-priority + STEP: Wait for pods to be scheduled. 03/07/23 03:24:06.514 + Mar 7 03:24:06.515: INFO: Waiting up to 5m0s for pod "pod0-0-sched-preemption-low-priority" in namespace "sched-preemption-5229" to be "running" + Mar 7 03:24:06.517: INFO: Pod "pod0-0-sched-preemption-low-priority": Phase="Pending", Reason="", readiness=false. Elapsed: 2.505535ms + Mar 7 03:24:08.521: INFO: Pod "pod0-0-sched-preemption-low-priority": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006411924s + Mar 7 03:24:10.522: INFO: Pod "pod0-0-sched-preemption-low-priority": Phase="Pending", Reason="", readiness=false. Elapsed: 4.007079225s + Mar 7 03:24:12.523: INFO: Pod "pod0-0-sched-preemption-low-priority": Phase="Pending", Reason="", readiness=false. Elapsed: 6.008689794s + Mar 7 03:24:14.521: INFO: Pod "pod0-0-sched-preemption-low-priority": Phase="Pending", Reason="", readiness=false. Elapsed: 8.006167784s + Mar 7 03:24:16.521: INFO: Pod "pod0-0-sched-preemption-low-priority": Phase="Pending", Reason="", readiness=false. Elapsed: 10.005892834s + Mar 7 03:24:18.521: INFO: Pod "pod0-0-sched-preemption-low-priority": Phase="Running", Reason="", readiness=true. 
Elapsed: 12.006637905s + Mar 7 03:24:18.521: INFO: Pod "pod0-0-sched-preemption-low-priority" satisfied condition "running" + Mar 7 03:24:18.521: INFO: Waiting up to 5m0s for pod "pod0-1-sched-preemption-medium-priority" in namespace "sched-preemption-5229" to be "running" + Mar 7 03:24:18.524: INFO: Pod "pod0-1-sched-preemption-medium-priority": Phase="Running", Reason="", readiness=true. Elapsed: 2.453393ms + Mar 7 03:24:18.524: INFO: Pod "pod0-1-sched-preemption-medium-priority" satisfied condition "running" + Mar 7 03:24:18.524: INFO: Waiting up to 5m0s for pod "pod1-0-sched-preemption-medium-priority" in namespace "sched-preemption-5229" to be "running" + Mar 7 03:24:18.526: INFO: Pod "pod1-0-sched-preemption-medium-priority": Phase="Running", Reason="", readiness=true. Elapsed: 2.205289ms + Mar 7 03:24:18.526: INFO: Pod "pod1-0-sched-preemption-medium-priority" satisfied condition "running" + Mar 7 03:24:18.526: INFO: Waiting up to 5m0s for pod "pod1-1-sched-preemption-medium-priority" in namespace "sched-preemption-5229" to be "running" + Mar 7 03:24:18.528: INFO: Pod "pod1-1-sched-preemption-medium-priority": Phase="Running", Reason="", readiness=true. Elapsed: 1.687571ms + Mar 7 03:24:18.528: INFO: Pod "pod1-1-sched-preemption-medium-priority" satisfied condition "running" + Mar 7 03:24:18.528: INFO: Waiting up to 5m0s for pod "pod2-0-sched-preemption-medium-priority" in namespace "sched-preemption-5229" to be "running" + Mar 7 03:24:18.530: INFO: Pod "pod2-0-sched-preemption-medium-priority": Phase="Running", Reason="", readiness=true. Elapsed: 1.990453ms + Mar 7 03:24:18.530: INFO: Pod "pod2-0-sched-preemption-medium-priority" satisfied condition "running" + Mar 7 03:24:18.530: INFO: Waiting up to 5m0s for pod "pod2-1-sched-preemption-medium-priority" in namespace "sched-preemption-5229" to be "running" + Mar 7 03:24:18.532: INFO: Pod "pod2-1-sched-preemption-medium-priority": Phase="Running", Reason="", readiness=true. Elapsed: 1.837465ms + Mar 7 03:24:18.532: INFO: Pod "pod2-1-sched-preemption-medium-priority" satisfied condition "running" + STEP: Run a critical pod that use same resources as that of a lower priority pod 03/07/23 03:24:18.532 + Mar 7 03:24:18.538: INFO: Waiting up to 2m0s for pod "critical-pod" in namespace "kube-system" to be "running" + Mar 7 03:24:18.540: INFO: Pod "critical-pod": Phase="Pending", Reason="", readiness=false. Elapsed: 1.990782ms + Mar 7 03:24:20.545: INFO: Pod "critical-pod": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006681424s + Mar 7 03:24:22.545: INFO: Pod "critical-pod": Phase="Running", Reason="", readiness=true. Elapsed: 4.006506943s + Mar 7 03:24:22.545: INFO: Pod "critical-pod" satisfied condition "running" + [AfterEach] [sig-scheduling] SchedulerPreemption [Serial] + test/e2e/framework/framework.go:187 + Mar 7 03:24:22.606: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + STEP: Destroying namespace "sched-preemption-5229" for this suite. 
03/07/23 03:24:22.611 + [AfterEach] [sig-scheduling] SchedulerPreemption [Serial] + test/e2e/scheduling/preemption.go:80 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSSSSSSSSS +------------------------------ +[sig-storage] Secrets + should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] + test/e2e/common/storage/secrets_volume.go:98 +[BeforeEach] [sig-storage] Secrets + test/e2e/framework/framework.go:186 +STEP: Creating a kubernetes client 03/07/23 03:24:22.657 +Mar 7 03:24:22.657: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 +STEP: Building a namespace api object, basename secrets 03/07/23 03:24:22.661 +STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 03:24:22.674 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 03:24:22.676 +[It] should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] + test/e2e/common/storage/secrets_volume.go:98 +STEP: Creating secret with name secret-test-9dc27886-0816-40b4-9862-1fb4de6bde77 03/07/23 03:24:22.696 +STEP: Creating a pod to test consume secrets 03/07/23 03:24:22.701 +Mar 7 03:24:22.711: INFO: Waiting up to 5m0s for pod "pod-secrets-e4037889-4f28-4af5-bdea-e91778b071e8" in namespace "secrets-9231" to be "Succeeded or Failed" +Mar 7 03:24:22.713: INFO: Pod "pod-secrets-e4037889-4f28-4af5-bdea-e91778b071e8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.836267ms +Mar 7 03:24:24.716: INFO: Pod "pod-secrets-e4037889-4f28-4af5-bdea-e91778b071e8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005788514s +Mar 7 03:24:26.726: INFO: Pod "pod-secrets-e4037889-4f28-4af5-bdea-e91778b071e8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.015872974s +STEP: Saw pod success 03/07/23 03:24:26.727 +Mar 7 03:24:26.727: INFO: Pod "pod-secrets-e4037889-4f28-4af5-bdea-e91778b071e8" satisfied condition "Succeeded or Failed" +Mar 7 03:24:26.729: INFO: Trying to get logs from node node-2 pod pod-secrets-e4037889-4f28-4af5-bdea-e91778b071e8 container secret-volume-test: +STEP: delete the pod 03/07/23 03:24:26.74 +Mar 7 03:24:26.749: INFO: Waiting for pod pod-secrets-e4037889-4f28-4af5-bdea-e91778b071e8 to disappear +Mar 7 03:24:26.752: INFO: Pod pod-secrets-e4037889-4f28-4af5-bdea-e91778b071e8 no longer exists +[AfterEach] [sig-storage] Secrets + test/e2e/framework/framework.go:187 +Mar 7 03:24:26.752: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "secrets-9231" for this suite. 03/07/23 03:24:26.755 +STEP: Destroying namespace "secret-namespace-7605" for this suite. 
03/07/23 03:24:26.759 +{"msg":"PASSED [sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]","completed":120,"skipped":2026,"failed":0} +------------------------------ +• [4.107 seconds] +[sig-storage] Secrets +test/e2e/common/storage/framework.go:23 + should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] + test/e2e/common/storage/secrets_volume.go:98 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-storage] Secrets + test/e2e/framework/framework.go:186 + STEP: Creating a kubernetes client 03/07/23 03:24:22.657 + Mar 7 03:24:22.657: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 + STEP: Building a namespace api object, basename secrets 03/07/23 03:24:22.661 + STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 03:24:22.674 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 03:24:22.676 + [It] should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] + test/e2e/common/storage/secrets_volume.go:98 + STEP: Creating secret with name secret-test-9dc27886-0816-40b4-9862-1fb4de6bde77 03/07/23 03:24:22.696 + STEP: Creating a pod to test consume secrets 03/07/23 03:24:22.701 + Mar 7 03:24:22.711: INFO: Waiting up to 5m0s for pod "pod-secrets-e4037889-4f28-4af5-bdea-e91778b071e8" in namespace "secrets-9231" to be "Succeeded or Failed" + Mar 7 03:24:22.713: INFO: Pod "pod-secrets-e4037889-4f28-4af5-bdea-e91778b071e8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.836267ms + Mar 7 03:24:24.716: INFO: Pod "pod-secrets-e4037889-4f28-4af5-bdea-e91778b071e8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005788514s + Mar 7 03:24:26.726: INFO: Pod "pod-secrets-e4037889-4f28-4af5-bdea-e91778b071e8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.015872974s + STEP: Saw pod success 03/07/23 03:24:26.727 + Mar 7 03:24:26.727: INFO: Pod "pod-secrets-e4037889-4f28-4af5-bdea-e91778b071e8" satisfied condition "Succeeded or Failed" + Mar 7 03:24:26.729: INFO: Trying to get logs from node node-2 pod pod-secrets-e4037889-4f28-4af5-bdea-e91778b071e8 container secret-volume-test: + STEP: delete the pod 03/07/23 03:24:26.74 + Mar 7 03:24:26.749: INFO: Waiting for pod pod-secrets-e4037889-4f28-4af5-bdea-e91778b071e8 to disappear + Mar 7 03:24:26.752: INFO: Pod pod-secrets-e4037889-4f28-4af5-bdea-e91778b071e8 no longer exists + [AfterEach] [sig-storage] Secrets + test/e2e/framework/framework.go:187 + Mar 7 03:24:26.752: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + STEP: Destroying namespace "secrets-9231" for this suite. 03/07/23 03:24:26.755 + STEP: Destroying namespace "secret-namespace-7605" for this suite. 
03/07/23 03:24:26.759 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSSSSSSS +------------------------------ +[sig-storage] Downward API volume + should provide container's cpu request [NodeConformance] [Conformance] + test/e2e/common/storage/downwardapi_volume.go:220 +[BeforeEach] [sig-storage] Downward API volume + test/e2e/framework/framework.go:186 +STEP: Creating a kubernetes client 03/07/23 03:24:26.764 +Mar 7 03:24:26.764: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 +STEP: Building a namespace api object, basename downward-api 03/07/23 03:24:26.765 +STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 03:24:26.777 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 03:24:26.779 +[BeforeEach] [sig-storage] Downward API volume + test/e2e/common/storage/downwardapi_volume.go:43 +[It] should provide container's cpu request [NodeConformance] [Conformance] + test/e2e/common/storage/downwardapi_volume.go:220 +STEP: Creating a pod to test downward API volume plugin 03/07/23 03:24:26.781 +Mar 7 03:24:26.787: INFO: Waiting up to 5m0s for pod "downwardapi-volume-9e28b2f6-8cc7-41a4-93eb-cd1e933e7db6" in namespace "downward-api-7938" to be "Succeeded or Failed" +Mar 7 03:24:26.790: INFO: Pod "downwardapi-volume-9e28b2f6-8cc7-41a4-93eb-cd1e933e7db6": Phase="Pending", Reason="", readiness=false. Elapsed: 3.370469ms +Mar 7 03:24:28.794: INFO: Pod "downwardapi-volume-9e28b2f6-8cc7-41a4-93eb-cd1e933e7db6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007685391s +Mar 7 03:24:30.795: INFO: Pod "downwardapi-volume-9e28b2f6-8cc7-41a4-93eb-cd1e933e7db6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.008054663s +STEP: Saw pod success 03/07/23 03:24:30.795 +Mar 7 03:24:30.795: INFO: Pod "downwardapi-volume-9e28b2f6-8cc7-41a4-93eb-cd1e933e7db6" satisfied condition "Succeeded or Failed" +Mar 7 03:24:30.797: INFO: Trying to get logs from node node-2 pod downwardapi-volume-9e28b2f6-8cc7-41a4-93eb-cd1e933e7db6 container client-container: +STEP: delete the pod 03/07/23 03:24:30.801 +Mar 7 03:24:30.809: INFO: Waiting for pod downwardapi-volume-9e28b2f6-8cc7-41a4-93eb-cd1e933e7db6 to disappear +Mar 7 03:24:30.811: INFO: Pod downwardapi-volume-9e28b2f6-8cc7-41a4-93eb-cd1e933e7db6 no longer exists +[AfterEach] [sig-storage] Downward API volume + test/e2e/framework/framework.go:187 +Mar 7 03:24:30.811: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "downward-api-7938" for this suite. 
03/07/23 03:24:30.814 +{"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu request [NodeConformance] [Conformance]","completed":121,"skipped":2042,"failed":0} +------------------------------ +• [4.057 seconds] +[sig-storage] Downward API volume +test/e2e/common/storage/framework.go:23 + should provide container's cpu request [NodeConformance] [Conformance] + test/e2e/common/storage/downwardapi_volume.go:220 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-storage] Downward API volume + test/e2e/framework/framework.go:186 + STEP: Creating a kubernetes client 03/07/23 03:24:26.764 + Mar 7 03:24:26.764: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 + STEP: Building a namespace api object, basename downward-api 03/07/23 03:24:26.765 + STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 03:24:26.777 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 03:24:26.779 + [BeforeEach] [sig-storage] Downward API volume + test/e2e/common/storage/downwardapi_volume.go:43 + [It] should provide container's cpu request [NodeConformance] [Conformance] + test/e2e/common/storage/downwardapi_volume.go:220 + STEP: Creating a pod to test downward API volume plugin 03/07/23 03:24:26.781 + Mar 7 03:24:26.787: INFO: Waiting up to 5m0s for pod "downwardapi-volume-9e28b2f6-8cc7-41a4-93eb-cd1e933e7db6" in namespace "downward-api-7938" to be "Succeeded or Failed" + Mar 7 03:24:26.790: INFO: Pod "downwardapi-volume-9e28b2f6-8cc7-41a4-93eb-cd1e933e7db6": Phase="Pending", Reason="", readiness=false. Elapsed: 3.370469ms + Mar 7 03:24:28.794: INFO: Pod "downwardapi-volume-9e28b2f6-8cc7-41a4-93eb-cd1e933e7db6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007685391s + Mar 7 03:24:30.795: INFO: Pod "downwardapi-volume-9e28b2f6-8cc7-41a4-93eb-cd1e933e7db6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.008054663s + STEP: Saw pod success 03/07/23 03:24:30.795 + Mar 7 03:24:30.795: INFO: Pod "downwardapi-volume-9e28b2f6-8cc7-41a4-93eb-cd1e933e7db6" satisfied condition "Succeeded or Failed" + Mar 7 03:24:30.797: INFO: Trying to get logs from node node-2 pod downwardapi-volume-9e28b2f6-8cc7-41a4-93eb-cd1e933e7db6 container client-container: + STEP: delete the pod 03/07/23 03:24:30.801 + Mar 7 03:24:30.809: INFO: Waiting for pod downwardapi-volume-9e28b2f6-8cc7-41a4-93eb-cd1e933e7db6 to disappear + Mar 7 03:24:30.811: INFO: Pod downwardapi-volume-9e28b2f6-8cc7-41a4-93eb-cd1e933e7db6 no longer exists + [AfterEach] [sig-storage] Downward API volume + test/e2e/framework/framework.go:187 + Mar 7 03:24:30.811: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + STEP: Destroying namespace "downward-api-7938" for this suite. 
03/07/23 03:24:30.814 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSSSSSS +------------------------------ +[sig-api-machinery] Garbage collector + should delete RS created by deployment when not orphaning [Conformance] + test/e2e/apimachinery/garbage_collector.go:491 +[BeforeEach] [sig-api-machinery] Garbage collector + test/e2e/framework/framework.go:186 +STEP: Creating a kubernetes client 03/07/23 03:24:30.821 +Mar 7 03:24:30.821: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 +STEP: Building a namespace api object, basename gc 03/07/23 03:24:30.823 +STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 03:24:30.839 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 03:24:30.841 +[It] should delete RS created by deployment when not orphaning [Conformance] + test/e2e/apimachinery/garbage_collector.go:491 +STEP: create the deployment 03/07/23 03:24:30.842 +STEP: Wait for the Deployment to create new ReplicaSet 03/07/23 03:24:30.846 +STEP: delete the deployment 03/07/23 03:24:31.353 +STEP: wait for all rs to be garbage collected 03/07/23 03:24:31.364 +STEP: expected 0 rs, got 1 rs 03/07/23 03:24:31.369 +STEP: expected 0 pods, got 2 pods 03/07/23 03:24:31.375 +STEP: Gathering metrics 03/07/23 03:24:31.882 +Mar 7 03:24:31.928: INFO: Waiting up to 5m0s for pod "kube-controller-manager-node-2" in namespace "kube-system" to be "running and ready" +Mar 7 03:24:31.931: INFO: Pod "kube-controller-manager-node-2": Phase="Running", Reason="", readiness=true. Elapsed: 2.904467ms +Mar 7 03:24:31.931: INFO: The phase of Pod kube-controller-manager-node-2 is Running (Ready = true) +Mar 7 03:24:31.931: INFO: Pod "kube-controller-manager-node-2" satisfied condition "running and ready" +E0307 03:24:31.982882 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:24:31.982882 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:24:33.005990 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:24:33.005990 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network 
namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:24:34.026430 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:24:34.026430 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:24:36.075653 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:24:36.075653 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:24:37.098057 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:24:37.098057 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:24:38.119154 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": 
failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:24:38.119154 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:24:39.145035 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:24:39.145035 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:24:40.166792 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:24:40.166792 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:24:41.193053 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:24:41.193053 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect 
to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:24:42.212087 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:24:42.212087 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:24:45.292690 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:24:45.292690 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:24:46.317697 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:24:46.317697 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:24:47.337756 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 
inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:24:47.337756 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:24:48.362727 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:24:48.362727 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:24:51.434874 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:24:51.434874 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:24:52.458319 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:24:52.458319 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace 
\"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:24:55.253878 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:24:55.253878 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:24:56.274517 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:24:56.274517 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:24:57.294439 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:24:57.294439 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:24:58.314434 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace 
\"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:24:58.314434 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:25:01.381083 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:25:01.381083 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:25:02.403521 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:25:02.403521 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:25:03.424181 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:25:03.424181 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace 
\"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:25:05.232494 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:25:05.232494 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:25:07.273599 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:25:07.273599 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:25:09.321231 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:25:09.321231 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:25:11.367448 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace 
\"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:25:11.367448 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:25:14.436168 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:25:14.436168 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:25:15.455373 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:25:15.455373 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:25:16.232309 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:25:16.232309 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace 
\"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:25:18.286635 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:25:18.286635 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:25:19.305846 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:25:19.305846 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:25:20.325654 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:25:20.325654 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:25:21.345960 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace 
\"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:25:21.345960 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:25:22.366078 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:25:22.366078 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:25:23.390429 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:25:23.390429 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:25:24.416808 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:25:24.416808 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace 
\"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:25:26.462040 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:25:26.462040 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:25:27.233502 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:25:27.233502 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:25:28.253646 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:25:28.253646 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:25:29.277730 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace 
\"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:25:29.277730 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:25:31.319751 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:25:31.319751 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:25:33.366935 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:25:33.366935 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:25:34.391666 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:25:34.391666 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace 
\"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:25:36.439481 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:25:36.439481 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:25:37.459914 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:25:37.459914 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:25:38.234537 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:25:38.234537 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:25:39.259894 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace 
\"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:25:39.259894 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:25:40.281572 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:25:40.281572 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:25:41.312120 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:25:41.312120 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:25:42.335224 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:25:42.335224 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace 
\"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:25:43.357620 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:25:43.357620 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:25:44.378873 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:25:44.378873 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:25:46.422908 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:25:46.422908 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:25:48.469245 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace 
\"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:25:48.469245 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:25:48.492071 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:25:48.492071 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:25:49.511664 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:25:49.511664 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:25:52.587885 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:25:52.587885 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace 
\"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:25:53.616742 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:25:53.616742 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:25:58.730640 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +Mar 7 03:25:58.730: INFO: MetricsGrabber failed grab metrics. Skipping metrics gathering. +[AfterEach] [sig-api-machinery] Garbage collector + test/e2e/framework/framework.go:187 +Mar 7 03:25:58.730: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +E0307 03:25:58.730640 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +STEP: Destroying namespace "gc-8532" for this suite. 
03/07/23 03:25:58.735 +{"msg":"PASSED [sig-api-machinery] Garbage collector should delete RS created by deployment when not orphaning [Conformance]","completed":122,"skipped":2057,"failed":0} +------------------------------ +• [SLOW TEST] [87.918 seconds] +[sig-api-machinery] Garbage collector +test/e2e/apimachinery/framework.go:23 + should delete RS created by deployment when not orphaning [Conformance] + test/e2e/apimachinery/garbage_collector.go:491 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-api-machinery] Garbage collector + test/e2e/framework/framework.go:186 + STEP: Creating a kubernetes client 03/07/23 03:24:30.821 + Mar 7 03:24:30.821: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 + STEP: Building a namespace api object, basename gc 03/07/23 03:24:30.823 + STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 03:24:30.839 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 03:24:30.841 + [It] should delete RS created by deployment when not orphaning [Conformance] + test/e2e/apimachinery/garbage_collector.go:491 + STEP: create the deployment 03/07/23 03:24:30.842 + STEP: Wait for the Deployment to create new ReplicaSet 03/07/23 03:24:30.846 + STEP: delete the deployment 03/07/23 03:24:31.353 + STEP: wait for all rs to be garbage collected 03/07/23 03:24:31.364 + STEP: expected 0 rs, got 1 rs 03/07/23 03:24:31.369 + STEP: expected 0 pods, got 2 pods 03/07/23 03:24:31.375 + STEP: Gathering metrics 03/07/23 03:24:31.882 + Mar 7 03:24:31.928: INFO: Waiting up to 5m0s for pod "kube-controller-manager-node-2" in namespace "kube-system" to be "running and ready" + Mar 7 03:24:31.931: INFO: Pod "kube-controller-manager-node-2": Phase="Running", Reason="", readiness=true. Elapsed: 2.904467ms + Mar 7 03:24:31.931: INFO: The phase of Pod kube-controller-manager-node-2 is Running (Ready = true) + Mar 7 03:24:31.931: INFO: Pod "kube-controller-manager-node-2" satisfied condition "running and ready" + E0307 03:24:31.982882 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " + E0307 03:24:33.005990 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " + E0307 03:24:34.026430 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " + E0307 03:24:36.075653 22 
dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " + E0307 03:24:37.098057 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " + E0307 03:24:38.119154 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " + E0307 03:24:39.145035 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " + E0307 03:24:40.166792 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " + E0307 03:24:41.193053 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " + E0307 03:24:42.212087 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " + E0307 03:24:45.292690 22 dial.go:124] 
"an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " + E0307 03:24:46.317697 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " + E0307 03:24:47.337756 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " + E0307 03:24:48.362727 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " + E0307 03:24:51.434874 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " + E0307 03:24:52.458319 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " + E0307 03:24:55.253878 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " + E0307 03:24:56.274517 22 dial.go:124] "an error 
occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " + E0307 03:24:57.294439 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " + E0307 03:24:58.314434 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " + E0307 03:25:01.381083 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " + E0307 03:25:02.403521 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " + E0307 03:25:03.424181 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " + E0307 03:25:05.232494 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " + E0307 03:25:07.273599 22 dial.go:124] "an error occurred 
connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " + E0307 03:25:09.321231 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " + E0307 03:25:11.367448 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " + E0307 03:25:14.436168 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " + E0307 03:25:15.455373 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " + E0307 03:25:16.232309 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " + E0307 03:25:18.286635 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " + E0307 03:25:19.305846 22 dial.go:124] "an error occurred connecting 
to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " + E0307 03:25:20.325654 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " + E0307 03:25:21.345960 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " + E0307 03:25:22.366078 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " + E0307 03:25:23.390429 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " + E0307 03:25:24.416808 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " + E0307 03:25:26.462040 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " + E0307 03:25:27.233502 22 dial.go:124] "an error occurred connecting to the 
remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " + E0307 03:25:28.253646 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " + E0307 03:25:29.277730 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " + E0307 03:25:31.319751 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " + E0307 03:25:33.366935 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " + E0307 03:25:34.391666 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " + E0307 03:25:36.439481 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " + E0307 03:25:37.459914 22 dial.go:124] "an error occurred connecting to the remote port" 
err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " + E0307 03:25:38.234537 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " + E0307 03:25:39.259894 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " + E0307 03:25:40.281572 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " + E0307 03:25:41.312120 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " + E0307 03:25:42.335224 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " + E0307 03:25:43.357620 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " + E0307 03:25:44.378873 22 dial.go:124] "an error occurred connecting to the remote port" err="error 
forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " + E0307 03:25:46.422908 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " + E0307 03:25:48.469245 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " + E0307 03:25:48.492071 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " + E0307 03:25:49.511664 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " + E0307 03:25:52.587885 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " + E0307 03:25:53.616742 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " + E0307 03:25:58.730640 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding 
port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " + Mar 7 03:25:58.730: INFO: MetricsGrabber failed grab metrics. Skipping metrics gathering. + [AfterEach] [sig-api-machinery] Garbage collector + test/e2e/framework/framework.go:187 + Mar 7 03:25:58.730: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + STEP: Destroying namespace "gc-8532" for this suite. 03/07/23 03:25:58.735 + << End Captured GinkgoWriter Output +------------------------------ +SSSS +------------------------------ +[sig-network] Services + should test the lifecycle of an Endpoint [Conformance] + test/e2e/network/service.go:3231 +[BeforeEach] [sig-network] Services + test/e2e/framework/framework.go:186 +STEP: Creating a kubernetes client 03/07/23 03:25:58.741 +Mar 7 03:25:58.743: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 +STEP: Building a namespace api object, basename services 03/07/23 03:25:58.744 +STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 03:25:58.757 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 03:25:58.759 +[BeforeEach] [sig-network] Services + test/e2e/network/service.go:758 +[It] should test the lifecycle of an Endpoint [Conformance] + test/e2e/network/service.go:3231 +STEP: creating an Endpoint 03/07/23 03:25:58.764 +STEP: waiting for available Endpoint 03/07/23 03:25:58.767 +STEP: listing all Endpoints 03/07/23 03:25:58.768 +STEP: updating the Endpoint 03/07/23 03:25:58.773 +STEP: fetching the Endpoint 03/07/23 03:25:58.777 +STEP: patching the Endpoint 03/07/23 03:25:58.779 +STEP: fetching the Endpoint 03/07/23 03:25:58.784 +STEP: deleting the Endpoint by Collection 03/07/23 03:25:58.786 +STEP: waiting for Endpoint deletion 03/07/23 03:25:58.792 +STEP: fetching the Endpoint 03/07/23 03:25:58.793 +[AfterEach] [sig-network] Services + test/e2e/framework/framework.go:187 +Mar 7 03:25:58.795: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "services-4228" for this suite. 
03/07/23 03:25:58.799 +[AfterEach] [sig-network] Services + test/e2e/network/service.go:762 +{"msg":"PASSED [sig-network] Services should test the lifecycle of an Endpoint [Conformance]","completed":123,"skipped":2061,"failed":0} +------------------------------ +• [0.063 seconds] +[sig-network] Services +test/e2e/network/common/framework.go:23 + should test the lifecycle of an Endpoint [Conformance] + test/e2e/network/service.go:3231 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-network] Services + test/e2e/framework/framework.go:186 + STEP: Creating a kubernetes client 03/07/23 03:25:58.741 + Mar 7 03:25:58.743: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 + STEP: Building a namespace api object, basename services 03/07/23 03:25:58.744 + STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 03:25:58.757 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 03:25:58.759 + [BeforeEach] [sig-network] Services + test/e2e/network/service.go:758 + [It] should test the lifecycle of an Endpoint [Conformance] + test/e2e/network/service.go:3231 + STEP: creating an Endpoint 03/07/23 03:25:58.764 + STEP: waiting for available Endpoint 03/07/23 03:25:58.767 + STEP: listing all Endpoints 03/07/23 03:25:58.768 + STEP: updating the Endpoint 03/07/23 03:25:58.773 + STEP: fetching the Endpoint 03/07/23 03:25:58.777 + STEP: patching the Endpoint 03/07/23 03:25:58.779 + STEP: fetching the Endpoint 03/07/23 03:25:58.784 + STEP: deleting the Endpoint by Collection 03/07/23 03:25:58.786 + STEP: waiting for Endpoint deletion 03/07/23 03:25:58.792 + STEP: fetching the Endpoint 03/07/23 03:25:58.793 + [AfterEach] [sig-network] Services + test/e2e/framework/framework.go:187 + Mar 7 03:25:58.795: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + STEP: Destroying namespace "services-4228" for this suite. 03/07/23 03:25:58.799 + [AfterEach] [sig-network] Services + test/e2e/network/service.go:762 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-node] Pods + should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] + test/e2e/common/node/pods.go:397 +[BeforeEach] [sig-node] Pods + test/e2e/framework/framework.go:186 +STEP: Creating a kubernetes client 03/07/23 03:25:58.805 +Mar 7 03:25:58.805: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 +STEP: Building a namespace api object, basename pods 03/07/23 03:25:58.805 +STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 03:25:58.822 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 03:25:58.826 +[BeforeEach] [sig-node] Pods + test/e2e/common/node/pods.go:193 +[It] should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] + test/e2e/common/node/pods.go:397 +STEP: creating the pod 03/07/23 03:25:58.828 +STEP: submitting the pod to kubernetes 03/07/23 03:25:58.828 +Mar 7 03:25:58.834: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-ac3dab1b-057d-4e52-bc02-ee738cfd5325" in namespace "pods-9601" to be "running and ready" +Mar 7 03:25:58.836: INFO: Pod "pod-update-activedeadlineseconds-ac3dab1b-057d-4e52-bc02-ee738cfd5325": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.496392ms +Mar 7 03:25:58.836: INFO: The phase of Pod pod-update-activedeadlineseconds-ac3dab1b-057d-4e52-bc02-ee738cfd5325 is Pending, waiting for it to be Running (with Ready = true) +Mar 7 03:26:00.840: INFO: Pod "pod-update-activedeadlineseconds-ac3dab1b-057d-4e52-bc02-ee738cfd5325": Phase="Running", Reason="", readiness=true. Elapsed: 2.005915438s +Mar 7 03:26:00.840: INFO: The phase of Pod pod-update-activedeadlineseconds-ac3dab1b-057d-4e52-bc02-ee738cfd5325 is Running (Ready = true) +Mar 7 03:26:00.840: INFO: Pod "pod-update-activedeadlineseconds-ac3dab1b-057d-4e52-bc02-ee738cfd5325" satisfied condition "running and ready" +STEP: verifying the pod is in kubernetes 03/07/23 03:26:00.842 +STEP: updating the pod 03/07/23 03:26:00.844 +Mar 7 03:26:01.373: INFO: Successfully updated pod "pod-update-activedeadlineseconds-ac3dab1b-057d-4e52-bc02-ee738cfd5325" +Mar 7 03:26:01.373: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-ac3dab1b-057d-4e52-bc02-ee738cfd5325" in namespace "pods-9601" to be "terminated with reason DeadlineExceeded" +Mar 7 03:26:01.377: INFO: Pod "pod-update-activedeadlineseconds-ac3dab1b-057d-4e52-bc02-ee738cfd5325": Phase="Running", Reason="", readiness=true. Elapsed: 3.688028ms +Mar 7 03:26:03.408: INFO: Pod "pod-update-activedeadlineseconds-ac3dab1b-057d-4e52-bc02-ee738cfd5325": Phase="Running", Reason="", readiness=true. Elapsed: 2.034909627s +Mar 7 03:26:05.380: INFO: Pod "pod-update-activedeadlineseconds-ac3dab1b-057d-4e52-bc02-ee738cfd5325": Phase="Failed", Reason="DeadlineExceeded", readiness=false. Elapsed: 4.006544497s +Mar 7 03:26:05.380: INFO: Pod "pod-update-activedeadlineseconds-ac3dab1b-057d-4e52-bc02-ee738cfd5325" satisfied condition "terminated with reason DeadlineExceeded" +[AfterEach] [sig-node] Pods + test/e2e/framework/framework.go:187 +Mar 7 03:26:05.380: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "pods-9601" for this suite. 03/07/23 03:26:05.386 +{"msg":"PASSED [sig-node] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]","completed":124,"skipped":2103,"failed":0} +------------------------------ +• [SLOW TEST] [6.587 seconds] +[sig-node] Pods +test/e2e/common/node/framework.go:23 + should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] + test/e2e/common/node/pods.go:397 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-node] Pods + test/e2e/framework/framework.go:186 + STEP: Creating a kubernetes client 03/07/23 03:25:58.805 + Mar 7 03:25:58.805: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 + STEP: Building a namespace api object, basename pods 03/07/23 03:25:58.805 + STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 03:25:58.822 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 03:25:58.826 + [BeforeEach] [sig-node] Pods + test/e2e/common/node/pods.go:193 + [It] should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] + test/e2e/common/node/pods.go:397 + STEP: creating the pod 03/07/23 03:25:58.828 + STEP: submitting the pod to kubernetes 03/07/23 03:25:58.828 + Mar 7 03:25:58.834: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-ac3dab1b-057d-4e52-bc02-ee738cfd5325" in namespace "pods-9601" to be "running and ready" + Mar 7 03:25:58.836: INFO: Pod "pod-update-activedeadlineseconds-ac3dab1b-057d-4e52-bc02-ee738cfd5325": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.496392ms + Mar 7 03:25:58.836: INFO: The phase of Pod pod-update-activedeadlineseconds-ac3dab1b-057d-4e52-bc02-ee738cfd5325 is Pending, waiting for it to be Running (with Ready = true) + Mar 7 03:26:00.840: INFO: Pod "pod-update-activedeadlineseconds-ac3dab1b-057d-4e52-bc02-ee738cfd5325": Phase="Running", Reason="", readiness=true. Elapsed: 2.005915438s + Mar 7 03:26:00.840: INFO: The phase of Pod pod-update-activedeadlineseconds-ac3dab1b-057d-4e52-bc02-ee738cfd5325 is Running (Ready = true) + Mar 7 03:26:00.840: INFO: Pod "pod-update-activedeadlineseconds-ac3dab1b-057d-4e52-bc02-ee738cfd5325" satisfied condition "running and ready" + STEP: verifying the pod is in kubernetes 03/07/23 03:26:00.842 + STEP: updating the pod 03/07/23 03:26:00.844 + Mar 7 03:26:01.373: INFO: Successfully updated pod "pod-update-activedeadlineseconds-ac3dab1b-057d-4e52-bc02-ee738cfd5325" + Mar 7 03:26:01.373: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-ac3dab1b-057d-4e52-bc02-ee738cfd5325" in namespace "pods-9601" to be "terminated with reason DeadlineExceeded" + Mar 7 03:26:01.377: INFO: Pod "pod-update-activedeadlineseconds-ac3dab1b-057d-4e52-bc02-ee738cfd5325": Phase="Running", Reason="", readiness=true. Elapsed: 3.688028ms + Mar 7 03:26:03.408: INFO: Pod "pod-update-activedeadlineseconds-ac3dab1b-057d-4e52-bc02-ee738cfd5325": Phase="Running", Reason="", readiness=true. Elapsed: 2.034909627s + Mar 7 03:26:05.380: INFO: Pod "pod-update-activedeadlineseconds-ac3dab1b-057d-4e52-bc02-ee738cfd5325": Phase="Failed", Reason="DeadlineExceeded", readiness=false. Elapsed: 4.006544497s + Mar 7 03:26:05.380: INFO: Pod "pod-update-activedeadlineseconds-ac3dab1b-057d-4e52-bc02-ee738cfd5325" satisfied condition "terminated with reason DeadlineExceeded" + [AfterEach] [sig-node] Pods + test/e2e/framework/framework.go:187 + Mar 7 03:26:05.380: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + STEP: Destroying namespace "pods-9601" for this suite. 03/07/23 03:26:05.386 + << End Captured GinkgoWriter Output +------------------------------ +SSS +------------------------------ +[sig-node] Variable Expansion + should fail substituting values in a volume subpath with backticks [Slow] [Conformance] + test/e2e/common/node/expansion.go:151 +[BeforeEach] [sig-node] Variable Expansion + test/e2e/framework/framework.go:186 +STEP: Creating a kubernetes client 03/07/23 03:26:05.392 +Mar 7 03:26:05.392: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 +STEP: Building a namespace api object, basename var-expansion 03/07/23 03:26:05.393 +STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 03:26:05.409 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 03:26:05.411 +[It] should fail substituting values in a volume subpath with backticks [Slow] [Conformance] + test/e2e/common/node/expansion.go:151 +Mar 7 03:26:05.418: INFO: Waiting up to 2m0s for pod "var-expansion-247e4abb-1e97-4943-a169-166d1bcd08e3" in namespace "var-expansion-5696" to be "container 0 failed with reason CreateContainerConfigError" +Mar 7 03:26:05.422: INFO: Pod "var-expansion-247e4abb-1e97-4943-a169-166d1bcd08e3": Phase="Pending", Reason="", readiness=false. Elapsed: 3.509071ms +Mar 7 03:26:07.425: INFO: Pod "var-expansion-247e4abb-1e97-4943-a169-166d1bcd08e3": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.007430481s +Mar 7 03:26:07.425: INFO: Pod "var-expansion-247e4abb-1e97-4943-a169-166d1bcd08e3" satisfied condition "container 0 failed with reason CreateContainerConfigError" +Mar 7 03:26:07.426: INFO: Deleting pod "var-expansion-247e4abb-1e97-4943-a169-166d1bcd08e3" in namespace "var-expansion-5696" +Mar 7 03:26:07.430: INFO: Wait up to 5m0s for pod "var-expansion-247e4abb-1e97-4943-a169-166d1bcd08e3" to be fully deleted +[AfterEach] [sig-node] Variable Expansion + test/e2e/framework/framework.go:187 +Mar 7 03:26:09.434: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "var-expansion-5696" for this suite. 03/07/23 03:26:09.437 +{"msg":"PASSED [sig-node] Variable Expansion should fail substituting values in a volume subpath with backticks [Slow] [Conformance]","completed":125,"skipped":2106,"failed":0} +------------------------------ +• [4.050 seconds] +[sig-node] Variable Expansion +test/e2e/common/node/framework.go:23 + should fail substituting values in a volume subpath with backticks [Slow] [Conformance] + test/e2e/common/node/expansion.go:151 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-node] Variable Expansion + test/e2e/framework/framework.go:186 + STEP: Creating a kubernetes client 03/07/23 03:26:05.392 + Mar 7 03:26:05.392: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 + STEP: Building a namespace api object, basename var-expansion 03/07/23 03:26:05.393 + STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 03:26:05.409 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 03:26:05.411 + [It] should fail substituting values in a volume subpath with backticks [Slow] [Conformance] + test/e2e/common/node/expansion.go:151 + Mar 7 03:26:05.418: INFO: Waiting up to 2m0s for pod "var-expansion-247e4abb-1e97-4943-a169-166d1bcd08e3" in namespace "var-expansion-5696" to be "container 0 failed with reason CreateContainerConfigError" + Mar 7 03:26:05.422: INFO: Pod "var-expansion-247e4abb-1e97-4943-a169-166d1bcd08e3": Phase="Pending", Reason="", readiness=false. Elapsed: 3.509071ms + Mar 7 03:26:07.425: INFO: Pod "var-expansion-247e4abb-1e97-4943-a169-166d1bcd08e3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007430481s + Mar 7 03:26:07.425: INFO: Pod "var-expansion-247e4abb-1e97-4943-a169-166d1bcd08e3" satisfied condition "container 0 failed with reason CreateContainerConfigError" + Mar 7 03:26:07.426: INFO: Deleting pod "var-expansion-247e4abb-1e97-4943-a169-166d1bcd08e3" in namespace "var-expansion-5696" + Mar 7 03:26:07.430: INFO: Wait up to 5m0s for pod "var-expansion-247e4abb-1e97-4943-a169-166d1bcd08e3" to be fully deleted + [AfterEach] [sig-node] Variable Expansion + test/e2e/framework/framework.go:187 + Mar 7 03:26:09.434: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + STEP: Destroying namespace "var-expansion-5696" for this suite. 
03/07/23 03:26:09.437 + << End Captured GinkgoWriter Output +------------------------------ +SSSS +------------------------------ +[sig-storage] ConfigMap + updates should be reflected in volume [NodeConformance] [Conformance] + test/e2e/common/storage/configmap_volume.go:123 +[BeforeEach] [sig-storage] ConfigMap + test/e2e/framework/framework.go:186 +STEP: Creating a kubernetes client 03/07/23 03:26:09.443 +Mar 7 03:26:09.443: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 +STEP: Building a namespace api object, basename configmap 03/07/23 03:26:09.444 +STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 03:26:09.457 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 03:26:09.459 +[It] updates should be reflected in volume [NodeConformance] [Conformance] + test/e2e/common/storage/configmap_volume.go:123 +STEP: Creating configMap with name configmap-test-upd-2cff35d7-def5-4926-bc58-7f8362ec96c6 03/07/23 03:26:09.464 +STEP: Creating the pod 03/07/23 03:26:09.467 +Mar 7 03:26:09.473: INFO: Waiting up to 5m0s for pod "pod-configmaps-94a5f753-59f9-46ad-aa01-f166884387e8" in namespace "configmap-6999" to be "running and ready" +Mar 7 03:26:09.479: INFO: Pod "pod-configmaps-94a5f753-59f9-46ad-aa01-f166884387e8": Phase="Pending", Reason="", readiness=false. Elapsed: 6.541632ms +Mar 7 03:26:09.479: INFO: The phase of Pod pod-configmaps-94a5f753-59f9-46ad-aa01-f166884387e8 is Pending, waiting for it to be Running (with Ready = true) +Mar 7 03:26:11.482: INFO: Pod "pod-configmaps-94a5f753-59f9-46ad-aa01-f166884387e8": Phase="Running", Reason="", readiness=true. Elapsed: 2.00950823s +Mar 7 03:26:11.482: INFO: The phase of Pod pod-configmaps-94a5f753-59f9-46ad-aa01-f166884387e8 is Running (Ready = true) +Mar 7 03:26:11.482: INFO: Pod "pod-configmaps-94a5f753-59f9-46ad-aa01-f166884387e8" satisfied condition "running and ready" +STEP: Updating configmap configmap-test-upd-2cff35d7-def5-4926-bc58-7f8362ec96c6 03/07/23 03:26:11.496 +STEP: waiting to observe update in volume 03/07/23 03:26:11.499 +[AfterEach] [sig-storage] ConfigMap + test/e2e/framework/framework.go:187 +Mar 7 03:26:13.510: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "configmap-6999" for this suite. 
03/07/23 03:26:13.513 +{"msg":"PASSED [sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance]","completed":126,"skipped":2110,"failed":0} +------------------------------ +• [4.075 seconds] +[sig-storage] ConfigMap +test/e2e/common/storage/framework.go:23 + updates should be reflected in volume [NodeConformance] [Conformance] + test/e2e/common/storage/configmap_volume.go:123 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-storage] ConfigMap + test/e2e/framework/framework.go:186 + STEP: Creating a kubernetes client 03/07/23 03:26:09.443 + Mar 7 03:26:09.443: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 + STEP: Building a namespace api object, basename configmap 03/07/23 03:26:09.444 + STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 03:26:09.457 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 03:26:09.459 + [It] updates should be reflected in volume [NodeConformance] [Conformance] + test/e2e/common/storage/configmap_volume.go:123 + STEP: Creating configMap with name configmap-test-upd-2cff35d7-def5-4926-bc58-7f8362ec96c6 03/07/23 03:26:09.464 + STEP: Creating the pod 03/07/23 03:26:09.467 + Mar 7 03:26:09.473: INFO: Waiting up to 5m0s for pod "pod-configmaps-94a5f753-59f9-46ad-aa01-f166884387e8" in namespace "configmap-6999" to be "running and ready" + Mar 7 03:26:09.479: INFO: Pod "pod-configmaps-94a5f753-59f9-46ad-aa01-f166884387e8": Phase="Pending", Reason="", readiness=false. Elapsed: 6.541632ms + Mar 7 03:26:09.479: INFO: The phase of Pod pod-configmaps-94a5f753-59f9-46ad-aa01-f166884387e8 is Pending, waiting for it to be Running (with Ready = true) + Mar 7 03:26:11.482: INFO: Pod "pod-configmaps-94a5f753-59f9-46ad-aa01-f166884387e8": Phase="Running", Reason="", readiness=true. Elapsed: 2.00950823s + Mar 7 03:26:11.482: INFO: The phase of Pod pod-configmaps-94a5f753-59f9-46ad-aa01-f166884387e8 is Running (Ready = true) + Mar 7 03:26:11.482: INFO: Pod "pod-configmaps-94a5f753-59f9-46ad-aa01-f166884387e8" satisfied condition "running and ready" + STEP: Updating configmap configmap-test-upd-2cff35d7-def5-4926-bc58-7f8362ec96c6 03/07/23 03:26:11.496 + STEP: waiting to observe update in volume 03/07/23 03:26:11.499 + [AfterEach] [sig-storage] ConfigMap + test/e2e/framework/framework.go:187 + Mar 7 03:26:13.510: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + STEP: Destroying namespace "configmap-6999" for this suite. 
03/07/23 03:26:13.513 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + works for CRD preserving unknown fields at the schema root [Conformance] + test/e2e/apimachinery/crd_publish_openapi.go:193 +[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + test/e2e/framework/framework.go:186 +STEP: Creating a kubernetes client 03/07/23 03:26:13.52 +Mar 7 03:26:13.520: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 +STEP: Building a namespace api object, basename crd-publish-openapi 03/07/23 03:26:13.521 +STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 03:26:13.538 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 03:26:13.541 +[It] works for CRD preserving unknown fields at the schema root [Conformance] + test/e2e/apimachinery/crd_publish_openapi.go:193 +Mar 7 03:26:13.543: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 +STEP: kubectl validation (kubectl create and apply) allows request with any unknown properties 03/07/23 03:26:17.15 +Mar 7 03:26:17.151: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1093879902 --namespace=crd-publish-openapi-7148 --namespace=crd-publish-openapi-7148 create -f -' +Mar 7 03:26:18.037: INFO: stderr: "" +Mar 7 03:26:18.037: INFO: stdout: "e2e-test-crd-publish-openapi-2508-crd.crd-publish-openapi-test-unknown-at-root.example.com/test-cr created\n" +Mar 7 03:26:18.037: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1093879902 --namespace=crd-publish-openapi-7148 --namespace=crd-publish-openapi-7148 delete e2e-test-crd-publish-openapi-2508-crds test-cr' +Mar 7 03:26:18.227: INFO: stderr: "" +Mar 7 03:26:18.228: INFO: stdout: "e2e-test-crd-publish-openapi-2508-crd.crd-publish-openapi-test-unknown-at-root.example.com \"test-cr\" deleted\n" +Mar 7 03:26:18.228: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1093879902 --namespace=crd-publish-openapi-7148 --namespace=crd-publish-openapi-7148 apply -f -' +Mar 7 03:26:18.991: INFO: stderr: "" +Mar 7 03:26:18.991: INFO: stdout: "e2e-test-crd-publish-openapi-2508-crd.crd-publish-openapi-test-unknown-at-root.example.com/test-cr created\n" +Mar 7 03:26:18.991: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1093879902 --namespace=crd-publish-openapi-7148 --namespace=crd-publish-openapi-7148 delete e2e-test-crd-publish-openapi-2508-crds test-cr' +Mar 7 03:26:19.196: INFO: stderr: "" +Mar 7 03:26:19.196: INFO: stdout: "e2e-test-crd-publish-openapi-2508-crd.crd-publish-openapi-test-unknown-at-root.example.com \"test-cr\" deleted\n" +STEP: kubectl explain works to explain CR 03/07/23 03:26:19.196 +Mar 7 03:26:19.196: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1093879902 --namespace=crd-publish-openapi-7148 explain e2e-test-crd-publish-openapi-2508-crds' +Mar 7 03:26:19.518: INFO: stderr: "" +Mar 7 03:26:19.518: INFO: stdout: "KIND: e2e-test-crd-publish-openapi-2508-crd\nVERSION: crd-publish-openapi-test-unknown-at-root.example.com/v1\n\nDESCRIPTION:\n \n" +[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + test/e2e/framework/framework.go:187 +Mar 7 03:26:22.996: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "crd-publish-openapi-7148" for this suite. 
03/07/23 03:26:23.004 +{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields at the schema root [Conformance]","completed":127,"skipped":2161,"failed":0} +------------------------------ +• [SLOW TEST] [9.515 seconds] +[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] +test/e2e/apimachinery/framework.go:23 + works for CRD preserving unknown fields at the schema root [Conformance] + test/e2e/apimachinery/crd_publish_openapi.go:193 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + test/e2e/framework/framework.go:186 + STEP: Creating a kubernetes client 03/07/23 03:26:13.52 + Mar 7 03:26:13.520: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 + STEP: Building a namespace api object, basename crd-publish-openapi 03/07/23 03:26:13.521 + STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 03:26:13.538 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 03:26:13.541 + [It] works for CRD preserving unknown fields at the schema root [Conformance] + test/e2e/apimachinery/crd_publish_openapi.go:193 + Mar 7 03:26:13.543: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 + STEP: kubectl validation (kubectl create and apply) allows request with any unknown properties 03/07/23 03:26:17.15 + Mar 7 03:26:17.151: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1093879902 --namespace=crd-publish-openapi-7148 --namespace=crd-publish-openapi-7148 create -f -' + Mar 7 03:26:18.037: INFO: stderr: "" + Mar 7 03:26:18.037: INFO: stdout: "e2e-test-crd-publish-openapi-2508-crd.crd-publish-openapi-test-unknown-at-root.example.com/test-cr created\n" + Mar 7 03:26:18.037: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1093879902 --namespace=crd-publish-openapi-7148 --namespace=crd-publish-openapi-7148 delete e2e-test-crd-publish-openapi-2508-crds test-cr' + Mar 7 03:26:18.227: INFO: stderr: "" + Mar 7 03:26:18.228: INFO: stdout: "e2e-test-crd-publish-openapi-2508-crd.crd-publish-openapi-test-unknown-at-root.example.com \"test-cr\" deleted\n" + Mar 7 03:26:18.228: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1093879902 --namespace=crd-publish-openapi-7148 --namespace=crd-publish-openapi-7148 apply -f -' + Mar 7 03:26:18.991: INFO: stderr: "" + Mar 7 03:26:18.991: INFO: stdout: "e2e-test-crd-publish-openapi-2508-crd.crd-publish-openapi-test-unknown-at-root.example.com/test-cr created\n" + Mar 7 03:26:18.991: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1093879902 --namespace=crd-publish-openapi-7148 --namespace=crd-publish-openapi-7148 delete e2e-test-crd-publish-openapi-2508-crds test-cr' + Mar 7 03:26:19.196: INFO: stderr: "" + Mar 7 03:26:19.196: INFO: stdout: "e2e-test-crd-publish-openapi-2508-crd.crd-publish-openapi-test-unknown-at-root.example.com \"test-cr\" deleted\n" + STEP: kubectl explain works to explain CR 03/07/23 03:26:19.196 + Mar 7 03:26:19.196: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1093879902 --namespace=crd-publish-openapi-7148 explain e2e-test-crd-publish-openapi-2508-crds' + Mar 7 03:26:19.518: INFO: stderr: "" + Mar 7 03:26:19.518: INFO: stdout: "KIND: e2e-test-crd-publish-openapi-2508-crd\nVERSION: crd-publish-openapi-test-unknown-at-root.example.com/v1\n\nDESCRIPTION:\n \n" + [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI 
[Privileged:ClusterAdmin] + test/e2e/framework/framework.go:187 + Mar 7 03:26:22.996: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + STEP: Destroying namespace "crd-publish-openapi-7148" for this suite. 03/07/23 03:26:23.004 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSSSSSSSS +------------------------------ +[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + removes definition from spec when one version gets changed to not be served [Conformance] + test/e2e/apimachinery/crd_publish_openapi.go:441 +[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + test/e2e/framework/framework.go:186 +STEP: Creating a kubernetes client 03/07/23 03:26:23.035 +Mar 7 03:26:23.035: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 +STEP: Building a namespace api object, basename crd-publish-openapi 03/07/23 03:26:23.036 +STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 03:26:23.047 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 03:26:23.05 +[It] removes definition from spec when one version gets changed to not be served [Conformance] + test/e2e/apimachinery/crd_publish_openapi.go:441 +STEP: set up a multi version CRD 03/07/23 03:26:23.052 +Mar 7 03:26:23.052: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 +STEP: mark a version not serverd 03/07/23 03:26:33.711 +STEP: check the unserved version gets removed 03/07/23 03:26:33.753 +STEP: check the other version is not changed 03/07/23 03:26:38.262 +[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + test/e2e/framework/framework.go:187 +Mar 7 03:26:46.996: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "crd-publish-openapi-2448" for this suite. 
03/07/23 03:26:47.003 +{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] removes definition from spec when one version gets changed to not be served [Conformance]","completed":128,"skipped":2178,"failed":0} +------------------------------ +• [SLOW TEST] [23.972 seconds] +[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] +test/e2e/apimachinery/framework.go:23 + removes definition from spec when one version gets changed to not be served [Conformance] + test/e2e/apimachinery/crd_publish_openapi.go:441 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + test/e2e/framework/framework.go:186 + STEP: Creating a kubernetes client 03/07/23 03:26:23.035 + Mar 7 03:26:23.035: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 + STEP: Building a namespace api object, basename crd-publish-openapi 03/07/23 03:26:23.036 + STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 03:26:23.047 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 03:26:23.05 + [It] removes definition from spec when one version gets changed to not be served [Conformance] + test/e2e/apimachinery/crd_publish_openapi.go:441 + STEP: set up a multi version CRD 03/07/23 03:26:23.052 + Mar 7 03:26:23.052: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 + STEP: mark a version not serverd 03/07/23 03:26:33.711 + STEP: check the unserved version gets removed 03/07/23 03:26:33.753 + STEP: check the other version is not changed 03/07/23 03:26:38.262 + [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + test/e2e/framework/framework.go:187 + Mar 7 03:26:46.996: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + STEP: Destroying namespace "crd-publish-openapi-2448" for this suite. 
03/07/23 03:26:47.003 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSS +------------------------------ +[sig-network] DNS + should provide DNS for pods for Hostname [Conformance] + test/e2e/network/dns.go:248 +[BeforeEach] [sig-network] DNS + test/e2e/framework/framework.go:186 +STEP: Creating a kubernetes client 03/07/23 03:26:47.008 +Mar 7 03:26:47.008: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 +STEP: Building a namespace api object, basename dns 03/07/23 03:26:47.009 +STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 03:26:47.024 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 03:26:47.026 +[It] should provide DNS for pods for Hostname [Conformance] + test/e2e/network/dns.go:248 +STEP: Creating a test headless service 03/07/23 03:26:47.028 +STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-2.dns-test-service-2.dns-2179.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-2.dns-test-service-2.dns-2179.svc.cluster.local;test -n "$$(getent hosts dns-querier-2)" && echo OK > /results/wheezy_hosts@dns-querier-2;sleep 1; done + 03/07/23 03:26:47.032 +STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-2.dns-test-service-2.dns-2179.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-2.dns-test-service-2.dns-2179.svc.cluster.local;test -n "$$(getent hosts dns-querier-2)" && echo OK > /results/jessie_hosts@dns-querier-2;sleep 1; done + 03/07/23 03:26:47.032 +STEP: creating a pod to probe DNS 03/07/23 03:26:47.033 +STEP: submitting the pod to kubernetes 03/07/23 03:26:47.033 +Mar 7 03:26:47.045: INFO: Waiting up to 15m0s for pod "dns-test-25eb8416-b2ac-4460-8435-e0523bfe8454" in namespace "dns-2179" to be "running" +Mar 7 03:26:47.047: INFO: Pod "dns-test-25eb8416-b2ac-4460-8435-e0523bfe8454": Phase="Pending", Reason="", readiness=false. Elapsed: 2.14411ms +Mar 7 03:26:49.051: INFO: Pod "dns-test-25eb8416-b2ac-4460-8435-e0523bfe8454": Phase="Running", Reason="", readiness=true. Elapsed: 2.00565248s +Mar 7 03:26:49.051: INFO: Pod "dns-test-25eb8416-b2ac-4460-8435-e0523bfe8454" satisfied condition "running" +STEP: retrieving the pod 03/07/23 03:26:49.051 +STEP: looking for the results for each expected name from probers 03/07/23 03:26:49.053 +Mar 7 03:26:49.063: INFO: DNS probes using dns-2179/dns-test-25eb8416-b2ac-4460-8435-e0523bfe8454 succeeded + +STEP: deleting the pod 03/07/23 03:26:49.063 +STEP: deleting the test headless service 03/07/23 03:26:49.078 +[AfterEach] [sig-network] DNS + test/e2e/framework/framework.go:187 +Mar 7 03:26:49.098: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "dns-2179" for this suite. 
03/07/23 03:26:49.102 +{"msg":"PASSED [sig-network] DNS should provide DNS for pods for Hostname [Conformance]","completed":129,"skipped":2186,"failed":0} +------------------------------ +• [2.104 seconds] +[sig-network] DNS +test/e2e/network/common/framework.go:23 + should provide DNS for pods for Hostname [Conformance] + test/e2e/network/dns.go:248 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-network] DNS + test/e2e/framework/framework.go:186 + STEP: Creating a kubernetes client 03/07/23 03:26:47.008 + Mar 7 03:26:47.008: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 + STEP: Building a namespace api object, basename dns 03/07/23 03:26:47.009 + STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 03:26:47.024 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 03:26:47.026 + [It] should provide DNS for pods for Hostname [Conformance] + test/e2e/network/dns.go:248 + STEP: Creating a test headless service 03/07/23 03:26:47.028 + STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-2.dns-test-service-2.dns-2179.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-2.dns-test-service-2.dns-2179.svc.cluster.local;test -n "$$(getent hosts dns-querier-2)" && echo OK > /results/wheezy_hosts@dns-querier-2;sleep 1; done + 03/07/23 03:26:47.032 + STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-2.dns-test-service-2.dns-2179.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-2.dns-test-service-2.dns-2179.svc.cluster.local;test -n "$$(getent hosts dns-querier-2)" && echo OK > /results/jessie_hosts@dns-querier-2;sleep 1; done + 03/07/23 03:26:47.032 + STEP: creating a pod to probe DNS 03/07/23 03:26:47.033 + STEP: submitting the pod to kubernetes 03/07/23 03:26:47.033 + Mar 7 03:26:47.045: INFO: Waiting up to 15m0s for pod "dns-test-25eb8416-b2ac-4460-8435-e0523bfe8454" in namespace "dns-2179" to be "running" + Mar 7 03:26:47.047: INFO: Pod "dns-test-25eb8416-b2ac-4460-8435-e0523bfe8454": Phase="Pending", Reason="", readiness=false. Elapsed: 2.14411ms + Mar 7 03:26:49.051: INFO: Pod "dns-test-25eb8416-b2ac-4460-8435-e0523bfe8454": Phase="Running", Reason="", readiness=true. Elapsed: 2.00565248s + Mar 7 03:26:49.051: INFO: Pod "dns-test-25eb8416-b2ac-4460-8435-e0523bfe8454" satisfied condition "running" + STEP: retrieving the pod 03/07/23 03:26:49.051 + STEP: looking for the results for each expected name from probers 03/07/23 03:26:49.053 + Mar 7 03:26:49.063: INFO: DNS probes using dns-2179/dns-test-25eb8416-b2ac-4460-8435-e0523bfe8454 succeeded + + STEP: deleting the pod 03/07/23 03:26:49.063 + STEP: deleting the test headless service 03/07/23 03:26:49.078 + [AfterEach] [sig-network] DNS + test/e2e/framework/framework.go:187 + Mar 7 03:26:49.098: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + STEP: Destroying namespace "dns-2179" for this suite. 
03/07/23 03:26:49.102 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSS +------------------------------ +[sig-cli] Kubectl client Update Demo + should create and stop a replication controller [Conformance] + test/e2e/kubectl/kubectl.go:337 +[BeforeEach] [sig-cli] Kubectl client + test/e2e/framework/framework.go:186 +STEP: Creating a kubernetes client 03/07/23 03:26:49.113 +Mar 7 03:26:49.113: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 +STEP: Building a namespace api object, basename kubectl 03/07/23 03:26:49.114 +STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 03:26:49.143 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 03:26:49.145 +[BeforeEach] [sig-cli] Kubectl client + test/e2e/kubectl/kubectl.go:272 +[BeforeEach] Update Demo + test/e2e/kubectl/kubectl.go:324 +[It] should create and stop a replication controller [Conformance] + test/e2e/kubectl/kubectl.go:337 +STEP: creating a replication controller 03/07/23 03:26:49.147 +Mar 7 03:26:49.147: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1093879902 --namespace=kubectl-9544 create -f -' +Mar 7 03:26:49.905: INFO: stderr: "" +Mar 7 03:26:49.905: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" +STEP: waiting for all containers in name=update-demo pods to come up. 03/07/23 03:26:49.906 +Mar 7 03:26:49.906: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1093879902 --namespace=kubectl-9544 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' +Mar 7 03:26:50.099: INFO: stderr: "" +Mar 7 03:26:50.099: INFO: stdout: "update-demo-nautilus-5lr2n update-demo-nautilus-z7mvr " +Mar 7 03:26:50.099: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1093879902 --namespace=kubectl-9544 get pods update-demo-nautilus-5lr2n -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' +Mar 7 03:26:50.267: INFO: stderr: "" +Mar 7 03:26:50.267: INFO: stdout: "" +Mar 7 03:26:50.267: INFO: update-demo-nautilus-5lr2n is created but not running +Mar 7 03:26:55.268: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1093879902 --namespace=kubectl-9544 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' +Mar 7 03:26:55.430: INFO: stderr: "" +Mar 7 03:26:55.430: INFO: stdout: "update-demo-nautilus-5lr2n update-demo-nautilus-z7mvr " +Mar 7 03:26:55.430: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1093879902 --namespace=kubectl-9544 get pods update-demo-nautilus-5lr2n -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' +Mar 7 03:26:55.587: INFO: stderr: "" +Mar 7 03:26:55.587: INFO: stdout: "true" +Mar 7 03:26:55.587: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1093879902 --namespace=kubectl-9544 get pods update-demo-nautilus-5lr2n -o template --template={{if (exists . 
"spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}}' +Mar 7 03:26:55.743: INFO: stderr: "" +Mar 7 03:26:55.743: INFO: stdout: "registry.k8s.io/e2e-test-images/nautilus:1.5" +Mar 7 03:26:55.743: INFO: validating pod update-demo-nautilus-5lr2n +Mar 7 03:26:55.747: INFO: got data: { + "image": "nautilus.jpg" +} + +Mar 7 03:26:55.747: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . +Mar 7 03:26:55.747: INFO: update-demo-nautilus-5lr2n is verified up and running +Mar 7 03:26:55.747: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1093879902 --namespace=kubectl-9544 get pods update-demo-nautilus-z7mvr -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' +Mar 7 03:26:55.904: INFO: stderr: "" +Mar 7 03:26:55.904: INFO: stdout: "true" +Mar 7 03:26:55.904: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1093879902 --namespace=kubectl-9544 get pods update-demo-nautilus-z7mvr -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}}' +Mar 7 03:26:56.059: INFO: stderr: "" +Mar 7 03:26:56.059: INFO: stdout: "registry.k8s.io/e2e-test-images/nautilus:1.5" +Mar 7 03:26:56.059: INFO: validating pod update-demo-nautilus-z7mvr +Mar 7 03:26:56.063: INFO: got data: { + "image": "nautilus.jpg" +} + +Mar 7 03:26:56.063: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . +Mar 7 03:26:56.063: INFO: update-demo-nautilus-z7mvr is verified up and running +STEP: using delete to clean up resources 03/07/23 03:26:56.063 +Mar 7 03:26:56.063: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1093879902 --namespace=kubectl-9544 delete --grace-period=0 --force -f -' +Mar 7 03:26:56.155: INFO: stderr: "Warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" +Mar 7 03:26:56.155: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n" +Mar 7 03:26:56.155: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1093879902 --namespace=kubectl-9544 get rc,svc -l name=update-demo --no-headers' +Mar 7 03:26:56.405: INFO: stderr: "No resources found in kubectl-9544 namespace.\n" +Mar 7 03:26:56.405: INFO: stdout: "" +Mar 7 03:26:56.405: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1093879902 --namespace=kubectl-9544 get pods -l name=update-demo -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' +Mar 7 03:26:56.581: INFO: stderr: "" +Mar 7 03:26:56.581: INFO: stdout: "" +[AfterEach] [sig-cli] Kubectl client + test/e2e/framework/framework.go:187 +Mar 7 03:26:56.581: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "kubectl-9544" for this suite. 
03/07/23 03:26:56.585 +{"msg":"PASSED [sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance]","completed":130,"skipped":2193,"failed":0} +------------------------------ +• [SLOW TEST] [7.534 seconds] +[sig-cli] Kubectl client +test/e2e/kubectl/framework.go:23 + Update Demo + test/e2e/kubectl/kubectl.go:322 + should create and stop a replication controller [Conformance] + test/e2e/kubectl/kubectl.go:337 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-cli] Kubectl client + test/e2e/framework/framework.go:186 + STEP: Creating a kubernetes client 03/07/23 03:26:49.113 + Mar 7 03:26:49.113: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 + STEP: Building a namespace api object, basename kubectl 03/07/23 03:26:49.114 + STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 03:26:49.143 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 03:26:49.145 + [BeforeEach] [sig-cli] Kubectl client + test/e2e/kubectl/kubectl.go:272 + [BeforeEach] Update Demo + test/e2e/kubectl/kubectl.go:324 + [It] should create and stop a replication controller [Conformance] + test/e2e/kubectl/kubectl.go:337 + STEP: creating a replication controller 03/07/23 03:26:49.147 + Mar 7 03:26:49.147: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1093879902 --namespace=kubectl-9544 create -f -' + Mar 7 03:26:49.905: INFO: stderr: "" + Mar 7 03:26:49.905: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" + STEP: waiting for all containers in name=update-demo pods to come up. 03/07/23 03:26:49.906 + Mar 7 03:26:49.906: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1093879902 --namespace=kubectl-9544 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' + Mar 7 03:26:50.099: INFO: stderr: "" + Mar 7 03:26:50.099: INFO: stdout: "update-demo-nautilus-5lr2n update-demo-nautilus-z7mvr " + Mar 7 03:26:50.099: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1093879902 --namespace=kubectl-9544 get pods update-demo-nautilus-5lr2n -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' + Mar 7 03:26:50.267: INFO: stderr: "" + Mar 7 03:26:50.267: INFO: stdout: "" + Mar 7 03:26:50.267: INFO: update-demo-nautilus-5lr2n is created but not running + Mar 7 03:26:55.268: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1093879902 --namespace=kubectl-9544 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' + Mar 7 03:26:55.430: INFO: stderr: "" + Mar 7 03:26:55.430: INFO: stdout: "update-demo-nautilus-5lr2n update-demo-nautilus-z7mvr " + Mar 7 03:26:55.430: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1093879902 --namespace=kubectl-9544 get pods update-demo-nautilus-5lr2n -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' + Mar 7 03:26:55.587: INFO: stderr: "" + Mar 7 03:26:55.587: INFO: stdout: "true" + Mar 7 03:26:55.587: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1093879902 --namespace=kubectl-9544 get pods update-demo-nautilus-5lr2n -o template --template={{if (exists . 
"spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}}' + Mar 7 03:26:55.743: INFO: stderr: "" + Mar 7 03:26:55.743: INFO: stdout: "registry.k8s.io/e2e-test-images/nautilus:1.5" + Mar 7 03:26:55.743: INFO: validating pod update-demo-nautilus-5lr2n + Mar 7 03:26:55.747: INFO: got data: { + "image": "nautilus.jpg" + } + + Mar 7 03:26:55.747: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . + Mar 7 03:26:55.747: INFO: update-demo-nautilus-5lr2n is verified up and running + Mar 7 03:26:55.747: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1093879902 --namespace=kubectl-9544 get pods update-demo-nautilus-z7mvr -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' + Mar 7 03:26:55.904: INFO: stderr: "" + Mar 7 03:26:55.904: INFO: stdout: "true" + Mar 7 03:26:55.904: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1093879902 --namespace=kubectl-9544 get pods update-demo-nautilus-z7mvr -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}}' + Mar 7 03:26:56.059: INFO: stderr: "" + Mar 7 03:26:56.059: INFO: stdout: "registry.k8s.io/e2e-test-images/nautilus:1.5" + Mar 7 03:26:56.059: INFO: validating pod update-demo-nautilus-z7mvr + Mar 7 03:26:56.063: INFO: got data: { + "image": "nautilus.jpg" + } + + Mar 7 03:26:56.063: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . + Mar 7 03:26:56.063: INFO: update-demo-nautilus-z7mvr is verified up and running + STEP: using delete to clean up resources 03/07/23 03:26:56.063 + Mar 7 03:26:56.063: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1093879902 --namespace=kubectl-9544 delete --grace-period=0 --force -f -' + Mar 7 03:26:56.155: INFO: stderr: "Warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" + Mar 7 03:26:56.155: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n" + Mar 7 03:26:56.155: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1093879902 --namespace=kubectl-9544 get rc,svc -l name=update-demo --no-headers' + Mar 7 03:26:56.405: INFO: stderr: "No resources found in kubectl-9544 namespace.\n" + Mar 7 03:26:56.405: INFO: stdout: "" + Mar 7 03:26:56.405: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1093879902 --namespace=kubectl-9544 get pods -l name=update-demo -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' + Mar 7 03:26:56.581: INFO: stderr: "" + Mar 7 03:26:56.581: INFO: stdout: "" + [AfterEach] [sig-cli] Kubectl client + test/e2e/framework/framework.go:187 + Mar 7 03:26:56.581: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + STEP: Destroying namespace "kubectl-9544" for this suite. 03/07/23 03:26:56.585 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSS +------------------------------ +[sig-api-machinery] ResourceQuota + should be able to update and delete ResourceQuota. 
[Conformance] + test/e2e/apimachinery/resource_quota.go:874 +[BeforeEach] [sig-api-machinery] ResourceQuota + test/e2e/framework/framework.go:186 +STEP: Creating a kubernetes client 03/07/23 03:26:56.648 +Mar 7 03:26:56.649: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 +STEP: Building a namespace api object, basename resourcequota 03/07/23 03:26:56.65 +STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 03:26:56.662 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 03:26:56.665 +[It] should be able to update and delete ResourceQuota. [Conformance] + test/e2e/apimachinery/resource_quota.go:874 +STEP: Creating a ResourceQuota 03/07/23 03:26:56.667 +STEP: Getting a ResourceQuota 03/07/23 03:26:56.67 +STEP: Updating a ResourceQuota 03/07/23 03:26:56.674 +STEP: Verifying a ResourceQuota was modified 03/07/23 03:26:56.678 +STEP: Deleting a ResourceQuota 03/07/23 03:26:56.68 +STEP: Verifying the deleted ResourceQuota 03/07/23 03:26:56.685 +[AfterEach] [sig-api-machinery] ResourceQuota + test/e2e/framework/framework.go:187 +Mar 7 03:26:56.686: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "resourcequota-5201" for this suite. 03/07/23 03:26:56.689 +{"msg":"PASSED [sig-api-machinery] ResourceQuota should be able to update and delete ResourceQuota. [Conformance]","completed":131,"skipped":2201,"failed":0} +------------------------------ +• [0.046 seconds] +[sig-api-machinery] ResourceQuota +test/e2e/apimachinery/framework.go:23 + should be able to update and delete ResourceQuota. [Conformance] + test/e2e/apimachinery/resource_quota.go:874 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-api-machinery] ResourceQuota + test/e2e/framework/framework.go:186 + STEP: Creating a kubernetes client 03/07/23 03:26:56.648 + Mar 7 03:26:56.649: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 + STEP: Building a namespace api object, basename resourcequota 03/07/23 03:26:56.65 + STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 03:26:56.662 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 03:26:56.665 + [It] should be able to update and delete ResourceQuota. [Conformance] + test/e2e/apimachinery/resource_quota.go:874 + STEP: Creating a ResourceQuota 03/07/23 03:26:56.667 + STEP: Getting a ResourceQuota 03/07/23 03:26:56.67 + STEP: Updating a ResourceQuota 03/07/23 03:26:56.674 + STEP: Verifying a ResourceQuota was modified 03/07/23 03:26:56.678 + STEP: Deleting a ResourceQuota 03/07/23 03:26:56.68 + STEP: Verifying the deleted ResourceQuota 03/07/23 03:26:56.685 + [AfterEach] [sig-api-machinery] ResourceQuota + test/e2e/framework/framework.go:187 + Mar 7 03:26:56.686: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + STEP: Destroying namespace "resourcequota-5201" for this suite. 
03/07/23 03:26:56.689 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSS +------------------------------ +[sig-node] Downward API + should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] + test/e2e/common/node/downwardapi.go:165 +[BeforeEach] [sig-node] Downward API + test/e2e/framework/framework.go:186 +STEP: Creating a kubernetes client 03/07/23 03:26:56.694 +Mar 7 03:26:56.695: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 +STEP: Building a namespace api object, basename downward-api 03/07/23 03:26:56.695 +STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 03:26:56.713 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 03:26:56.717 +[It] should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] + test/e2e/common/node/downwardapi.go:165 +STEP: Creating a pod to test downward api env vars 03/07/23 03:26:56.719 +Mar 7 03:26:56.726: INFO: Waiting up to 5m0s for pod "downward-api-aa27ee0b-e79e-4216-b90d-06d5fbdf6c69" in namespace "downward-api-8758" to be "Succeeded or Failed" +Mar 7 03:26:56.728: INFO: Pod "downward-api-aa27ee0b-e79e-4216-b90d-06d5fbdf6c69": Phase="Pending", Reason="", readiness=false. Elapsed: 2.255216ms +Mar 7 03:26:58.732: INFO: Pod "downward-api-aa27ee0b-e79e-4216-b90d-06d5fbdf6c69": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006043827s +Mar 7 03:27:00.732: INFO: Pod "downward-api-aa27ee0b-e79e-4216-b90d-06d5fbdf6c69": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.00543091s +STEP: Saw pod success 03/07/23 03:27:00.732 +Mar 7 03:27:00.732: INFO: Pod "downward-api-aa27ee0b-e79e-4216-b90d-06d5fbdf6c69" satisfied condition "Succeeded or Failed" +Mar 7 03:27:00.734: INFO: Trying to get logs from node node-2 pod downward-api-aa27ee0b-e79e-4216-b90d-06d5fbdf6c69 container dapi-container: +STEP: delete the pod 03/07/23 03:27:00.739 +Mar 7 03:27:00.747: INFO: Waiting for pod downward-api-aa27ee0b-e79e-4216-b90d-06d5fbdf6c69 to disappear +Mar 7 03:27:00.749: INFO: Pod downward-api-aa27ee0b-e79e-4216-b90d-06d5fbdf6c69 no longer exists +[AfterEach] [sig-node] Downward API + test/e2e/framework/framework.go:187 +Mar 7 03:27:00.749: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "downward-api-8758" for this suite. 
03/07/23 03:27:00.752 +{"msg":"PASSED [sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]","completed":132,"skipped":2207,"failed":0} +------------------------------ +• [4.062 seconds] +[sig-node] Downward API +test/e2e/common/node/framework.go:23 + should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] + test/e2e/common/node/downwardapi.go:165 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-node] Downward API + test/e2e/framework/framework.go:186 + STEP: Creating a kubernetes client 03/07/23 03:26:56.694 + Mar 7 03:26:56.695: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 + STEP: Building a namespace api object, basename downward-api 03/07/23 03:26:56.695 + STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 03:26:56.713 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 03:26:56.717 + [It] should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] + test/e2e/common/node/downwardapi.go:165 + STEP: Creating a pod to test downward api env vars 03/07/23 03:26:56.719 + Mar 7 03:26:56.726: INFO: Waiting up to 5m0s for pod "downward-api-aa27ee0b-e79e-4216-b90d-06d5fbdf6c69" in namespace "downward-api-8758" to be "Succeeded or Failed" + Mar 7 03:26:56.728: INFO: Pod "downward-api-aa27ee0b-e79e-4216-b90d-06d5fbdf6c69": Phase="Pending", Reason="", readiness=false. Elapsed: 2.255216ms + Mar 7 03:26:58.732: INFO: Pod "downward-api-aa27ee0b-e79e-4216-b90d-06d5fbdf6c69": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006043827s + Mar 7 03:27:00.732: INFO: Pod "downward-api-aa27ee0b-e79e-4216-b90d-06d5fbdf6c69": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.00543091s + STEP: Saw pod success 03/07/23 03:27:00.732 + Mar 7 03:27:00.732: INFO: Pod "downward-api-aa27ee0b-e79e-4216-b90d-06d5fbdf6c69" satisfied condition "Succeeded or Failed" + Mar 7 03:27:00.734: INFO: Trying to get logs from node node-2 pod downward-api-aa27ee0b-e79e-4216-b90d-06d5fbdf6c69 container dapi-container: + STEP: delete the pod 03/07/23 03:27:00.739 + Mar 7 03:27:00.747: INFO: Waiting for pod downward-api-aa27ee0b-e79e-4216-b90d-06d5fbdf6c69 to disappear + Mar 7 03:27:00.749: INFO: Pod downward-api-aa27ee0b-e79e-4216-b90d-06d5fbdf6c69 no longer exists + [AfterEach] [sig-node] Downward API + test/e2e/framework/framework.go:187 + Mar 7 03:27:00.749: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + STEP: Destroying namespace "downward-api-8758" for this suite. 
03/07/23 03:27:00.752 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-storage] Secrets + optional updates should be reflected in volume [NodeConformance] [Conformance] + test/e2e/common/storage/secrets_volume.go:204 +[BeforeEach] [sig-storage] Secrets + test/e2e/framework/framework.go:186 +STEP: Creating a kubernetes client 03/07/23 03:27:00.759 +Mar 7 03:27:00.759: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 +STEP: Building a namespace api object, basename secrets 03/07/23 03:27:00.76 +STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 03:27:00.775 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 03:27:00.777 +[It] optional updates should be reflected in volume [NodeConformance] [Conformance] + test/e2e/common/storage/secrets_volume.go:204 +STEP: Creating secret with name s-test-opt-del-97c944df-fcd8-4270-bf39-b14f61ae8367 03/07/23 03:27:00.782 +STEP: Creating secret with name s-test-opt-upd-cb70ebb3-1fce-4579-8d61-0ad005785ca1 03/07/23 03:27:00.786 +STEP: Creating the pod 03/07/23 03:27:00.789 +Mar 7 03:27:00.809: INFO: Waiting up to 5m0s for pod "pod-secrets-e3f7535a-a98f-42d0-ae9c-9081eb69ffa9" in namespace "secrets-5460" to be "running and ready" +Mar 7 03:27:00.812: INFO: Pod "pod-secrets-e3f7535a-a98f-42d0-ae9c-9081eb69ffa9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.976207ms +Mar 7 03:27:00.812: INFO: The phase of Pod pod-secrets-e3f7535a-a98f-42d0-ae9c-9081eb69ffa9 is Pending, waiting for it to be Running (with Ready = true) +Mar 7 03:27:02.816: INFO: Pod "pod-secrets-e3f7535a-a98f-42d0-ae9c-9081eb69ffa9": Phase="Running", Reason="", readiness=true. Elapsed: 2.007226181s +Mar 7 03:27:02.816: INFO: The phase of Pod pod-secrets-e3f7535a-a98f-42d0-ae9c-9081eb69ffa9 is Running (Ready = true) +Mar 7 03:27:02.816: INFO: Pod "pod-secrets-e3f7535a-a98f-42d0-ae9c-9081eb69ffa9" satisfied condition "running and ready" +STEP: Deleting secret s-test-opt-del-97c944df-fcd8-4270-bf39-b14f61ae8367 03/07/23 03:27:02.833 +STEP: Updating secret s-test-opt-upd-cb70ebb3-1fce-4579-8d61-0ad005785ca1 03/07/23 03:27:02.837 +STEP: Creating secret with name s-test-opt-create-ceb68c56-2bcd-461f-9b0a-302f0e9489aa 03/07/23 03:27:02.841 +STEP: waiting to observe update in volume 03/07/23 03:27:02.845 +[AfterEach] [sig-storage] Secrets + test/e2e/framework/framework.go:187 +Mar 7 03:27:06.872: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "secrets-5460" for this suite. 
03/07/23 03:27:06.875 +{"msg":"PASSED [sig-storage] Secrets optional updates should be reflected in volume [NodeConformance] [Conformance]","completed":133,"skipped":2266,"failed":0} +------------------------------ +• [SLOW TEST] [6.122 seconds] +[sig-storage] Secrets +test/e2e/common/storage/framework.go:23 + optional updates should be reflected in volume [NodeConformance] [Conformance] + test/e2e/common/storage/secrets_volume.go:204 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-storage] Secrets + test/e2e/framework/framework.go:186 + STEP: Creating a kubernetes client 03/07/23 03:27:00.759 + Mar 7 03:27:00.759: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 + STEP: Building a namespace api object, basename secrets 03/07/23 03:27:00.76 + STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 03:27:00.775 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 03:27:00.777 + [It] optional updates should be reflected in volume [NodeConformance] [Conformance] + test/e2e/common/storage/secrets_volume.go:204 + STEP: Creating secret with name s-test-opt-del-97c944df-fcd8-4270-bf39-b14f61ae8367 03/07/23 03:27:00.782 + STEP: Creating secret with name s-test-opt-upd-cb70ebb3-1fce-4579-8d61-0ad005785ca1 03/07/23 03:27:00.786 + STEP: Creating the pod 03/07/23 03:27:00.789 + Mar 7 03:27:00.809: INFO: Waiting up to 5m0s for pod "pod-secrets-e3f7535a-a98f-42d0-ae9c-9081eb69ffa9" in namespace "secrets-5460" to be "running and ready" + Mar 7 03:27:00.812: INFO: Pod "pod-secrets-e3f7535a-a98f-42d0-ae9c-9081eb69ffa9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.976207ms + Mar 7 03:27:00.812: INFO: The phase of Pod pod-secrets-e3f7535a-a98f-42d0-ae9c-9081eb69ffa9 is Pending, waiting for it to be Running (with Ready = true) + Mar 7 03:27:02.816: INFO: Pod "pod-secrets-e3f7535a-a98f-42d0-ae9c-9081eb69ffa9": Phase="Running", Reason="", readiness=true. Elapsed: 2.007226181s + Mar 7 03:27:02.816: INFO: The phase of Pod pod-secrets-e3f7535a-a98f-42d0-ae9c-9081eb69ffa9 is Running (Ready = true) + Mar 7 03:27:02.816: INFO: Pod "pod-secrets-e3f7535a-a98f-42d0-ae9c-9081eb69ffa9" satisfied condition "running and ready" + STEP: Deleting secret s-test-opt-del-97c944df-fcd8-4270-bf39-b14f61ae8367 03/07/23 03:27:02.833 + STEP: Updating secret s-test-opt-upd-cb70ebb3-1fce-4579-8d61-0ad005785ca1 03/07/23 03:27:02.837 + STEP: Creating secret with name s-test-opt-create-ceb68c56-2bcd-461f-9b0a-302f0e9489aa 03/07/23 03:27:02.841 + STEP: waiting to observe update in volume 03/07/23 03:27:02.845 + [AfterEach] [sig-storage] Secrets + test/e2e/framework/framework.go:187 + Mar 7 03:27:06.872: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + STEP: Destroying namespace "secrets-5460" for this suite. 
03/07/23 03:27:06.875 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSSSS +------------------------------ +[sig-node] ConfigMap + should run through a ConfigMap lifecycle [Conformance] + test/e2e/common/node/configmap.go:168 +[BeforeEach] [sig-node] ConfigMap + test/e2e/framework/framework.go:186 +STEP: Creating a kubernetes client 03/07/23 03:27:06.881 +Mar 7 03:27:06.881: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 +STEP: Building a namespace api object, basename configmap 03/07/23 03:27:06.882 +STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 03:27:06.893 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 03:27:06.895 +[It] should run through a ConfigMap lifecycle [Conformance] + test/e2e/common/node/configmap.go:168 +STEP: creating a ConfigMap 03/07/23 03:27:06.896 +STEP: fetching the ConfigMap 03/07/23 03:27:06.899 +STEP: patching the ConfigMap 03/07/23 03:27:06.903 +STEP: listing all ConfigMaps in all namespaces with a label selector 03/07/23 03:27:06.907 +STEP: deleting the ConfigMap by collection with a label selector 03/07/23 03:27:06.918 +STEP: listing all ConfigMaps in test namespace 03/07/23 03:27:06.926 +[AfterEach] [sig-node] ConfigMap + test/e2e/framework/framework.go:187 +Mar 7 03:27:06.927: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "configmap-9499" for this suite. 03/07/23 03:27:06.93 +{"msg":"PASSED [sig-node] ConfigMap should run through a ConfigMap lifecycle [Conformance]","completed":134,"skipped":2279,"failed":0} +------------------------------ +• [0.053 seconds] +[sig-node] ConfigMap +test/e2e/common/node/framework.go:23 + should run through a ConfigMap lifecycle [Conformance] + test/e2e/common/node/configmap.go:168 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-node] ConfigMap + test/e2e/framework/framework.go:186 + STEP: Creating a kubernetes client 03/07/23 03:27:06.881 + Mar 7 03:27:06.881: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 + STEP: Building a namespace api object, basename configmap 03/07/23 03:27:06.882 + STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 03:27:06.893 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 03:27:06.895 + [It] should run through a ConfigMap lifecycle [Conformance] + test/e2e/common/node/configmap.go:168 + STEP: creating a ConfigMap 03/07/23 03:27:06.896 + STEP: fetching the ConfigMap 03/07/23 03:27:06.899 + STEP: patching the ConfigMap 03/07/23 03:27:06.903 + STEP: listing all ConfigMaps in all namespaces with a label selector 03/07/23 03:27:06.907 + STEP: deleting the ConfigMap by collection with a label selector 03/07/23 03:27:06.918 + STEP: listing all ConfigMaps in test namespace 03/07/23 03:27:06.926 + [AfterEach] [sig-node] ConfigMap + test/e2e/framework/framework.go:187 + Mar 7 03:27:06.927: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + STEP: Destroying namespace "configmap-9499" for this suite. 
03/07/23 03:27:06.93 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSS +------------------------------ +[sig-storage] EmptyDir volumes + should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/common/storage/empty_dir.go:216 +[BeforeEach] [sig-storage] EmptyDir volumes + test/e2e/framework/framework.go:186 +STEP: Creating a kubernetes client 03/07/23 03:27:06.934 +Mar 7 03:27:06.934: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 +STEP: Building a namespace api object, basename emptydir 03/07/23 03:27:06.935 +STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 03:27:06.948 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 03:27:06.95 +[It] should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/common/storage/empty_dir.go:216 +STEP: Creating a pod to test emptydir 0777 on node default medium 03/07/23 03:27:06.952 +Mar 7 03:27:06.958: INFO: Waiting up to 5m0s for pod "pod-a1fc468e-6261-45f8-a9e7-74a50f527057" in namespace "emptydir-6242" to be "Succeeded or Failed" +Mar 7 03:27:06.960: INFO: Pod "pod-a1fc468e-6261-45f8-a9e7-74a50f527057": Phase="Pending", Reason="", readiness=false. Elapsed: 2.40605ms +Mar 7 03:27:08.964: INFO: Pod "pod-a1fc468e-6261-45f8-a9e7-74a50f527057": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005793407s +Mar 7 03:27:10.964: INFO: Pod "pod-a1fc468e-6261-45f8-a9e7-74a50f527057": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.006056763s +STEP: Saw pod success 03/07/23 03:27:10.964 +Mar 7 03:27:10.964: INFO: Pod "pod-a1fc468e-6261-45f8-a9e7-74a50f527057" satisfied condition "Succeeded or Failed" +Mar 7 03:27:10.967: INFO: Trying to get logs from node node-2 pod pod-a1fc468e-6261-45f8-a9e7-74a50f527057 container test-container: +STEP: delete the pod 03/07/23 03:27:10.971 +Mar 7 03:27:10.987: INFO: Waiting for pod pod-a1fc468e-6261-45f8-a9e7-74a50f527057 to disappear +Mar 7 03:27:10.989: INFO: Pod pod-a1fc468e-6261-45f8-a9e7-74a50f527057 no longer exists +[AfterEach] [sig-storage] EmptyDir volumes + test/e2e/framework/framework.go:187 +Mar 7 03:27:10.989: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "emptydir-6242" for this suite. 
03/07/23 03:27:10.992 +{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","completed":135,"skipped":2287,"failed":0} +------------------------------ +• [4.062 seconds] +[sig-storage] EmptyDir volumes +test/e2e/common/storage/framework.go:23 + should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/common/storage/empty_dir.go:216 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-storage] EmptyDir volumes + test/e2e/framework/framework.go:186 + STEP: Creating a kubernetes client 03/07/23 03:27:06.934 + Mar 7 03:27:06.934: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 + STEP: Building a namespace api object, basename emptydir 03/07/23 03:27:06.935 + STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 03:27:06.948 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 03:27:06.95 + [It] should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/common/storage/empty_dir.go:216 + STEP: Creating a pod to test emptydir 0777 on node default medium 03/07/23 03:27:06.952 + Mar 7 03:27:06.958: INFO: Waiting up to 5m0s for pod "pod-a1fc468e-6261-45f8-a9e7-74a50f527057" in namespace "emptydir-6242" to be "Succeeded or Failed" + Mar 7 03:27:06.960: INFO: Pod "pod-a1fc468e-6261-45f8-a9e7-74a50f527057": Phase="Pending", Reason="", readiness=false. Elapsed: 2.40605ms + Mar 7 03:27:08.964: INFO: Pod "pod-a1fc468e-6261-45f8-a9e7-74a50f527057": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005793407s + Mar 7 03:27:10.964: INFO: Pod "pod-a1fc468e-6261-45f8-a9e7-74a50f527057": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.006056763s + STEP: Saw pod success 03/07/23 03:27:10.964 + Mar 7 03:27:10.964: INFO: Pod "pod-a1fc468e-6261-45f8-a9e7-74a50f527057" satisfied condition "Succeeded or Failed" + Mar 7 03:27:10.967: INFO: Trying to get logs from node node-2 pod pod-a1fc468e-6261-45f8-a9e7-74a50f527057 container test-container: + STEP: delete the pod 03/07/23 03:27:10.971 + Mar 7 03:27:10.987: INFO: Waiting for pod pod-a1fc468e-6261-45f8-a9e7-74a50f527057 to disappear + Mar 7 03:27:10.989: INFO: Pod pod-a1fc468e-6261-45f8-a9e7-74a50f527057 no longer exists + [AfterEach] [sig-storage] EmptyDir volumes + test/e2e/framework/framework.go:187 + Mar 7 03:27:10.989: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + STEP: Destroying namespace "emptydir-6242" for this suite. 
03/07/23 03:27:10.992 + << End Captured GinkgoWriter Output +------------------------------ +S +------------------------------ +[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + works for CRD with validation schema [Conformance] + test/e2e/apimachinery/crd_publish_openapi.go:68 +[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + test/e2e/framework/framework.go:186 +STEP: Creating a kubernetes client 03/07/23 03:27:10.997 +Mar 7 03:27:10.997: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 +STEP: Building a namespace api object, basename crd-publish-openapi 03/07/23 03:27:10.997 +STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 03:27:11.009 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 03:27:11.012 +[It] works for CRD with validation schema [Conformance] + test/e2e/apimachinery/crd_publish_openapi.go:68 +Mar 7 03:27:11.013: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 +STEP: kubectl validation (kubectl create and apply) allows request with known and required properties 03/07/23 03:27:16.139 +Mar 7 03:27:16.139: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1093879902 --namespace=crd-publish-openapi-7209 --namespace=crd-publish-openapi-7209 create -f -' +Mar 7 03:27:17.083: INFO: stderr: "" +Mar 7 03:27:17.083: INFO: stdout: "e2e-test-crd-publish-openapi-8746-crd.crd-publish-openapi-test-foo.example.com/test-foo created\n" +Mar 7 03:27:17.083: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1093879902 --namespace=crd-publish-openapi-7209 --namespace=crd-publish-openapi-7209 delete e2e-test-crd-publish-openapi-8746-crds test-foo' +Mar 7 03:27:17.272: INFO: stderr: "" +Mar 7 03:27:17.272: INFO: stdout: "e2e-test-crd-publish-openapi-8746-crd.crd-publish-openapi-test-foo.example.com \"test-foo\" deleted\n" +Mar 7 03:27:17.272: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1093879902 --namespace=crd-publish-openapi-7209 --namespace=crd-publish-openapi-7209 apply -f -' +Mar 7 03:27:18.001: INFO: stderr: "" +Mar 7 03:27:18.002: INFO: stdout: "e2e-test-crd-publish-openapi-8746-crd.crd-publish-openapi-test-foo.example.com/test-foo created\n" +Mar 7 03:27:18.002: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1093879902 --namespace=crd-publish-openapi-7209 --namespace=crd-publish-openapi-7209 delete e2e-test-crd-publish-openapi-8746-crds test-foo' +Mar 7 03:27:18.164: INFO: stderr: "" +Mar 7 03:27:18.164: INFO: stdout: "e2e-test-crd-publish-openapi-8746-crd.crd-publish-openapi-test-foo.example.com \"test-foo\" deleted\n" +STEP: kubectl validation (kubectl create and apply) rejects request with value outside defined enum values 03/07/23 03:27:18.164 +Mar 7 03:27:18.165: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1093879902 --namespace=crd-publish-openapi-7209 --namespace=crd-publish-openapi-7209 create -f -' +Mar 7 03:27:18.905: INFO: rc: 1 +STEP: kubectl validation (kubectl create and apply) rejects request with unknown properties when disallowed by the schema 03/07/23 03:27:18.905 +Mar 7 03:27:18.905: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1093879902 --namespace=crd-publish-openapi-7209 --namespace=crd-publish-openapi-7209 create -f -' +Mar 7 03:27:19.128: INFO: rc: 1 +Mar 7 03:27:19.128: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1093879902 --namespace=crd-publish-openapi-7209 
--namespace=crd-publish-openapi-7209 apply -f -' +Mar 7 03:27:19.359: INFO: rc: 1 +STEP: kubectl validation (kubectl create and apply) rejects request without required properties 03/07/23 03:27:19.359 +Mar 7 03:27:19.359: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1093879902 --namespace=crd-publish-openapi-7209 --namespace=crd-publish-openapi-7209 create -f -' +Mar 7 03:27:19.584: INFO: rc: 1 +Mar 7 03:27:19.584: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1093879902 --namespace=crd-publish-openapi-7209 --namespace=crd-publish-openapi-7209 apply -f -' +Mar 7 03:27:19.810: INFO: rc: 1 +STEP: kubectl explain works to explain CR properties 03/07/23 03:27:19.81 +Mar 7 03:27:19.810: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1093879902 --namespace=crd-publish-openapi-7209 explain e2e-test-crd-publish-openapi-8746-crds' +Mar 7 03:27:20.094: INFO: stderr: "" +Mar 7 03:27:20.094: INFO: stdout: "KIND: e2e-test-crd-publish-openapi-8746-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nDESCRIPTION:\n Foo CRD for Testing\n\nFIELDS:\n apiVersion\t\n APIVersion defines the versioned schema of this representation of an\n object. Servers should convert recognized schemas to the latest internal\n value, and may reject unrecognized values. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\n\n kind\t\n Kind is a string value representing the REST resource this object\n represents. Servers may infer this from the endpoint the client submits\n requests to. Cannot be updated. In CamelCase. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\n\n metadata\t\n Standard object's metadata. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n spec\t\n Specification of Foo\n\n status\t\n Status of Foo\n\n" +STEP: kubectl explain works to explain CR properties recursively 03/07/23 03:27:20.094 +Mar 7 03:27:20.095: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1093879902 --namespace=crd-publish-openapi-7209 explain e2e-test-crd-publish-openapi-8746-crds.metadata' +Mar 7 03:27:20.395: INFO: stderr: "" +Mar 7 03:27:20.395: INFO: stdout: "KIND: e2e-test-crd-publish-openapi-8746-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: metadata \n\nDESCRIPTION:\n Standard object's metadata. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n ObjectMeta is metadata that all persisted resources must have, which\n includes all objects users must create.\n\nFIELDS:\n annotations\t\n Annotations is an unstructured key value map stored with a resource that\n may be set by external tools to store and retrieve arbitrary metadata. They\n are not queryable and should be preserved when modifying objects. More\n info: http://kubernetes.io/docs/user-guide/annotations\n\n creationTimestamp\t\n CreationTimestamp is a timestamp representing the server time when this\n object was created. It is not guaranteed to be set in happens-before order\n across separate operations. Clients may not set this value. It is\n represented in RFC3339 form and is in UTC.\n\n Populated by the system. Read-only. Null for lists. 
More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n deletionGracePeriodSeconds\t\n Number of seconds allowed for this object to gracefully terminate before it\n will be removed from the system. Only set when deletionTimestamp is also\n set. May only be shortened. Read-only.\n\n deletionTimestamp\t\n DeletionTimestamp is RFC 3339 date and time at which this resource will be\n deleted. This field is set by the server when a graceful deletion is\n requested by the user, and is not directly settable by a client. The\n resource is expected to be deleted (no longer visible from resource lists,\n and not reachable by name) after the time in this field, once the\n finalizers list is empty. As long as the finalizers list contains items,\n deletion is blocked. Once the deletionTimestamp is set, this value may not\n be unset or be set further into the future, although it may be shortened or\n the resource may be deleted prior to this time. For example, a user may\n request that a pod is deleted in 30 seconds. The Kubelet will react by\n sending a graceful termination signal to the containers in the pod. After\n that 30 seconds, the Kubelet will send a hard termination signal (SIGKILL)\n to the container and after cleanup, remove the pod from the API. In the\n presence of network partitions, this object may still exist after this\n timestamp, until an administrator or automated process can determine the\n resource is fully terminated. If not set, graceful deletion of the object\n has not been requested.\n\n Populated by the system when a graceful deletion is requested. Read-only.\n More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n finalizers\t<[]string>\n Must be empty before the object is deleted from the registry. Each entry is\n an identifier for the responsible component that will remove the entry from\n the list. If the deletionTimestamp of the object is non-nil, entries in\n this list can only be removed. Finalizers may be processed and removed in\n any order. Order is NOT enforced because it introduces significant risk of\n stuck finalizers. finalizers is a shared field, any actor with permission\n can reorder it. If the finalizer list is processed in order, then this can\n lead to a situation in which the component responsible for the first\n finalizer in the list is waiting for a signal (field value, external\n system, or other) produced by a component responsible for a finalizer later\n in the list, resulting in a deadlock. Without enforced ordering finalizers\n are free to order amongst themselves and are not vulnerable to ordering\n changes in the list.\n\n generateName\t\n GenerateName is an optional prefix, used by the server, to generate a\n unique name ONLY IF the Name field has not been provided. If this field is\n used, the name returned to the client will be different than the name\n passed. This value will also be combined with a unique suffix. The provided\n value has the same validation rules as the Name field, and may be truncated\n by the length of the suffix required to make the value unique on the\n server.\n\n If this field is specified and the generated name exists, the server will\n return a 409.\n\n Applied only if Name is not specified. 
More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#idempotency\n\n generation\t\n A sequence number representing a specific generation of the desired state.\n Populated by the system. Read-only.\n\n labels\t\n Map of string keys and values that can be used to organize and categorize\n (scope and select) objects. May match selectors of replication controllers\n and services. More info: http://kubernetes.io/docs/user-guide/labels\n\n managedFields\t<[]Object>\n ManagedFields maps workflow-id and version to the set of fields that are\n managed by that workflow. This is mostly for internal housekeeping, and\n users typically shouldn't need to set or understand this field. A workflow\n can be the user's name, a controller's name, or the name of a specific\n apply path like \"ci-cd\". The set of fields is always in the version that\n the workflow used when modifying the object.\n\n name\t\n Name must be unique within a namespace. Is required when creating\n resources, although some resources may allow a client to request the\n generation of an appropriate name automatically. Name is primarily intended\n for creation idempotence and configuration definition. Cannot be updated.\n More info: http://kubernetes.io/docs/user-guide/identifiers#names\n\n namespace\t\n Namespace defines the space within which each name must be unique. An empty\n namespace is equivalent to the \"default\" namespace, but \"default\" is the\n canonical representation. Not all objects are required to be scoped to a\n namespace - the value of this field for those objects will be empty.\n\n Must be a DNS_LABEL. Cannot be updated. More info:\n http://kubernetes.io/docs/user-guide/namespaces\n\n ownerReferences\t<[]Object>\n List of objects depended by this object. If ALL objects in the list have\n been deleted, this object will be garbage collected. If this object is\n managed by a controller, then an entry in this list will point to this\n controller, with the controller field set to true. There cannot be more\n than one managing controller.\n\n resourceVersion\t\n An opaque value that represents the internal version of this object that\n can be used by clients to determine when objects have changed. May be used\n for optimistic concurrency, change detection, and the watch operation on a\n resource or set of resources. Clients must treat these values as opaque and\n passed unmodified back to the server. They may only be valid for a\n particular resource or set of resources.\n\n Populated by the system. Read-only. Value must be treated as opaque by\n clients and . More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#concurrency-control-and-consistency\n\n selfLink\t\n Deprecated: selfLink is a legacy read-only field that is no longer\n populated by the system.\n\n uid\t\n UID is the unique in time and space value for this object. It is typically\n generated by the server on successful creation of a resource and is not\n allowed to change on PUT operations.\n\n Populated by the system. Read-only. 
More info:\n http://kubernetes.io/docs/user-guide/identifiers#uids\n\n" +Mar 7 03:27:20.395: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1093879902 --namespace=crd-publish-openapi-7209 explain e2e-test-crd-publish-openapi-8746-crds.spec' +Mar 7 03:27:20.700: INFO: stderr: "" +Mar 7 03:27:20.700: INFO: stdout: "KIND: e2e-test-crd-publish-openapi-8746-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: spec \n\nDESCRIPTION:\n Specification of Foo\n\nFIELDS:\n bars\t<[]Object>\n List of Bars and their specs.\n\n" +Mar 7 03:27:20.700: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1093879902 --namespace=crd-publish-openapi-7209 explain e2e-test-crd-publish-openapi-8746-crds.spec.bars' +Mar 7 03:27:20.989: INFO: stderr: "" +Mar 7 03:27:20.989: INFO: stdout: "KIND: e2e-test-crd-publish-openapi-8746-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: bars <[]Object>\n\nDESCRIPTION:\n List of Bars and their specs.\n\nFIELDS:\n age\t\n Age of Bar.\n\n bazs\t<[]string>\n List of Bazs.\n\n feeling\t\n Whether Bar is feeling great.\n\n name\t -required-\n Name of Bar.\n\n" +STEP: kubectl explain works to return error when explain is called on property that doesn't exist 03/07/23 03:27:20.989 +Mar 7 03:27:20.989: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1093879902 --namespace=crd-publish-openapi-7209 explain e2e-test-crd-publish-openapi-8746-crds.spec.bars2' +Mar 7 03:27:21.280: INFO: rc: 1 +[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + test/e2e/framework/framework.go:187 +Mar 7 03:27:24.962: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "crd-publish-openapi-7209" for this suite. 
03/07/23 03:27:24.971 +{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD with validation schema [Conformance]","completed":136,"skipped":2288,"failed":0} +------------------------------ +• [SLOW TEST] [13.979 seconds] +[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] +test/e2e/apimachinery/framework.go:23 + works for CRD with validation schema [Conformance] + test/e2e/apimachinery/crd_publish_openapi.go:68 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + test/e2e/framework/framework.go:186 + STEP: Creating a kubernetes client 03/07/23 03:27:10.997 + Mar 7 03:27:10.997: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 + STEP: Building a namespace api object, basename crd-publish-openapi 03/07/23 03:27:10.997 + STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 03:27:11.009 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 03:27:11.012 + [It] works for CRD with validation schema [Conformance] + test/e2e/apimachinery/crd_publish_openapi.go:68 + Mar 7 03:27:11.013: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 + STEP: kubectl validation (kubectl create and apply) allows request with known and required properties 03/07/23 03:27:16.139 + Mar 7 03:27:16.139: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1093879902 --namespace=crd-publish-openapi-7209 --namespace=crd-publish-openapi-7209 create -f -' + Mar 7 03:27:17.083: INFO: stderr: "" + Mar 7 03:27:17.083: INFO: stdout: "e2e-test-crd-publish-openapi-8746-crd.crd-publish-openapi-test-foo.example.com/test-foo created\n" + Mar 7 03:27:17.083: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1093879902 --namespace=crd-publish-openapi-7209 --namespace=crd-publish-openapi-7209 delete e2e-test-crd-publish-openapi-8746-crds test-foo' + Mar 7 03:27:17.272: INFO: stderr: "" + Mar 7 03:27:17.272: INFO: stdout: "e2e-test-crd-publish-openapi-8746-crd.crd-publish-openapi-test-foo.example.com \"test-foo\" deleted\n" + Mar 7 03:27:17.272: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1093879902 --namespace=crd-publish-openapi-7209 --namespace=crd-publish-openapi-7209 apply -f -' + Mar 7 03:27:18.001: INFO: stderr: "" + Mar 7 03:27:18.002: INFO: stdout: "e2e-test-crd-publish-openapi-8746-crd.crd-publish-openapi-test-foo.example.com/test-foo created\n" + Mar 7 03:27:18.002: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1093879902 --namespace=crd-publish-openapi-7209 --namespace=crd-publish-openapi-7209 delete e2e-test-crd-publish-openapi-8746-crds test-foo' + Mar 7 03:27:18.164: INFO: stderr: "" + Mar 7 03:27:18.164: INFO: stdout: "e2e-test-crd-publish-openapi-8746-crd.crd-publish-openapi-test-foo.example.com \"test-foo\" deleted\n" + STEP: kubectl validation (kubectl create and apply) rejects request with value outside defined enum values 03/07/23 03:27:18.164 + Mar 7 03:27:18.165: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1093879902 --namespace=crd-publish-openapi-7209 --namespace=crd-publish-openapi-7209 create -f -' + Mar 7 03:27:18.905: INFO: rc: 1 + STEP: kubectl validation (kubectl create and apply) rejects request with unknown properties when disallowed by the schema 03/07/23 03:27:18.905 + Mar 7 03:27:18.905: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1093879902 --namespace=crd-publish-openapi-7209 
--namespace=crd-publish-openapi-7209 create -f -' + Mar 7 03:27:19.128: INFO: rc: 1 + Mar 7 03:27:19.128: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1093879902 --namespace=crd-publish-openapi-7209 --namespace=crd-publish-openapi-7209 apply -f -' + Mar 7 03:27:19.359: INFO: rc: 1 + STEP: kubectl validation (kubectl create and apply) rejects request without required properties 03/07/23 03:27:19.359 + Mar 7 03:27:19.359: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1093879902 --namespace=crd-publish-openapi-7209 --namespace=crd-publish-openapi-7209 create -f -' + Mar 7 03:27:19.584: INFO: rc: 1 + Mar 7 03:27:19.584: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1093879902 --namespace=crd-publish-openapi-7209 --namespace=crd-publish-openapi-7209 apply -f -' + Mar 7 03:27:19.810: INFO: rc: 1 + STEP: kubectl explain works to explain CR properties 03/07/23 03:27:19.81 + Mar 7 03:27:19.810: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1093879902 --namespace=crd-publish-openapi-7209 explain e2e-test-crd-publish-openapi-8746-crds' + Mar 7 03:27:20.094: INFO: stderr: "" + Mar 7 03:27:20.094: INFO: stdout: "KIND: e2e-test-crd-publish-openapi-8746-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nDESCRIPTION:\n Foo CRD for Testing\n\nFIELDS:\n apiVersion\t\n APIVersion defines the versioned schema of this representation of an\n object. Servers should convert recognized schemas to the latest internal\n value, and may reject unrecognized values. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\n\n kind\t\n Kind is a string value representing the REST resource this object\n represents. Servers may infer this from the endpoint the client submits\n requests to. Cannot be updated. In CamelCase. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\n\n metadata\t\n Standard object's metadata. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n spec\t\n Specification of Foo\n\n status\t\n Status of Foo\n\n" + STEP: kubectl explain works to explain CR properties recursively 03/07/23 03:27:20.094 + Mar 7 03:27:20.095: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1093879902 --namespace=crd-publish-openapi-7209 explain e2e-test-crd-publish-openapi-8746-crds.metadata' + Mar 7 03:27:20.395: INFO: stderr: "" + Mar 7 03:27:20.395: INFO: stdout: "KIND: e2e-test-crd-publish-openapi-8746-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: metadata \n\nDESCRIPTION:\n Standard object's metadata. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n ObjectMeta is metadata that all persisted resources must have, which\n includes all objects users must create.\n\nFIELDS:\n annotations\t\n Annotations is an unstructured key value map stored with a resource that\n may be set by external tools to store and retrieve arbitrary metadata. They\n are not queryable and should be preserved when modifying objects. More\n info: http://kubernetes.io/docs/user-guide/annotations\n\n creationTimestamp\t\n CreationTimestamp is a timestamp representing the server time when this\n object was created. It is not guaranteed to be set in happens-before order\n across separate operations. Clients may not set this value. 
It is\n represented in RFC3339 form and is in UTC.\n\n Populated by the system. Read-only. Null for lists. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n deletionGracePeriodSeconds\t\n Number of seconds allowed for this object to gracefully terminate before it\n will be removed from the system. Only set when deletionTimestamp is also\n set. May only be shortened. Read-only.\n\n deletionTimestamp\t\n DeletionTimestamp is RFC 3339 date and time at which this resource will be\n deleted. This field is set by the server when a graceful deletion is\n requested by the user, and is not directly settable by a client. The\n resource is expected to be deleted (no longer visible from resource lists,\n and not reachable by name) after the time in this field, once the\n finalizers list is empty. As long as the finalizers list contains items,\n deletion is blocked. Once the deletionTimestamp is set, this value may not\n be unset or be set further into the future, although it may be shortened or\n the resource may be deleted prior to this time. For example, a user may\n request that a pod is deleted in 30 seconds. The Kubelet will react by\n sending a graceful termination signal to the containers in the pod. After\n that 30 seconds, the Kubelet will send a hard termination signal (SIGKILL)\n to the container and after cleanup, remove the pod from the API. In the\n presence of network partitions, this object may still exist after this\n timestamp, until an administrator or automated process can determine the\n resource is fully terminated. If not set, graceful deletion of the object\n has not been requested.\n\n Populated by the system when a graceful deletion is requested. Read-only.\n More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n finalizers\t<[]string>\n Must be empty before the object is deleted from the registry. Each entry is\n an identifier for the responsible component that will remove the entry from\n the list. If the deletionTimestamp of the object is non-nil, entries in\n this list can only be removed. Finalizers may be processed and removed in\n any order. Order is NOT enforced because it introduces significant risk of\n stuck finalizers. finalizers is a shared field, any actor with permission\n can reorder it. If the finalizer list is processed in order, then this can\n lead to a situation in which the component responsible for the first\n finalizer in the list is waiting for a signal (field value, external\n system, or other) produced by a component responsible for a finalizer later\n in the list, resulting in a deadlock. Without enforced ordering finalizers\n are free to order amongst themselves and are not vulnerable to ordering\n changes in the list.\n\n generateName\t\n GenerateName is an optional prefix, used by the server, to generate a\n unique name ONLY IF the Name field has not been provided. If this field is\n used, the name returned to the client will be different than the name\n passed. This value will also be combined with a unique suffix. The provided\n value has the same validation rules as the Name field, and may be truncated\n by the length of the suffix required to make the value unique on the\n server.\n\n If this field is specified and the generated name exists, the server will\n return a 409.\n\n Applied only if Name is not specified. 
More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#idempotency\n\n generation\t\n A sequence number representing a specific generation of the desired state.\n Populated by the system. Read-only.\n\n labels\t\n Map of string keys and values that can be used to organize and categorize\n (scope and select) objects. May match selectors of replication controllers\n and services. More info: http://kubernetes.io/docs/user-guide/labels\n\n managedFields\t<[]Object>\n ManagedFields maps workflow-id and version to the set of fields that are\n managed by that workflow. This is mostly for internal housekeeping, and\n users typically shouldn't need to set or understand this field. A workflow\n can be the user's name, a controller's name, or the name of a specific\n apply path like \"ci-cd\". The set of fields is always in the version that\n the workflow used when modifying the object.\n\n name\t\n Name must be unique within a namespace. Is required when creating\n resources, although some resources may allow a client to request the\n generation of an appropriate name automatically. Name is primarily intended\n for creation idempotence and configuration definition. Cannot be updated.\n More info: http://kubernetes.io/docs/user-guide/identifiers#names\n\n namespace\t\n Namespace defines the space within which each name must be unique. An empty\n namespace is equivalent to the \"default\" namespace, but \"default\" is the\n canonical representation. Not all objects are required to be scoped to a\n namespace - the value of this field for those objects will be empty.\n\n Must be a DNS_LABEL. Cannot be updated. More info:\n http://kubernetes.io/docs/user-guide/namespaces\n\n ownerReferences\t<[]Object>\n List of objects depended by this object. If ALL objects in the list have\n been deleted, this object will be garbage collected. If this object is\n managed by a controller, then an entry in this list will point to this\n controller, with the controller field set to true. There cannot be more\n than one managing controller.\n\n resourceVersion\t\n An opaque value that represents the internal version of this object that\n can be used by clients to determine when objects have changed. May be used\n for optimistic concurrency, change detection, and the watch operation on a\n resource or set of resources. Clients must treat these values as opaque and\n passed unmodified back to the server. They may only be valid for a\n particular resource or set of resources.\n\n Populated by the system. Read-only. Value must be treated as opaque by\n clients and . More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#concurrency-control-and-consistency\n\n selfLink\t\n Deprecated: selfLink is a legacy read-only field that is no longer\n populated by the system.\n\n uid\t\n UID is the unique in time and space value for this object. It is typically\n generated by the server on successful creation of a resource and is not\n allowed to change on PUT operations.\n\n Populated by the system. Read-only. 
More info:\n http://kubernetes.io/docs/user-guide/identifiers#uids\n\n" + Mar 7 03:27:20.395: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1093879902 --namespace=crd-publish-openapi-7209 explain e2e-test-crd-publish-openapi-8746-crds.spec' + Mar 7 03:27:20.700: INFO: stderr: "" + Mar 7 03:27:20.700: INFO: stdout: "KIND: e2e-test-crd-publish-openapi-8746-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: spec \n\nDESCRIPTION:\n Specification of Foo\n\nFIELDS:\n bars\t<[]Object>\n List of Bars and their specs.\n\n" + Mar 7 03:27:20.700: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1093879902 --namespace=crd-publish-openapi-7209 explain e2e-test-crd-publish-openapi-8746-crds.spec.bars' + Mar 7 03:27:20.989: INFO: stderr: "" + Mar 7 03:27:20.989: INFO: stdout: "KIND: e2e-test-crd-publish-openapi-8746-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: bars <[]Object>\n\nDESCRIPTION:\n List of Bars and their specs.\n\nFIELDS:\n age\t\n Age of Bar.\n\n bazs\t<[]string>\n List of Bazs.\n\n feeling\t\n Whether Bar is feeling great.\n\n name\t -required-\n Name of Bar.\n\n" + STEP: kubectl explain works to return error when explain is called on property that doesn't exist 03/07/23 03:27:20.989 + Mar 7 03:27:20.989: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1093879902 --namespace=crd-publish-openapi-7209 explain e2e-test-crd-publish-openapi-8746-crds.spec.bars2' + Mar 7 03:27:21.280: INFO: rc: 1 + [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + test/e2e/framework/framework.go:187 + Mar 7 03:27:24.962: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + STEP: Destroying namespace "crd-publish-openapi-7209" for this suite. 03/07/23 03:27:24.971 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSSSSSS +------------------------------ +[sig-network] Networking Granular Checks: Pods + should function for intra-pod communication: http [NodeConformance] [Conformance] + test/e2e/common/network/networking.go:82 +[BeforeEach] [sig-network] Networking + test/e2e/framework/framework.go:186 +STEP: Creating a kubernetes client 03/07/23 03:27:24.976 +Mar 7 03:27:24.976: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 +STEP: Building a namespace api object, basename pod-network-test 03/07/23 03:27:24.977 +STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 03:27:24.991 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 03:27:24.993 +[It] should function for intra-pod communication: http [NodeConformance] [Conformance] + test/e2e/common/network/networking.go:82 +STEP: Performing setup for networking test in namespace pod-network-test-8379 03/07/23 03:27:24.994 +STEP: creating a selector 03/07/23 03:27:24.994 +STEP: Creating the service pods in kubernetes 03/07/23 03:27:24.995 +Mar 7 03:27:24.995: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable +Mar 7 03:27:25.019: INFO: Waiting up to 5m0s for pod "netserver-0" in namespace "pod-network-test-8379" to be "running and ready" +Mar 7 03:27:25.028: INFO: Pod "netserver-0": Phase="Pending", Reason="", readiness=false. Elapsed: 9.062483ms +Mar 7 03:27:25.028: INFO: The phase of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) +Mar 7 03:27:27.032: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. 
Elapsed: 2.012994811s +Mar 7 03:27:27.032: INFO: The phase of Pod netserver-0 is Running (Ready = false) +Mar 7 03:27:29.032: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 4.01292858s +Mar 7 03:27:29.032: INFO: The phase of Pod netserver-0 is Running (Ready = false) +Mar 7 03:27:31.033: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 6.013387129s +Mar 7 03:27:31.033: INFO: The phase of Pod netserver-0 is Running (Ready = false) +Mar 7 03:27:33.032: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 8.013149351s +Mar 7 03:27:33.032: INFO: The phase of Pod netserver-0 is Running (Ready = false) +Mar 7 03:27:35.032: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 10.013270276s +Mar 7 03:27:35.032: INFO: The phase of Pod netserver-0 is Running (Ready = false) +Mar 7 03:27:37.032: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=true. Elapsed: 12.012825864s +Mar 7 03:27:37.032: INFO: The phase of Pod netserver-0 is Running (Ready = true) +Mar 7 03:27:37.032: INFO: Pod "netserver-0" satisfied condition "running and ready" +Mar 7 03:27:37.035: INFO: Waiting up to 5m0s for pod "netserver-1" in namespace "pod-network-test-8379" to be "running and ready" +Mar 7 03:27:37.037: INFO: Pod "netserver-1": Phase="Running", Reason="", readiness=true. Elapsed: 1.928194ms +Mar 7 03:27:37.037: INFO: The phase of Pod netserver-1 is Running (Ready = true) +Mar 7 03:27:37.037: INFO: Pod "netserver-1" satisfied condition "running and ready" +Mar 7 03:27:37.039: INFO: Waiting up to 5m0s for pod "netserver-2" in namespace "pod-network-test-8379" to be "running and ready" +Mar 7 03:27:37.040: INFO: Pod "netserver-2": Phase="Running", Reason="", readiness=true. Elapsed: 1.854726ms +Mar 7 03:27:37.040: INFO: The phase of Pod netserver-2 is Running (Ready = true) +Mar 7 03:27:37.040: INFO: Pod "netserver-2" satisfied condition "running and ready" +STEP: Creating test pods 03/07/23 03:27:37.042 +Mar 7 03:27:37.046: INFO: Waiting up to 5m0s for pod "test-container-pod" in namespace "pod-network-test-8379" to be "running" +Mar 7 03:27:37.049: INFO: Pod "test-container-pod": Phase="Pending", Reason="", readiness=false. Elapsed: 3.834728ms +Mar 7 03:27:39.052: INFO: Pod "test-container-pod": Phase="Running", Reason="", readiness=true. Elapsed: 2.006657106s +Mar 7 03:27:39.052: INFO: Pod "test-container-pod" satisfied condition "running" +Mar 7 03:27:39.054: INFO: Setting MaxTries for pod polling to 39 for networking test based on endpoint count 3 +Mar 7 03:27:39.054: INFO: Breadth first check of 10.233.132.115 on host 192.168.1.100... 
+Mar 7 03:27:39.056: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.233.247.45:9080/dial?request=hostname&protocol=http&host=10.233.132.115&port=8083&tries=1'] Namespace:pod-network-test-8379 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} +Mar 7 03:27:39.056: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 +Mar 7 03:27:39.057: INFO: ExecWithOptions: Clientset creation +Mar 7 03:27:39.057: INFO: ExecWithOptions: execute(POST https://10.96.0.1:443/api/v1/namespaces/pod-network-test-8379/pods/test-container-pod/exec?command=%2Fbin%2Fsh&command=-c&command=curl+-g+-q+-s+%27http%3A%2F%2F10.233.247.45%3A9080%2Fdial%3Frequest%3Dhostname%26protocol%3Dhttp%26host%3D10.233.132.115%26port%3D8083%26tries%3D1%27&container=webserver&container=webserver&stderr=true&stdout=true) +Mar 7 03:27:39.114: INFO: Waiting for responses: map[] +Mar 7 03:27:39.114: INFO: reached 10.233.132.115 after 0/1 tries +Mar 7 03:27:39.114: INFO: Breadth first check of 10.233.84.148 on host 192.168.1.101... +Mar 7 03:27:39.117: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.233.247.45:9080/dial?request=hostname&protocol=http&host=10.233.84.148&port=8083&tries=1'] Namespace:pod-network-test-8379 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} +Mar 7 03:27:39.117: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 +Mar 7 03:27:39.117: INFO: ExecWithOptions: Clientset creation +Mar 7 03:27:39.117: INFO: ExecWithOptions: execute(POST https://10.96.0.1:443/api/v1/namespaces/pod-network-test-8379/pods/test-container-pod/exec?command=%2Fbin%2Fsh&command=-c&command=curl+-g+-q+-s+%27http%3A%2F%2F10.233.247.45%3A9080%2Fdial%3Frequest%3Dhostname%26protocol%3Dhttp%26host%3D10.233.84.148%26port%3D8083%26tries%3D1%27&container=webserver&container=webserver&stderr=true&stdout=true) +Mar 7 03:27:39.180: INFO: Waiting for responses: map[] +Mar 7 03:27:39.180: INFO: reached 10.233.84.148 after 0/1 tries +Mar 7 03:27:39.180: INFO: Breadth first check of 10.233.247.46 on host 192.168.1.102... +Mar 7 03:27:39.184: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.233.247.45:9080/dial?request=hostname&protocol=http&host=10.233.247.46&port=8083&tries=1'] Namespace:pod-network-test-8379 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} +Mar 7 03:27:39.184: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 +Mar 7 03:27:39.184: INFO: ExecWithOptions: Clientset creation +Mar 7 03:27:39.184: INFO: ExecWithOptions: execute(POST https://10.96.0.1:443/api/v1/namespaces/pod-network-test-8379/pods/test-container-pod/exec?command=%2Fbin%2Fsh&command=-c&command=curl+-g+-q+-s+%27http%3A%2F%2F10.233.247.45%3A9080%2Fdial%3Frequest%3Dhostname%26protocol%3Dhttp%26host%3D10.233.247.46%26port%3D8083%26tries%3D1%27&container=webserver&container=webserver&stderr=true&stdout=true) +Mar 7 03:27:39.244: INFO: Waiting for responses: map[] +Mar 7 03:27:39.244: INFO: reached 10.233.247.46 after 0/1 tries +Mar 7 03:27:39.244: INFO: Going to retry 0 out of 3 pods.... +[AfterEach] [sig-network] Networking + test/e2e/framework/framework.go:187 +Mar 7 03:27:39.245: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "pod-network-test-8379" for this suite. 
03/07/23 03:27:39.252 +{"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","completed":137,"skipped":2303,"failed":0} +------------------------------ +• [SLOW TEST] [14.280 seconds] +[sig-network] Networking +test/e2e/common/network/framework.go:23 + Granular Checks: Pods + test/e2e/common/network/networking.go:32 + should function for intra-pod communication: http [NodeConformance] [Conformance] + test/e2e/common/network/networking.go:82 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-network] Networking + test/e2e/framework/framework.go:186 + STEP: Creating a kubernetes client 03/07/23 03:27:24.976 + Mar 7 03:27:24.976: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 + STEP: Building a namespace api object, basename pod-network-test 03/07/23 03:27:24.977 + STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 03:27:24.991 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 03:27:24.993 + [It] should function for intra-pod communication: http [NodeConformance] [Conformance] + test/e2e/common/network/networking.go:82 + STEP: Performing setup for networking test in namespace pod-network-test-8379 03/07/23 03:27:24.994 + STEP: creating a selector 03/07/23 03:27:24.994 + STEP: Creating the service pods in kubernetes 03/07/23 03:27:24.995 + Mar 7 03:27:24.995: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable + Mar 7 03:27:25.019: INFO: Waiting up to 5m0s for pod "netserver-0" in namespace "pod-network-test-8379" to be "running and ready" + Mar 7 03:27:25.028: INFO: Pod "netserver-0": Phase="Pending", Reason="", readiness=false. Elapsed: 9.062483ms + Mar 7 03:27:25.028: INFO: The phase of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) + Mar 7 03:27:27.032: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 2.012994811s + Mar 7 03:27:27.032: INFO: The phase of Pod netserver-0 is Running (Ready = false) + Mar 7 03:27:29.032: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 4.01292858s + Mar 7 03:27:29.032: INFO: The phase of Pod netserver-0 is Running (Ready = false) + Mar 7 03:27:31.033: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 6.013387129s + Mar 7 03:27:31.033: INFO: The phase of Pod netserver-0 is Running (Ready = false) + Mar 7 03:27:33.032: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 8.013149351s + Mar 7 03:27:33.032: INFO: The phase of Pod netserver-0 is Running (Ready = false) + Mar 7 03:27:35.032: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 10.013270276s + Mar 7 03:27:35.032: INFO: The phase of Pod netserver-0 is Running (Ready = false) + Mar 7 03:27:37.032: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=true. Elapsed: 12.012825864s + Mar 7 03:27:37.032: INFO: The phase of Pod netserver-0 is Running (Ready = true) + Mar 7 03:27:37.032: INFO: Pod "netserver-0" satisfied condition "running and ready" + Mar 7 03:27:37.035: INFO: Waiting up to 5m0s for pod "netserver-1" in namespace "pod-network-test-8379" to be "running and ready" + Mar 7 03:27:37.037: INFO: Pod "netserver-1": Phase="Running", Reason="", readiness=true. 
Elapsed: 1.928194ms + Mar 7 03:27:37.037: INFO: The phase of Pod netserver-1 is Running (Ready = true) + Mar 7 03:27:37.037: INFO: Pod "netserver-1" satisfied condition "running and ready" + Mar 7 03:27:37.039: INFO: Waiting up to 5m0s for pod "netserver-2" in namespace "pod-network-test-8379" to be "running and ready" + Mar 7 03:27:37.040: INFO: Pod "netserver-2": Phase="Running", Reason="", readiness=true. Elapsed: 1.854726ms + Mar 7 03:27:37.040: INFO: The phase of Pod netserver-2 is Running (Ready = true) + Mar 7 03:27:37.040: INFO: Pod "netserver-2" satisfied condition "running and ready" + STEP: Creating test pods 03/07/23 03:27:37.042 + Mar 7 03:27:37.046: INFO: Waiting up to 5m0s for pod "test-container-pod" in namespace "pod-network-test-8379" to be "running" + Mar 7 03:27:37.049: INFO: Pod "test-container-pod": Phase="Pending", Reason="", readiness=false. Elapsed: 3.834728ms + Mar 7 03:27:39.052: INFO: Pod "test-container-pod": Phase="Running", Reason="", readiness=true. Elapsed: 2.006657106s + Mar 7 03:27:39.052: INFO: Pod "test-container-pod" satisfied condition "running" + Mar 7 03:27:39.054: INFO: Setting MaxTries for pod polling to 39 for networking test based on endpoint count 3 + Mar 7 03:27:39.054: INFO: Breadth first check of 10.233.132.115 on host 192.168.1.100... + Mar 7 03:27:39.056: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.233.247.45:9080/dial?request=hostname&protocol=http&host=10.233.132.115&port=8083&tries=1'] Namespace:pod-network-test-8379 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} + Mar 7 03:27:39.056: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 + Mar 7 03:27:39.057: INFO: ExecWithOptions: Clientset creation + Mar 7 03:27:39.057: INFO: ExecWithOptions: execute(POST https://10.96.0.1:443/api/v1/namespaces/pod-network-test-8379/pods/test-container-pod/exec?command=%2Fbin%2Fsh&command=-c&command=curl+-g+-q+-s+%27http%3A%2F%2F10.233.247.45%3A9080%2Fdial%3Frequest%3Dhostname%26protocol%3Dhttp%26host%3D10.233.132.115%26port%3D8083%26tries%3D1%27&container=webserver&container=webserver&stderr=true&stdout=true) + Mar 7 03:27:39.114: INFO: Waiting for responses: map[] + Mar 7 03:27:39.114: INFO: reached 10.233.132.115 after 0/1 tries + Mar 7 03:27:39.114: INFO: Breadth first check of 10.233.84.148 on host 192.168.1.101... + Mar 7 03:27:39.117: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.233.247.45:9080/dial?request=hostname&protocol=http&host=10.233.84.148&port=8083&tries=1'] Namespace:pod-network-test-8379 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} + Mar 7 03:27:39.117: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 + Mar 7 03:27:39.117: INFO: ExecWithOptions: Clientset creation + Mar 7 03:27:39.117: INFO: ExecWithOptions: execute(POST https://10.96.0.1:443/api/v1/namespaces/pod-network-test-8379/pods/test-container-pod/exec?command=%2Fbin%2Fsh&command=-c&command=curl+-g+-q+-s+%27http%3A%2F%2F10.233.247.45%3A9080%2Fdial%3Frequest%3Dhostname%26protocol%3Dhttp%26host%3D10.233.84.148%26port%3D8083%26tries%3D1%27&container=webserver&container=webserver&stderr=true&stdout=true) + Mar 7 03:27:39.180: INFO: Waiting for responses: map[] + Mar 7 03:27:39.180: INFO: reached 10.233.84.148 after 0/1 tries + Mar 7 03:27:39.180: INFO: Breadth first check of 10.233.247.46 on host 192.168.1.102... 
+ Mar 7 03:27:39.184: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.233.247.45:9080/dial?request=hostname&protocol=http&host=10.233.247.46&port=8083&tries=1'] Namespace:pod-network-test-8379 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} + Mar 7 03:27:39.184: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 + Mar 7 03:27:39.184: INFO: ExecWithOptions: Clientset creation + Mar 7 03:27:39.184: INFO: ExecWithOptions: execute(POST https://10.96.0.1:443/api/v1/namespaces/pod-network-test-8379/pods/test-container-pod/exec?command=%2Fbin%2Fsh&command=-c&command=curl+-g+-q+-s+%27http%3A%2F%2F10.233.247.45%3A9080%2Fdial%3Frequest%3Dhostname%26protocol%3Dhttp%26host%3D10.233.247.46%26port%3D8083%26tries%3D1%27&container=webserver&container=webserver&stderr=true&stdout=true) + Mar 7 03:27:39.244: INFO: Waiting for responses: map[] + Mar 7 03:27:39.244: INFO: reached 10.233.247.46 after 0/1 tries + Mar 7 03:27:39.244: INFO: Going to retry 0 out of 3 pods.... + [AfterEach] [sig-network] Networking + test/e2e/framework/framework.go:187 + Mar 7 03:27:39.245: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + STEP: Destroying namespace "pod-network-test-8379" for this suite. 03/07/23 03:27:39.252 + << End Captured GinkgoWriter Output +------------------------------ +SSS +------------------------------ +[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + should be able to deny attaching pod [Conformance] + test/e2e/apimachinery/webhook.go:208 +[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/framework/framework.go:186 +STEP: Creating a kubernetes client 03/07/23 03:27:39.257 +Mar 7 03:27:39.257: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 +STEP: Building a namespace api object, basename webhook 03/07/23 03:27:39.258 +STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 03:27:39.271 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 03:27:39.273 +[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/apimachinery/webhook.go:89 +STEP: Setting up server cert 03/07/23 03:27:39.285 +STEP: Create role binding to let webhook read extension-apiserver-authentication 03/07/23 03:27:39.496 +STEP: Deploying the webhook pod 03/07/23 03:27:39.5 +STEP: Wait for the deployment to be ready 03/07/23 03:27:39.508 +Mar 7 03:27:39.514: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set +STEP: Deploying the webhook service 03/07/23 03:27:41.522 +STEP: Verifying the service has paired with the endpoint 03/07/23 03:27:41.534 +Mar 7 03:27:42.536: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 +[It] should be able to deny attaching pod [Conformance] + test/e2e/apimachinery/webhook.go:208 +STEP: Registering the webhook via the AdmissionRegistration API 03/07/23 03:27:42.539 +STEP: create a pod 03/07/23 03:27:42.55 +Mar 7 03:27:42.555: INFO: Waiting up to 5m0s for pod "to-be-attached-pod" in namespace "webhook-676" to be "running" +Mar 7 03:27:42.557: INFO: Pod "to-be-attached-pod": Phase="Pending", Reason="", readiness=false. Elapsed: 2.398435ms +Mar 7 03:27:44.561: INFO: Pod "to-be-attached-pod": Phase="Running", Reason="", readiness=true. 
Elapsed: 2.005950204s +Mar 7 03:27:44.561: INFO: Pod "to-be-attached-pod" satisfied condition "running" +STEP: 'kubectl attach' the pod, should be denied by the webhook 03/07/23 03:27:44.561 +Mar 7 03:27:44.561: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1093879902 --namespace=webhook-676 attach --namespace=webhook-676 to-be-attached-pod -i -c=container1' +Mar 7 03:27:44.764: INFO: rc: 1 +[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/framework/framework.go:187 +Mar 7 03:27:44.776: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "webhook-676" for this suite. 03/07/23 03:27:44.784 +STEP: Destroying namespace "webhook-676-markers" for this suite. 03/07/23 03:27:44.791 +[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/apimachinery/webhook.go:104 +{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","completed":138,"skipped":2306,"failed":0} +------------------------------ +• [SLOW TEST] [5.603 seconds] +[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] +test/e2e/apimachinery/framework.go:23 + should be able to deny attaching pod [Conformance] + test/e2e/apimachinery/webhook.go:208 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/framework/framework.go:186 + STEP: Creating a kubernetes client 03/07/23 03:27:39.257 + Mar 7 03:27:39.257: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 + STEP: Building a namespace api object, basename webhook 03/07/23 03:27:39.258 + STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 03:27:39.271 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 03:27:39.273 + [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/apimachinery/webhook.go:89 + STEP: Setting up server cert 03/07/23 03:27:39.285 + STEP: Create role binding to let webhook read extension-apiserver-authentication 03/07/23 03:27:39.496 + STEP: Deploying the webhook pod 03/07/23 03:27:39.5 + STEP: Wait for the deployment to be ready 03/07/23 03:27:39.508 + Mar 7 03:27:39.514: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set + STEP: Deploying the webhook service 03/07/23 03:27:41.522 + STEP: Verifying the service has paired with the endpoint 03/07/23 03:27:41.534 + Mar 7 03:27:42.536: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 + [It] should be able to deny attaching pod [Conformance] + test/e2e/apimachinery/webhook.go:208 + STEP: Registering the webhook via the AdmissionRegistration API 03/07/23 03:27:42.539 + STEP: create a pod 03/07/23 03:27:42.55 + Mar 7 03:27:42.555: INFO: Waiting up to 5m0s for pod "to-be-attached-pod" in namespace "webhook-676" to be "running" + Mar 7 03:27:42.557: INFO: Pod "to-be-attached-pod": Phase="Pending", Reason="", readiness=false. Elapsed: 2.398435ms + Mar 7 03:27:44.561: INFO: Pod "to-be-attached-pod": Phase="Running", Reason="", readiness=true. 
Elapsed: 2.005950204s + Mar 7 03:27:44.561: INFO: Pod "to-be-attached-pod" satisfied condition "running" + STEP: 'kubectl attach' the pod, should be denied by the webhook 03/07/23 03:27:44.561 + Mar 7 03:27:44.561: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1093879902 --namespace=webhook-676 attach --namespace=webhook-676 to-be-attached-pod -i -c=container1' + Mar 7 03:27:44.764: INFO: rc: 1 + [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/framework/framework.go:187 + Mar 7 03:27:44.776: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + STEP: Destroying namespace "webhook-676" for this suite. 03/07/23 03:27:44.784 + STEP: Destroying namespace "webhook-676-markers" for this suite. 03/07/23 03:27:44.791 + [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/apimachinery/webhook.go:104 + << End Captured GinkgoWriter Output +------------------------------ +[sig-storage] Projected downwardAPI + should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] + test/e2e/common/storage/projected_downwardapi.go:248 +[BeforeEach] [sig-storage] Projected downwardAPI + test/e2e/framework/framework.go:186 +STEP: Creating a kubernetes client 03/07/23 03:27:44.861 +Mar 7 03:27:44.861: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 +STEP: Building a namespace api object, basename projected 03/07/23 03:27:44.862 +STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 03:27:44.882 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 03:27:44.885 +[BeforeEach] [sig-storage] Projected downwardAPI + test/e2e/common/storage/projected_downwardapi.go:43 +[It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] + test/e2e/common/storage/projected_downwardapi.go:248 +STEP: Creating a pod to test downward API volume plugin 03/07/23 03:27:44.891 +Mar 7 03:27:44.905: INFO: Waiting up to 5m0s for pod "downwardapi-volume-136be7e8-d81e-42ed-9113-cd671b7ab3d1" in namespace "projected-5025" to be "Succeeded or Failed" +Mar 7 03:27:44.910: INFO: Pod "downwardapi-volume-136be7e8-d81e-42ed-9113-cd671b7ab3d1": Phase="Pending", Reason="", readiness=false. Elapsed: 5.692164ms +Mar 7 03:27:46.914: INFO: Pod "downwardapi-volume-136be7e8-d81e-42ed-9113-cd671b7ab3d1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009374305s +Mar 7 03:27:48.915: INFO: Pod "downwardapi-volume-136be7e8-d81e-42ed-9113-cd671b7ab3d1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.009992068s +STEP: Saw pod success 03/07/23 03:27:48.915 +Mar 7 03:27:48.915: INFO: Pod "downwardapi-volume-136be7e8-d81e-42ed-9113-cd671b7ab3d1" satisfied condition "Succeeded or Failed" +Mar 7 03:27:48.917: INFO: Trying to get logs from node node-2 pod downwardapi-volume-136be7e8-d81e-42ed-9113-cd671b7ab3d1 container client-container: +STEP: delete the pod 03/07/23 03:27:48.922 +Mar 7 03:27:48.931: INFO: Waiting for pod downwardapi-volume-136be7e8-d81e-42ed-9113-cd671b7ab3d1 to disappear +Mar 7 03:27:48.933: INFO: Pod downwardapi-volume-136be7e8-d81e-42ed-9113-cd671b7ab3d1 no longer exists +[AfterEach] [sig-storage] Projected downwardAPI + test/e2e/framework/framework.go:187 +Mar 7 03:27:48.933: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "projected-5025" for this suite. 
03/07/23 03:27:48.936 +{"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","completed":139,"skipped":2306,"failed":0} +------------------------------ +• [4.079 seconds] +[sig-storage] Projected downwardAPI +test/e2e/common/storage/framework.go:23 + should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] + test/e2e/common/storage/projected_downwardapi.go:248 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-storage] Projected downwardAPI + test/e2e/framework/framework.go:186 + STEP: Creating a kubernetes client 03/07/23 03:27:44.861 + Mar 7 03:27:44.861: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 + STEP: Building a namespace api object, basename projected 03/07/23 03:27:44.862 + STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 03:27:44.882 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 03:27:44.885 + [BeforeEach] [sig-storage] Projected downwardAPI + test/e2e/common/storage/projected_downwardapi.go:43 + [It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] + test/e2e/common/storage/projected_downwardapi.go:248 + STEP: Creating a pod to test downward API volume plugin 03/07/23 03:27:44.891 + Mar 7 03:27:44.905: INFO: Waiting up to 5m0s for pod "downwardapi-volume-136be7e8-d81e-42ed-9113-cd671b7ab3d1" in namespace "projected-5025" to be "Succeeded or Failed" + Mar 7 03:27:44.910: INFO: Pod "downwardapi-volume-136be7e8-d81e-42ed-9113-cd671b7ab3d1": Phase="Pending", Reason="", readiness=false. Elapsed: 5.692164ms + Mar 7 03:27:46.914: INFO: Pod "downwardapi-volume-136be7e8-d81e-42ed-9113-cd671b7ab3d1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009374305s + Mar 7 03:27:48.915: INFO: Pod "downwardapi-volume-136be7e8-d81e-42ed-9113-cd671b7ab3d1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.009992068s + STEP: Saw pod success 03/07/23 03:27:48.915 + Mar 7 03:27:48.915: INFO: Pod "downwardapi-volume-136be7e8-d81e-42ed-9113-cd671b7ab3d1" satisfied condition "Succeeded or Failed" + Mar 7 03:27:48.917: INFO: Trying to get logs from node node-2 pod downwardapi-volume-136be7e8-d81e-42ed-9113-cd671b7ab3d1 container client-container: + STEP: delete the pod 03/07/23 03:27:48.922 + Mar 7 03:27:48.931: INFO: Waiting for pod downwardapi-volume-136be7e8-d81e-42ed-9113-cd671b7ab3d1 to disappear + Mar 7 03:27:48.933: INFO: Pod downwardapi-volume-136be7e8-d81e-42ed-9113-cd671b7ab3d1 no longer exists + [AfterEach] [sig-storage] Projected downwardAPI + test/e2e/framework/framework.go:187 + Mar 7 03:27:48.933: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + STEP: Destroying namespace "projected-5025" for this suite. 03/07/23 03:27:48.936 + << End Captured GinkgoWriter Output +------------------------------ +SSSS +------------------------------ +[sig-api-machinery] ResourceQuota + should create a ResourceQuota and capture the life of a secret. 
[Conformance] + test/e2e/apimachinery/resource_quota.go:150 +[BeforeEach] [sig-api-machinery] ResourceQuota + test/e2e/framework/framework.go:186 +STEP: Creating a kubernetes client 03/07/23 03:27:48.941 +Mar 7 03:27:48.941: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 +STEP: Building a namespace api object, basename resourcequota 03/07/23 03:27:48.942 +STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 03:27:48.955 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 03:27:48.96 +[It] should create a ResourceQuota and capture the life of a secret. [Conformance] + test/e2e/apimachinery/resource_quota.go:150 +STEP: Discovering how many secrets are in namespace by default 03/07/23 03:27:48.963 +STEP: Counting existing ResourceQuota 03/07/23 03:27:53.967 +STEP: Creating a ResourceQuota 03/07/23 03:27:58.97 +STEP: Ensuring resource quota status is calculated 03/07/23 03:27:58.991 +STEP: Creating a Secret 03/07/23 03:28:00.994 +STEP: Ensuring resource quota status captures secret creation 03/07/23 03:28:01.014 +STEP: Deleting a secret 03/07/23 03:28:03.017 +STEP: Ensuring resource quota status released usage 03/07/23 03:28:03.038 +[AfterEach] [sig-api-machinery] ResourceQuota + test/e2e/framework/framework.go:187 +Mar 7 03:28:05.042: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "resourcequota-720" for this suite. 03/07/23 03:28:05.045 +{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a secret. [Conformance]","completed":140,"skipped":2310,"failed":0} +------------------------------ +• [SLOW TEST] [16.110 seconds] +[sig-api-machinery] ResourceQuota +test/e2e/apimachinery/framework.go:23 + should create a ResourceQuota and capture the life of a secret. [Conformance] + test/e2e/apimachinery/resource_quota.go:150 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-api-machinery] ResourceQuota + test/e2e/framework/framework.go:186 + STEP: Creating a kubernetes client 03/07/23 03:27:48.941 + Mar 7 03:27:48.941: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 + STEP: Building a namespace api object, basename resourcequota 03/07/23 03:27:48.942 + STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 03:27:48.955 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 03:27:48.96 + [It] should create a ResourceQuota and capture the life of a secret. [Conformance] + test/e2e/apimachinery/resource_quota.go:150 + STEP: Discovering how many secrets are in namespace by default 03/07/23 03:27:48.963 + STEP: Counting existing ResourceQuota 03/07/23 03:27:53.967 + STEP: Creating a ResourceQuota 03/07/23 03:27:58.97 + STEP: Ensuring resource quota status is calculated 03/07/23 03:27:58.991 + STEP: Creating a Secret 03/07/23 03:28:00.994 + STEP: Ensuring resource quota status captures secret creation 03/07/23 03:28:01.014 + STEP: Deleting a secret 03/07/23 03:28:03.017 + STEP: Ensuring resource quota status released usage 03/07/23 03:28:03.038 + [AfterEach] [sig-api-machinery] ResourceQuota + test/e2e/framework/framework.go:187 + Mar 7 03:28:05.042: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + STEP: Destroying namespace "resourcequota-720" for this suite. 
03/07/23 03:28:05.045 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-cli] Kubectl client Kubectl label + should update the label on a resource [Conformance] + test/e2e/kubectl/kubectl.go:1507 +[BeforeEach] [sig-cli] Kubectl client + test/e2e/framework/framework.go:186 +STEP: Creating a kubernetes client 03/07/23 03:28:05.055 +Mar 7 03:28:05.055: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 +STEP: Building a namespace api object, basename kubectl 03/07/23 03:28:05.056 +STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 03:28:05.067 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 03:28:05.069 +[BeforeEach] [sig-cli] Kubectl client + test/e2e/kubectl/kubectl.go:272 +[BeforeEach] Kubectl label + test/e2e/kubectl/kubectl.go:1492 +STEP: creating the pod 03/07/23 03:28:05.07 +Mar 7 03:28:05.071: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1093879902 --namespace=kubectl-5970 create -f -' +Mar 7 03:28:05.953: INFO: stderr: "" +Mar 7 03:28:05.953: INFO: stdout: "pod/pause created\n" +Mar 7 03:28:05.953: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [pause] +Mar 7 03:28:05.953: INFO: Waiting up to 5m0s for pod "pause" in namespace "kubectl-5970" to be "running and ready" +Mar 7 03:28:05.956: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 3.171296ms +Mar 7 03:28:05.956: INFO: Error evaluating pod condition running and ready: want pod 'pause' on 'node-2' to be 'Running' but was 'Pending' +Mar 7 03:28:07.959: INFO: Pod "pause": Phase="Running", Reason="", readiness=true. Elapsed: 2.006680659s +Mar 7 03:28:07.959: INFO: Pod "pause" satisfied condition "running and ready" +Mar 7 03:28:07.959: INFO: Wanted all 1 pods to be running and ready. Result: true. 
Pods: [pause] +[It] should update the label on a resource [Conformance] + test/e2e/kubectl/kubectl.go:1507 +STEP: adding the label testing-label with value testing-label-value to a pod 03/07/23 03:28:07.96 +Mar 7 03:28:07.960: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1093879902 --namespace=kubectl-5970 label pods pause testing-label=testing-label-value' +Mar 7 03:28:08.129: INFO: stderr: "" +Mar 7 03:28:08.129: INFO: stdout: "pod/pause labeled\n" +STEP: verifying the pod has the label testing-label with the value testing-label-value 03/07/23 03:28:08.129 +Mar 7 03:28:08.130: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1093879902 --namespace=kubectl-5970 get pod pause -L testing-label' +Mar 7 03:28:08.291: INFO: stderr: "" +Mar 7 03:28:08.291: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 3s testing-label-value\n" +STEP: removing the label testing-label of a pod 03/07/23 03:28:08.291 +Mar 7 03:28:08.291: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1093879902 --namespace=kubectl-5970 label pods pause testing-label-' +Mar 7 03:28:08.480: INFO: stderr: "" +Mar 7 03:28:08.480: INFO: stdout: "pod/pause unlabeled\n" +STEP: verifying the pod doesn't have the label testing-label 03/07/23 03:28:08.48 +Mar 7 03:28:08.480: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1093879902 --namespace=kubectl-5970 get pod pause -L testing-label' +Mar 7 03:28:08.641: INFO: stderr: "" +Mar 7 03:28:08.641: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 3s \n" +[AfterEach] Kubectl label + test/e2e/kubectl/kubectl.go:1498 +STEP: using delete to clean up resources 03/07/23 03:28:08.641 +Mar 7 03:28:08.642: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1093879902 --namespace=kubectl-5970 delete --grace-period=0 --force -f -' +Mar 7 03:28:08.747: INFO: stderr: "Warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" +Mar 7 03:28:08.747: INFO: stdout: "pod \"pause\" force deleted\n" +Mar 7 03:28:08.747: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1093879902 --namespace=kubectl-5970 get rc,svc -l name=pause --no-headers' +Mar 7 03:28:08.996: INFO: stderr: "No resources found in kubectl-5970 namespace.\n" +Mar 7 03:28:08.996: INFO: stdout: "" +Mar 7 03:28:08.996: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1093879902 --namespace=kubectl-5970 get pods -l name=pause -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' +Mar 7 03:28:09.173: INFO: stderr: "" +Mar 7 03:28:09.173: INFO: stdout: "" +[AfterEach] [sig-cli] Kubectl client + test/e2e/framework/framework.go:187 +Mar 7 03:28:09.173: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "kubectl-5970" for this suite. 
03/07/23 03:28:09.178 +{"msg":"PASSED [sig-cli] Kubectl client Kubectl label should update the label on a resource [Conformance]","completed":141,"skipped":2409,"failed":0} +------------------------------ +• [4.129 seconds] +[sig-cli] Kubectl client +test/e2e/kubectl/framework.go:23 + Kubectl label + test/e2e/kubectl/kubectl.go:1490 + should update the label on a resource [Conformance] + test/e2e/kubectl/kubectl.go:1507 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-cli] Kubectl client + test/e2e/framework/framework.go:186 + STEP: Creating a kubernetes client 03/07/23 03:28:05.055 + Mar 7 03:28:05.055: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 + STEP: Building a namespace api object, basename kubectl 03/07/23 03:28:05.056 + STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 03:28:05.067 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 03:28:05.069 + [BeforeEach] [sig-cli] Kubectl client + test/e2e/kubectl/kubectl.go:272 + [BeforeEach] Kubectl label + test/e2e/kubectl/kubectl.go:1492 + STEP: creating the pod 03/07/23 03:28:05.07 + Mar 7 03:28:05.071: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1093879902 --namespace=kubectl-5970 create -f -' + Mar 7 03:28:05.953: INFO: stderr: "" + Mar 7 03:28:05.953: INFO: stdout: "pod/pause created\n" + Mar 7 03:28:05.953: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [pause] + Mar 7 03:28:05.953: INFO: Waiting up to 5m0s for pod "pause" in namespace "kubectl-5970" to be "running and ready" + Mar 7 03:28:05.956: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 3.171296ms + Mar 7 03:28:05.956: INFO: Error evaluating pod condition running and ready: want pod 'pause' on 'node-2' to be 'Running' but was 'Pending' + Mar 7 03:28:07.959: INFO: Pod "pause": Phase="Running", Reason="", readiness=true. Elapsed: 2.006680659s + Mar 7 03:28:07.959: INFO: Pod "pause" satisfied condition "running and ready" + Mar 7 03:28:07.959: INFO: Wanted all 1 pods to be running and ready. Result: true. 
Pods: [pause] + [It] should update the label on a resource [Conformance] + test/e2e/kubectl/kubectl.go:1507 + STEP: adding the label testing-label with value testing-label-value to a pod 03/07/23 03:28:07.96 + Mar 7 03:28:07.960: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1093879902 --namespace=kubectl-5970 label pods pause testing-label=testing-label-value' + Mar 7 03:28:08.129: INFO: stderr: "" + Mar 7 03:28:08.129: INFO: stdout: "pod/pause labeled\n" + STEP: verifying the pod has the label testing-label with the value testing-label-value 03/07/23 03:28:08.129 + Mar 7 03:28:08.130: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1093879902 --namespace=kubectl-5970 get pod pause -L testing-label' + Mar 7 03:28:08.291: INFO: stderr: "" + Mar 7 03:28:08.291: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 3s testing-label-value\n" + STEP: removing the label testing-label of a pod 03/07/23 03:28:08.291 + Mar 7 03:28:08.291: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1093879902 --namespace=kubectl-5970 label pods pause testing-label-' + Mar 7 03:28:08.480: INFO: stderr: "" + Mar 7 03:28:08.480: INFO: stdout: "pod/pause unlabeled\n" + STEP: verifying the pod doesn't have the label testing-label 03/07/23 03:28:08.48 + Mar 7 03:28:08.480: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1093879902 --namespace=kubectl-5970 get pod pause -L testing-label' + Mar 7 03:28:08.641: INFO: stderr: "" + Mar 7 03:28:08.641: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 3s \n" + [AfterEach] Kubectl label + test/e2e/kubectl/kubectl.go:1498 + STEP: using delete to clean up resources 03/07/23 03:28:08.641 + Mar 7 03:28:08.642: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1093879902 --namespace=kubectl-5970 delete --grace-period=0 --force -f -' + Mar 7 03:28:08.747: INFO: stderr: "Warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" + Mar 7 03:28:08.747: INFO: stdout: "pod \"pause\" force deleted\n" + Mar 7 03:28:08.747: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1093879902 --namespace=kubectl-5970 get rc,svc -l name=pause --no-headers' + Mar 7 03:28:08.996: INFO: stderr: "No resources found in kubectl-5970 namespace.\n" + Mar 7 03:28:08.996: INFO: stdout: "" + Mar 7 03:28:08.996: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1093879902 --namespace=kubectl-5970 get pods -l name=pause -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' + Mar 7 03:28:09.173: INFO: stderr: "" + Mar 7 03:28:09.173: INFO: stdout: "" + [AfterEach] [sig-cli] Kubectl client + test/e2e/framework/framework.go:187 + Mar 7 03:28:09.173: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + STEP: Destroying namespace "kubectl-5970" for this suite. 
03/07/23 03:28:09.178 + << End Captured GinkgoWriter Output +------------------------------ +SSSS +------------------------------ +[sig-storage] EmptyDir volumes + should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/common/storage/empty_dir.go:196 +[BeforeEach] [sig-storage] EmptyDir volumes + test/e2e/framework/framework.go:186 +STEP: Creating a kubernetes client 03/07/23 03:28:09.184 +Mar 7 03:28:09.184: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 +STEP: Building a namespace api object, basename emptydir 03/07/23 03:28:09.185 +STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 03:28:09.205 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 03:28:09.207 +[It] should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/common/storage/empty_dir.go:196 +STEP: Creating a pod to test emptydir 0644 on node default medium 03/07/23 03:28:09.209 +Mar 7 03:28:09.215: INFO: Waiting up to 5m0s for pod "pod-bb328703-65e0-4484-9cff-bfb2cf1ebef7" in namespace "emptydir-7005" to be "Succeeded or Failed" +Mar 7 03:28:09.222: INFO: Pod "pod-bb328703-65e0-4484-9cff-bfb2cf1ebef7": Phase="Pending", Reason="", readiness=false. Elapsed: 6.475791ms +Mar 7 03:28:11.226: INFO: Pod "pod-bb328703-65e0-4484-9cff-bfb2cf1ebef7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.010331674s +Mar 7 03:28:13.227: INFO: Pod "pod-bb328703-65e0-4484-9cff-bfb2cf1ebef7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011360964s +STEP: Saw pod success 03/07/23 03:28:13.227 +Mar 7 03:28:13.227: INFO: Pod "pod-bb328703-65e0-4484-9cff-bfb2cf1ebef7" satisfied condition "Succeeded or Failed" +Mar 7 03:28:13.229: INFO: Trying to get logs from node node-2 pod pod-bb328703-65e0-4484-9cff-bfb2cf1ebef7 container test-container: +STEP: delete the pod 03/07/23 03:28:13.235 +Mar 7 03:28:13.245: INFO: Waiting for pod pod-bb328703-65e0-4484-9cff-bfb2cf1ebef7 to disappear +Mar 7 03:28:13.249: INFO: Pod pod-bb328703-65e0-4484-9cff-bfb2cf1ebef7 no longer exists +[AfterEach] [sig-storage] EmptyDir volumes + test/e2e/framework/framework.go:187 +Mar 7 03:28:13.249: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "emptydir-7005" for this suite. 
03/07/23 03:28:13.253 +{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","completed":142,"skipped":2413,"failed":0} +------------------------------ +• [4.072 seconds] +[sig-storage] EmptyDir volumes +test/e2e/common/storage/framework.go:23 + should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/common/storage/empty_dir.go:196 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-storage] EmptyDir volumes + test/e2e/framework/framework.go:186 + STEP: Creating a kubernetes client 03/07/23 03:28:09.184 + Mar 7 03:28:09.184: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 + STEP: Building a namespace api object, basename emptydir 03/07/23 03:28:09.185 + STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 03:28:09.205 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 03:28:09.207 + [It] should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/common/storage/empty_dir.go:196 + STEP: Creating a pod to test emptydir 0644 on node default medium 03/07/23 03:28:09.209 + Mar 7 03:28:09.215: INFO: Waiting up to 5m0s for pod "pod-bb328703-65e0-4484-9cff-bfb2cf1ebef7" in namespace "emptydir-7005" to be "Succeeded or Failed" + Mar 7 03:28:09.222: INFO: Pod "pod-bb328703-65e0-4484-9cff-bfb2cf1ebef7": Phase="Pending", Reason="", readiness=false. Elapsed: 6.475791ms + Mar 7 03:28:11.226: INFO: Pod "pod-bb328703-65e0-4484-9cff-bfb2cf1ebef7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.010331674s + Mar 7 03:28:13.227: INFO: Pod "pod-bb328703-65e0-4484-9cff-bfb2cf1ebef7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011360964s + STEP: Saw pod success 03/07/23 03:28:13.227 + Mar 7 03:28:13.227: INFO: Pod "pod-bb328703-65e0-4484-9cff-bfb2cf1ebef7" satisfied condition "Succeeded or Failed" + Mar 7 03:28:13.229: INFO: Trying to get logs from node node-2 pod pod-bb328703-65e0-4484-9cff-bfb2cf1ebef7 container test-container: + STEP: delete the pod 03/07/23 03:28:13.235 + Mar 7 03:28:13.245: INFO: Waiting for pod pod-bb328703-65e0-4484-9cff-bfb2cf1ebef7 to disappear + Mar 7 03:28:13.249: INFO: Pod pod-bb328703-65e0-4484-9cff-bfb2cf1ebef7 no longer exists + [AfterEach] [sig-storage] EmptyDir volumes + test/e2e/framework/framework.go:187 + Mar 7 03:28:13.249: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + STEP: Destroying namespace "emptydir-7005" for this suite. 
03/07/23 03:28:13.253 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSS +------------------------------ +[sig-node] Probing container + should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] + test/e2e/common/node/container_probe.go:131 +[BeforeEach] [sig-node] Probing container + test/e2e/framework/framework.go:186 +STEP: Creating a kubernetes client 03/07/23 03:28:13.257 +Mar 7 03:28:13.257: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 +STEP: Building a namespace api object, basename container-probe 03/07/23 03:28:13.258 +STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 03:28:13.276 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 03:28:13.284 +[BeforeEach] [sig-node] Probing container + test/e2e/common/node/container_probe.go:59 +[It] should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] + test/e2e/common/node/container_probe.go:131 +STEP: Creating pod busybox-184bab96-4163-4ac7-87c8-c06507e84303 in namespace container-probe-9993 03/07/23 03:28:13.286 +Mar 7 03:28:13.292: INFO: Waiting up to 5m0s for pod "busybox-184bab96-4163-4ac7-87c8-c06507e84303" in namespace "container-probe-9993" to be "not pending" +Mar 7 03:28:13.295: INFO: Pod "busybox-184bab96-4163-4ac7-87c8-c06507e84303": Phase="Pending", Reason="", readiness=false. Elapsed: 2.715333ms +Mar 7 03:28:15.299: INFO: Pod "busybox-184bab96-4163-4ac7-87c8-c06507e84303": Phase="Running", Reason="", readiness=true. Elapsed: 2.00640108s +Mar 7 03:28:15.299: INFO: Pod "busybox-184bab96-4163-4ac7-87c8-c06507e84303" satisfied condition "not pending" +Mar 7 03:28:15.299: INFO: Started pod busybox-184bab96-4163-4ac7-87c8-c06507e84303 in namespace container-probe-9993 +STEP: checking the pod's current state and verifying that restartCount is present 03/07/23 03:28:15.299 +Mar 7 03:28:15.301: INFO: Initial restart count of pod busybox-184bab96-4163-4ac7-87c8-c06507e84303 is 0 +Mar 7 03:29:05.467: INFO: Restart count of pod container-probe-9993/busybox-184bab96-4163-4ac7-87c8-c06507e84303 is now 1 (50.166238626s elapsed) +STEP: deleting the pod 03/07/23 03:29:05.467 +[AfterEach] [sig-node] Probing container + test/e2e/framework/framework.go:187 +Mar 7 03:29:05.535: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "container-probe-9993" for this suite. 
03/07/23 03:29:05.558 +{"msg":"PASSED [sig-node] Probing container should be restarted with a exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]","completed":143,"skipped":2420,"failed":0} +------------------------------ +• [SLOW TEST] [52.319 seconds] +[sig-node] Probing container +test/e2e/common/node/framework.go:23 + should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] + test/e2e/common/node/container_probe.go:131 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-node] Probing container + test/e2e/framework/framework.go:186 + STEP: Creating a kubernetes client 03/07/23 03:28:13.257 + Mar 7 03:28:13.257: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 + STEP: Building a namespace api object, basename container-probe 03/07/23 03:28:13.258 + STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 03:28:13.276 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 03:28:13.284 + [BeforeEach] [sig-node] Probing container + test/e2e/common/node/container_probe.go:59 + [It] should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] + test/e2e/common/node/container_probe.go:131 + STEP: Creating pod busybox-184bab96-4163-4ac7-87c8-c06507e84303 in namespace container-probe-9993 03/07/23 03:28:13.286 + Mar 7 03:28:13.292: INFO: Waiting up to 5m0s for pod "busybox-184bab96-4163-4ac7-87c8-c06507e84303" in namespace "container-probe-9993" to be "not pending" + Mar 7 03:28:13.295: INFO: Pod "busybox-184bab96-4163-4ac7-87c8-c06507e84303": Phase="Pending", Reason="", readiness=false. Elapsed: 2.715333ms + Mar 7 03:28:15.299: INFO: Pod "busybox-184bab96-4163-4ac7-87c8-c06507e84303": Phase="Running", Reason="", readiness=true. Elapsed: 2.00640108s + Mar 7 03:28:15.299: INFO: Pod "busybox-184bab96-4163-4ac7-87c8-c06507e84303" satisfied condition "not pending" + Mar 7 03:28:15.299: INFO: Started pod busybox-184bab96-4163-4ac7-87c8-c06507e84303 in namespace container-probe-9993 + STEP: checking the pod's current state and verifying that restartCount is present 03/07/23 03:28:15.299 + Mar 7 03:28:15.301: INFO: Initial restart count of pod busybox-184bab96-4163-4ac7-87c8-c06507e84303 is 0 + Mar 7 03:29:05.467: INFO: Restart count of pod container-probe-9993/busybox-184bab96-4163-4ac7-87c8-c06507e84303 is now 1 (50.166238626s elapsed) + STEP: deleting the pod 03/07/23 03:29:05.467 + [AfterEach] [sig-node] Probing container + test/e2e/framework/framework.go:187 + Mar 7 03:29:05.535: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + STEP: Destroying namespace "container-probe-9993" for this suite. 
03/07/23 03:29:05.558 + << End Captured GinkgoWriter Output +------------------------------ +SS +------------------------------ +[sig-cli] Kubectl client Kubectl diff + should check if kubectl diff finds a difference for Deployments [Conformance] + test/e2e/kubectl/kubectl.go:929 +[BeforeEach] [sig-cli] Kubectl client + test/e2e/framework/framework.go:186 +STEP: Creating a kubernetes client 03/07/23 03:29:05.576 +Mar 7 03:29:05.576: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 +STEP: Building a namespace api object, basename kubectl 03/07/23 03:29:05.577 +STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 03:29:05.589 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 03:29:05.591 +[BeforeEach] [sig-cli] Kubectl client + test/e2e/kubectl/kubectl.go:272 +[It] should check if kubectl diff finds a difference for Deployments [Conformance] + test/e2e/kubectl/kubectl.go:929 +STEP: create deployment with httpd image 03/07/23 03:29:05.592 +Mar 7 03:29:05.593: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1093879902 --namespace=kubectl-8910 create -f -' +Mar 7 03:29:05.826: INFO: stderr: "" +Mar 7 03:29:05.826: INFO: stdout: "deployment.apps/httpd-deployment created\n" +STEP: verify diff finds difference between live and declared image 03/07/23 03:29:05.826 +Mar 7 03:29:05.826: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1093879902 --namespace=kubectl-8910 diff -f -' +Mar 7 03:29:06.077: INFO: rc: 1 +Mar 7 03:29:06.077: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1093879902 --namespace=kubectl-8910 delete -f -' +Mar 7 03:29:06.170: INFO: stderr: "" +Mar 7 03:29:06.170: INFO: stdout: "deployment.apps \"httpd-deployment\" deleted\n" +[AfterEach] [sig-cli] Kubectl client + test/e2e/framework/framework.go:187 +Mar 7 03:29:06.170: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "kubectl-8910" for this suite. 
03/07/23 03:29:06.175 +{"msg":"PASSED [sig-cli] Kubectl client Kubectl diff should check if kubectl diff finds a difference for Deployments [Conformance]","completed":144,"skipped":2422,"failed":0} +------------------------------ +• [0.605 seconds] +[sig-cli] Kubectl client +test/e2e/kubectl/framework.go:23 + Kubectl diff + test/e2e/kubectl/kubectl.go:923 + should check if kubectl diff finds a difference for Deployments [Conformance] + test/e2e/kubectl/kubectl.go:929 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-cli] Kubectl client + test/e2e/framework/framework.go:186 + STEP: Creating a kubernetes client 03/07/23 03:29:05.576 + Mar 7 03:29:05.576: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 + STEP: Building a namespace api object, basename kubectl 03/07/23 03:29:05.577 + STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 03:29:05.589 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 03:29:05.591 + [BeforeEach] [sig-cli] Kubectl client + test/e2e/kubectl/kubectl.go:272 + [It] should check if kubectl diff finds a difference for Deployments [Conformance] + test/e2e/kubectl/kubectl.go:929 + STEP: create deployment with httpd image 03/07/23 03:29:05.592 + Mar 7 03:29:05.593: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1093879902 --namespace=kubectl-8910 create -f -' + Mar 7 03:29:05.826: INFO: stderr: "" + Mar 7 03:29:05.826: INFO: stdout: "deployment.apps/httpd-deployment created\n" + STEP: verify diff finds difference between live and declared image 03/07/23 03:29:05.826 + Mar 7 03:29:05.826: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1093879902 --namespace=kubectl-8910 diff -f -' + Mar 7 03:29:06.077: INFO: rc: 1 + Mar 7 03:29:06.077: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1093879902 --namespace=kubectl-8910 delete -f -' + Mar 7 03:29:06.170: INFO: stderr: "" + Mar 7 03:29:06.170: INFO: stdout: "deployment.apps \"httpd-deployment\" deleted\n" + [AfterEach] [sig-cli] Kubectl client + test/e2e/framework/framework.go:187 + Mar 7 03:29:06.170: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + STEP: Destroying namespace "kubectl-8910" for this suite. 03/07/23 03:29:06.175 + << End Captured GinkgoWriter Output +------------------------------ +[sig-node] RuntimeClass + should reject a Pod requesting a non-existent RuntimeClass [NodeConformance] [Conformance] + test/e2e/common/node/runtimeclass.go:55 +[BeforeEach] [sig-node] RuntimeClass + test/e2e/framework/framework.go:186 +STEP: Creating a kubernetes client 03/07/23 03:29:06.182 +Mar 7 03:29:06.182: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 +STEP: Building a namespace api object, basename runtimeclass 03/07/23 03:29:06.182 +STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 03:29:06.201 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 03:29:06.203 +[It] should reject a Pod requesting a non-existent RuntimeClass [NodeConformance] [Conformance] + test/e2e/common/node/runtimeclass.go:55 +[AfterEach] [sig-node] RuntimeClass + test/e2e/framework/framework.go:187 +Mar 7 03:29:06.211: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "runtimeclass-7379" for this suite. 
03/07/23 03:29:06.214 +{"msg":"PASSED [sig-node] RuntimeClass should reject a Pod requesting a non-existent RuntimeClass [NodeConformance] [Conformance]","completed":145,"skipped":2422,"failed":0} +------------------------------ +• [0.038 seconds] +[sig-node] RuntimeClass +test/e2e/common/node/framework.go:23 + should reject a Pod requesting a non-existent RuntimeClass [NodeConformance] [Conformance] + test/e2e/common/node/runtimeclass.go:55 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-node] RuntimeClass + test/e2e/framework/framework.go:186 + STEP: Creating a kubernetes client 03/07/23 03:29:06.182 + Mar 7 03:29:06.182: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 + STEP: Building a namespace api object, basename runtimeclass 03/07/23 03:29:06.182 + STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 03:29:06.201 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 03:29:06.203 + [It] should reject a Pod requesting a non-existent RuntimeClass [NodeConformance] [Conformance] + test/e2e/common/node/runtimeclass.go:55 + [AfterEach] [sig-node] RuntimeClass + test/e2e/framework/framework.go:187 + Mar 7 03:29:06.211: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + STEP: Destroying namespace "runtimeclass-7379" for this suite. 03/07/23 03:29:06.214 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSSSS +------------------------------ +[sig-cli] Kubectl client Kubectl logs + should be able to retrieve and filter logs [Conformance] + test/e2e/kubectl/kubectl.go:1590 +[BeforeEach] [sig-cli] Kubectl client + test/e2e/framework/framework.go:186 +STEP: Creating a kubernetes client 03/07/23 03:29:06.221 +Mar 7 03:29:06.221: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 +STEP: Building a namespace api object, basename kubectl 03/07/23 03:29:06.222 +STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 03:29:06.238 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 03:29:06.24 +[BeforeEach] [sig-cli] Kubectl client + test/e2e/kubectl/kubectl.go:272 +[BeforeEach] Kubectl logs + test/e2e/kubectl/kubectl.go:1570 +STEP: creating an pod 03/07/23 03:29:06.242 +Mar 7 03:29:06.242: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1093879902 --namespace=kubectl-7023 run logs-generator --image=registry.k8s.io/e2e-test-images/agnhost:2.40 --restart=Never --pod-running-timeout=2m0s -- logs-generator --log-lines-total 100 --run-duration 20s' +Mar 7 03:29:06.346: INFO: stderr: "" +Mar 7 03:29:06.346: INFO: stdout: "pod/logs-generator created\n" +[It] should be able to retrieve and filter logs [Conformance] + test/e2e/kubectl/kubectl.go:1590 +STEP: Waiting for log generator to start. 03/07/23 03:29:06.346 +Mar 7 03:29:06.347: INFO: Waiting up to 5m0s for 1 pods to be running and ready, or succeeded: [logs-generator] +Mar 7 03:29:06.347: INFO: Waiting up to 5m0s for pod "logs-generator" in namespace "kubectl-7023" to be "running and ready, or succeeded" +Mar 7 03:29:06.402: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 55.322754ms +Mar 7 03:29:06.402: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'logs-generator' on '' to be 'Running' but was 'Pending' +Mar 7 03:29:08.406: INFO: Pod "logs-generator": Phase="Running", Reason="", readiness=true. 
Elapsed: 2.059186466s +Mar 7 03:29:08.406: INFO: Pod "logs-generator" satisfied condition "running and ready, or succeeded" +Mar 7 03:29:08.406: INFO: Wanted all 1 pods to be running and ready, or succeeded. Result: true. Pods: [logs-generator] +STEP: checking for a matching strings 03/07/23 03:29:08.406 +Mar 7 03:29:08.406: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1093879902 --namespace=kubectl-7023 logs logs-generator logs-generator' +Mar 7 03:29:08.542: INFO: stderr: "" +Mar 7 03:29:08.542: INFO: stdout: "I0307 03:29:07.182206 1 logs_generator.go:76] 0 PUT /api/v1/namespaces/kube-system/pods/n4ff 352\nI0307 03:29:07.382410 1 logs_generator.go:76] 1 POST /api/v1/namespaces/kube-system/pods/cpq 200\nI0307 03:29:07.582912 1 logs_generator.go:76] 2 POST /api/v1/namespaces/ns/pods/l6ts 436\nI0307 03:29:07.783312 1 logs_generator.go:76] 3 PUT /api/v1/namespaces/ns/pods/c7f 304\nI0307 03:29:07.982575 1 logs_generator.go:76] 4 PUT /api/v1/namespaces/ns/pods/xs7t 319\nI0307 03:29:08.182987 1 logs_generator.go:76] 5 POST /api/v1/namespaces/ns/pods/mbp 501\nI0307 03:29:08.382285 1 logs_generator.go:76] 6 PUT /api/v1/namespaces/default/pods/f6wc 519\n" +STEP: limiting log lines 03/07/23 03:29:08.542 +Mar 7 03:29:08.542: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1093879902 --namespace=kubectl-7023 logs logs-generator logs-generator --tail=1' +Mar 7 03:29:08.678: INFO: stderr: "" +Mar 7 03:29:08.678: INFO: stdout: "I0307 03:29:08.582638 1 logs_generator.go:76] 7 GET /api/v1/namespaces/default/pods/4wj8 293\n" +Mar 7 03:29:08.678: INFO: got output "I0307 03:29:08.582638 1 logs_generator.go:76] 7 GET /api/v1/namespaces/default/pods/4wj8 293\n" +STEP: limiting log bytes 03/07/23 03:29:08.678 +Mar 7 03:29:08.678: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1093879902 --namespace=kubectl-7023 logs logs-generator logs-generator --limit-bytes=1' +Mar 7 03:29:08.806: INFO: stderr: "" +Mar 7 03:29:08.806: INFO: stdout: "I" +Mar 7 03:29:08.806: INFO: got output "I" +STEP: exposing timestamps 03/07/23 03:29:08.806 +Mar 7 03:29:08.806: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1093879902 --namespace=kubectl-7023 logs logs-generator logs-generator --tail=1 --timestamps' +Mar 7 03:29:08.957: INFO: stderr: "" +Mar 7 03:29:08.957: INFO: stdout: "2023-03-07T03:29:08.783131763Z I0307 03:29:08.782995 1 logs_generator.go:76] 8 POST /api/v1/namespaces/default/pods/6lm 531\n" +Mar 7 03:29:08.957: INFO: got output "2023-03-07T03:29:08.783131763Z I0307 03:29:08.782995 1 logs_generator.go:76] 8 POST /api/v1/namespaces/default/pods/6lm 531\n" +STEP: restricting to a time range 03/07/23 03:29:08.957 +Mar 7 03:29:11.458: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1093879902 --namespace=kubectl-7023 logs logs-generator logs-generator --since=1s' +Mar 7 03:29:11.620: INFO: stderr: "" +Mar 7 03:29:11.620: INFO: stdout: "I0307 03:29:10.782757 1 logs_generator.go:76] 18 GET /api/v1/namespaces/kube-system/pods/h5l8 507\nI0307 03:29:10.982928 1 logs_generator.go:76] 19 POST /api/v1/namespaces/default/pods/pvm 590\nI0307 03:29:11.183146 1 logs_generator.go:76] 20 PUT /api/v1/namespaces/kube-system/pods/dxfm 312\nI0307 03:29:11.383302 1 logs_generator.go:76] 21 PUT /api/v1/namespaces/kube-system/pods/gtx 252\nI0307 03:29:11.583160 1 logs_generator.go:76] 22 POST /api/v1/namespaces/ns/pods/45zm 572\n" +Mar 7 03:29:11.620: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1093879902 
--namespace=kubectl-7023 logs logs-generator logs-generator --since=24h' +Mar 7 03:29:11.746: INFO: stderr: "" +Mar 7 03:29:11.746: INFO: stdout: "I0307 03:29:07.182206 1 logs_generator.go:76] 0 PUT /api/v1/namespaces/kube-system/pods/n4ff 352\nI0307 03:29:07.382410 1 logs_generator.go:76] 1 POST /api/v1/namespaces/kube-system/pods/cpq 200\nI0307 03:29:07.582912 1 logs_generator.go:76] 2 POST /api/v1/namespaces/ns/pods/l6ts 436\nI0307 03:29:07.783312 1 logs_generator.go:76] 3 PUT /api/v1/namespaces/ns/pods/c7f 304\nI0307 03:29:07.982575 1 logs_generator.go:76] 4 PUT /api/v1/namespaces/ns/pods/xs7t 319\nI0307 03:29:08.182987 1 logs_generator.go:76] 5 POST /api/v1/namespaces/ns/pods/mbp 501\nI0307 03:29:08.382285 1 logs_generator.go:76] 6 PUT /api/v1/namespaces/default/pods/f6wc 519\nI0307 03:29:08.582638 1 logs_generator.go:76] 7 GET /api/v1/namespaces/default/pods/4wj8 293\nI0307 03:29:08.782995 1 logs_generator.go:76] 8 POST /api/v1/namespaces/default/pods/6lm 531\nI0307 03:29:08.982317 1 logs_generator.go:76] 9 PUT /api/v1/namespaces/ns/pods/t886 563\nI0307 03:29:09.182779 1 logs_generator.go:76] 10 GET /api/v1/namespaces/kube-system/pods/95b 586\nI0307 03:29:09.383120 1 logs_generator.go:76] 11 PUT /api/v1/namespaces/ns/pods/2dk 477\nI0307 03:29:09.582352 1 logs_generator.go:76] 12 POST /api/v1/namespaces/kube-system/pods/f4h 457\nI0307 03:29:09.782757 1 logs_generator.go:76] 13 POST /api/v1/namespaces/ns/pods/pvh9 483\nI0307 03:29:09.983148 1 logs_generator.go:76] 14 GET /api/v1/namespaces/kube-system/pods/lsw 418\nI0307 03:29:10.182324 1 logs_generator.go:76] 15 PUT /api/v1/namespaces/default/pods/px4x 336\nI0307 03:29:10.382755 1 logs_generator.go:76] 16 PUT /api/v1/namespaces/ns/pods/vjd 235\nI0307 03:29:10.582556 1 logs_generator.go:76] 17 GET /api/v1/namespaces/kube-system/pods/s69 473\nI0307 03:29:10.782757 1 logs_generator.go:76] 18 GET /api/v1/namespaces/kube-system/pods/h5l8 507\nI0307 03:29:10.982928 1 logs_generator.go:76] 19 POST /api/v1/namespaces/default/pods/pvm 590\nI0307 03:29:11.183146 1 logs_generator.go:76] 20 PUT /api/v1/namespaces/kube-system/pods/dxfm 312\nI0307 03:29:11.383302 1 logs_generator.go:76] 21 PUT /api/v1/namespaces/kube-system/pods/gtx 252\nI0307 03:29:11.583160 1 logs_generator.go:76] 22 POST /api/v1/namespaces/ns/pods/45zm 572\n" +[AfterEach] Kubectl logs + test/e2e/kubectl/kubectl.go:1575 +Mar 7 03:29:11.746: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1093879902 --namespace=kubectl-7023 delete pod logs-generator' +Mar 7 03:29:12.920: INFO: stderr: "" +Mar 7 03:29:12.920: INFO: stdout: "pod \"logs-generator\" deleted\n" +[AfterEach] [sig-cli] Kubectl client + test/e2e/framework/framework.go:187 +Mar 7 03:29:12.920: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "kubectl-7023" for this suite. 
03/07/23 03:29:12.924 +{"msg":"PASSED [sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]","completed":146,"skipped":2435,"failed":0} +------------------------------ +• [SLOW TEST] [6.708 seconds] +[sig-cli] Kubectl client +test/e2e/kubectl/framework.go:23 + Kubectl logs + test/e2e/kubectl/kubectl.go:1567 + should be able to retrieve and filter logs [Conformance] + test/e2e/kubectl/kubectl.go:1590 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-cli] Kubectl client + test/e2e/framework/framework.go:186 + STEP: Creating a kubernetes client 03/07/23 03:29:06.221 + Mar 7 03:29:06.221: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 + STEP: Building a namespace api object, basename kubectl 03/07/23 03:29:06.222 + STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 03:29:06.238 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 03:29:06.24 + [BeforeEach] [sig-cli] Kubectl client + test/e2e/kubectl/kubectl.go:272 + [BeforeEach] Kubectl logs + test/e2e/kubectl/kubectl.go:1570 + STEP: creating an pod 03/07/23 03:29:06.242 + Mar 7 03:29:06.242: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1093879902 --namespace=kubectl-7023 run logs-generator --image=registry.k8s.io/e2e-test-images/agnhost:2.40 --restart=Never --pod-running-timeout=2m0s -- logs-generator --log-lines-total 100 --run-duration 20s' + Mar 7 03:29:06.346: INFO: stderr: "" + Mar 7 03:29:06.346: INFO: stdout: "pod/logs-generator created\n" + [It] should be able to retrieve and filter logs [Conformance] + test/e2e/kubectl/kubectl.go:1590 + STEP: Waiting for log generator to start. 03/07/23 03:29:06.346 + Mar 7 03:29:06.347: INFO: Waiting up to 5m0s for 1 pods to be running and ready, or succeeded: [logs-generator] + Mar 7 03:29:06.347: INFO: Waiting up to 5m0s for pod "logs-generator" in namespace "kubectl-7023" to be "running and ready, or succeeded" + Mar 7 03:29:06.402: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 55.322754ms + Mar 7 03:29:06.402: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'logs-generator' on '' to be 'Running' but was 'Pending' + Mar 7 03:29:08.406: INFO: Pod "logs-generator": Phase="Running", Reason="", readiness=true. Elapsed: 2.059186466s + Mar 7 03:29:08.406: INFO: Pod "logs-generator" satisfied condition "running and ready, or succeeded" + Mar 7 03:29:08.406: INFO: Wanted all 1 pods to be running and ready, or succeeded. Result: true. 
Pods: [logs-generator] + STEP: checking for a matching strings 03/07/23 03:29:08.406 + Mar 7 03:29:08.406: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1093879902 --namespace=kubectl-7023 logs logs-generator logs-generator' + Mar 7 03:29:08.542: INFO: stderr: "" + Mar 7 03:29:08.542: INFO: stdout: "I0307 03:29:07.182206 1 logs_generator.go:76] 0 PUT /api/v1/namespaces/kube-system/pods/n4ff 352\nI0307 03:29:07.382410 1 logs_generator.go:76] 1 POST /api/v1/namespaces/kube-system/pods/cpq 200\nI0307 03:29:07.582912 1 logs_generator.go:76] 2 POST /api/v1/namespaces/ns/pods/l6ts 436\nI0307 03:29:07.783312 1 logs_generator.go:76] 3 PUT /api/v1/namespaces/ns/pods/c7f 304\nI0307 03:29:07.982575 1 logs_generator.go:76] 4 PUT /api/v1/namespaces/ns/pods/xs7t 319\nI0307 03:29:08.182987 1 logs_generator.go:76] 5 POST /api/v1/namespaces/ns/pods/mbp 501\nI0307 03:29:08.382285 1 logs_generator.go:76] 6 PUT /api/v1/namespaces/default/pods/f6wc 519\n" + STEP: limiting log lines 03/07/23 03:29:08.542 + Mar 7 03:29:08.542: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1093879902 --namespace=kubectl-7023 logs logs-generator logs-generator --tail=1' + Mar 7 03:29:08.678: INFO: stderr: "" + Mar 7 03:29:08.678: INFO: stdout: "I0307 03:29:08.582638 1 logs_generator.go:76] 7 GET /api/v1/namespaces/default/pods/4wj8 293\n" + Mar 7 03:29:08.678: INFO: got output "I0307 03:29:08.582638 1 logs_generator.go:76] 7 GET /api/v1/namespaces/default/pods/4wj8 293\n" + STEP: limiting log bytes 03/07/23 03:29:08.678 + Mar 7 03:29:08.678: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1093879902 --namespace=kubectl-7023 logs logs-generator logs-generator --limit-bytes=1' + Mar 7 03:29:08.806: INFO: stderr: "" + Mar 7 03:29:08.806: INFO: stdout: "I" + Mar 7 03:29:08.806: INFO: got output "I" + STEP: exposing timestamps 03/07/23 03:29:08.806 + Mar 7 03:29:08.806: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1093879902 --namespace=kubectl-7023 logs logs-generator logs-generator --tail=1 --timestamps' + Mar 7 03:29:08.957: INFO: stderr: "" + Mar 7 03:29:08.957: INFO: stdout: "2023-03-07T03:29:08.783131763Z I0307 03:29:08.782995 1 logs_generator.go:76] 8 POST /api/v1/namespaces/default/pods/6lm 531\n" + Mar 7 03:29:08.957: INFO: got output "2023-03-07T03:29:08.783131763Z I0307 03:29:08.782995 1 logs_generator.go:76] 8 POST /api/v1/namespaces/default/pods/6lm 531\n" + STEP: restricting to a time range 03/07/23 03:29:08.957 + Mar 7 03:29:11.458: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1093879902 --namespace=kubectl-7023 logs logs-generator logs-generator --since=1s' + Mar 7 03:29:11.620: INFO: stderr: "" + Mar 7 03:29:11.620: INFO: stdout: "I0307 03:29:10.782757 1 logs_generator.go:76] 18 GET /api/v1/namespaces/kube-system/pods/h5l8 507\nI0307 03:29:10.982928 1 logs_generator.go:76] 19 POST /api/v1/namespaces/default/pods/pvm 590\nI0307 03:29:11.183146 1 logs_generator.go:76] 20 PUT /api/v1/namespaces/kube-system/pods/dxfm 312\nI0307 03:29:11.383302 1 logs_generator.go:76] 21 PUT /api/v1/namespaces/kube-system/pods/gtx 252\nI0307 03:29:11.583160 1 logs_generator.go:76] 22 POST /api/v1/namespaces/ns/pods/45zm 572\n" + Mar 7 03:29:11.620: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1093879902 --namespace=kubectl-7023 logs logs-generator logs-generator --since=24h' + Mar 7 03:29:11.746: INFO: stderr: "" + Mar 7 03:29:11.746: INFO: stdout: "I0307 03:29:07.182206 1 logs_generator.go:76] 0 PUT 
/api/v1/namespaces/kube-system/pods/n4ff 352\nI0307 03:29:07.382410 1 logs_generator.go:76] 1 POST /api/v1/namespaces/kube-system/pods/cpq 200\nI0307 03:29:07.582912 1 logs_generator.go:76] 2 POST /api/v1/namespaces/ns/pods/l6ts 436\nI0307 03:29:07.783312 1 logs_generator.go:76] 3 PUT /api/v1/namespaces/ns/pods/c7f 304\nI0307 03:29:07.982575 1 logs_generator.go:76] 4 PUT /api/v1/namespaces/ns/pods/xs7t 319\nI0307 03:29:08.182987 1 logs_generator.go:76] 5 POST /api/v1/namespaces/ns/pods/mbp 501\nI0307 03:29:08.382285 1 logs_generator.go:76] 6 PUT /api/v1/namespaces/default/pods/f6wc 519\nI0307 03:29:08.582638 1 logs_generator.go:76] 7 GET /api/v1/namespaces/default/pods/4wj8 293\nI0307 03:29:08.782995 1 logs_generator.go:76] 8 POST /api/v1/namespaces/default/pods/6lm 531\nI0307 03:29:08.982317 1 logs_generator.go:76] 9 PUT /api/v1/namespaces/ns/pods/t886 563\nI0307 03:29:09.182779 1 logs_generator.go:76] 10 GET /api/v1/namespaces/kube-system/pods/95b 586\nI0307 03:29:09.383120 1 logs_generator.go:76] 11 PUT /api/v1/namespaces/ns/pods/2dk 477\nI0307 03:29:09.582352 1 logs_generator.go:76] 12 POST /api/v1/namespaces/kube-system/pods/f4h 457\nI0307 03:29:09.782757 1 logs_generator.go:76] 13 POST /api/v1/namespaces/ns/pods/pvh9 483\nI0307 03:29:09.983148 1 logs_generator.go:76] 14 GET /api/v1/namespaces/kube-system/pods/lsw 418\nI0307 03:29:10.182324 1 logs_generator.go:76] 15 PUT /api/v1/namespaces/default/pods/px4x 336\nI0307 03:29:10.382755 1 logs_generator.go:76] 16 PUT /api/v1/namespaces/ns/pods/vjd 235\nI0307 03:29:10.582556 1 logs_generator.go:76] 17 GET /api/v1/namespaces/kube-system/pods/s69 473\nI0307 03:29:10.782757 1 logs_generator.go:76] 18 GET /api/v1/namespaces/kube-system/pods/h5l8 507\nI0307 03:29:10.982928 1 logs_generator.go:76] 19 POST /api/v1/namespaces/default/pods/pvm 590\nI0307 03:29:11.183146 1 logs_generator.go:76] 20 PUT /api/v1/namespaces/kube-system/pods/dxfm 312\nI0307 03:29:11.383302 1 logs_generator.go:76] 21 PUT /api/v1/namespaces/kube-system/pods/gtx 252\nI0307 03:29:11.583160 1 logs_generator.go:76] 22 POST /api/v1/namespaces/ns/pods/45zm 572\n" + [AfterEach] Kubectl logs + test/e2e/kubectl/kubectl.go:1575 + Mar 7 03:29:11.746: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1093879902 --namespace=kubectl-7023 delete pod logs-generator' + Mar 7 03:29:12.920: INFO: stderr: "" + Mar 7 03:29:12.920: INFO: stdout: "pod \"logs-generator\" deleted\n" + [AfterEach] [sig-cli] Kubectl client + test/e2e/framework/framework.go:187 + Mar 7 03:29:12.920: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + STEP: Destroying namespace "kubectl-7023" for this suite. 
03/07/23 03:29:12.924 + << End Captured GinkgoWriter Output +------------------------------ +SSS +------------------------------ +[sig-network] Ingress API + should support creating Ingress API operations [Conformance] + test/e2e/network/ingress.go:552 +[BeforeEach] [sig-network] Ingress API + test/e2e/framework/framework.go:186 +STEP: Creating a kubernetes client 03/07/23 03:29:12.93 +Mar 7 03:29:12.930: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 +STEP: Building a namespace api object, basename ingress 03/07/23 03:29:12.93 +STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 03:29:12.947 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 03:29:12.95 +[It] should support creating Ingress API operations [Conformance] + test/e2e/network/ingress.go:552 +STEP: getting /apis 03/07/23 03:29:12.953 +STEP: getting /apis/networking.k8s.io 03/07/23 03:29:12.954 +STEP: getting /apis/networking.k8s.iov1 03/07/23 03:29:12.955 +STEP: creating 03/07/23 03:29:12.955 +STEP: getting 03/07/23 03:29:12.977 +STEP: listing 03/07/23 03:29:12.984 +STEP: watching 03/07/23 03:29:12.988 +Mar 7 03:29:12.988: INFO: starting watch +STEP: cluster-wide listing 03/07/23 03:29:12.989 +STEP: cluster-wide watching 03/07/23 03:29:12.993 +Mar 7 03:29:12.993: INFO: starting watch +STEP: patching 03/07/23 03:29:12.994 +STEP: updating 03/07/23 03:29:13.002 +Mar 7 03:29:13.015: INFO: waiting for watch events with expected annotations +Mar 7 03:29:13.015: INFO: saw patched and updated annotations +STEP: patching /status 03/07/23 03:29:13.015 +STEP: updating /status 03/07/23 03:29:13.025 +STEP: get /status 03/07/23 03:29:13.035 +STEP: deleting 03/07/23 03:29:13.042 +STEP: deleting a collection 03/07/23 03:29:13.053 +[AfterEach] [sig-network] Ingress API + test/e2e/framework/framework.go:187 +Mar 7 03:29:13.074: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "ingress-9167" for this suite. 
03/07/23 03:29:13.081 +{"msg":"PASSED [sig-network] Ingress API should support creating Ingress API operations [Conformance]","completed":147,"skipped":2438,"failed":0} +------------------------------ +• [0.206 seconds] +[sig-network] Ingress API +test/e2e/network/common/framework.go:23 + should support creating Ingress API operations [Conformance] + test/e2e/network/ingress.go:552 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-network] Ingress API + test/e2e/framework/framework.go:186 + STEP: Creating a kubernetes client 03/07/23 03:29:12.93 + Mar 7 03:29:12.930: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 + STEP: Building a namespace api object, basename ingress 03/07/23 03:29:12.93 + STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 03:29:12.947 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 03:29:12.95 + [It] should support creating Ingress API operations [Conformance] + test/e2e/network/ingress.go:552 + STEP: getting /apis 03/07/23 03:29:12.953 + STEP: getting /apis/networking.k8s.io 03/07/23 03:29:12.954 + STEP: getting /apis/networking.k8s.iov1 03/07/23 03:29:12.955 + STEP: creating 03/07/23 03:29:12.955 + STEP: getting 03/07/23 03:29:12.977 + STEP: listing 03/07/23 03:29:12.984 + STEP: watching 03/07/23 03:29:12.988 + Mar 7 03:29:12.988: INFO: starting watch + STEP: cluster-wide listing 03/07/23 03:29:12.989 + STEP: cluster-wide watching 03/07/23 03:29:12.993 + Mar 7 03:29:12.993: INFO: starting watch + STEP: patching 03/07/23 03:29:12.994 + STEP: updating 03/07/23 03:29:13.002 + Mar 7 03:29:13.015: INFO: waiting for watch events with expected annotations + Mar 7 03:29:13.015: INFO: saw patched and updated annotations + STEP: patching /status 03/07/23 03:29:13.015 + STEP: updating /status 03/07/23 03:29:13.025 + STEP: get /status 03/07/23 03:29:13.035 + STEP: deleting 03/07/23 03:29:13.042 + STEP: deleting a collection 03/07/23 03:29:13.053 + [AfterEach] [sig-network] Ingress API + test/e2e/framework/framework.go:187 + Mar 7 03:29:13.074: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + STEP: Destroying namespace "ingress-9167" for this suite. 03/07/23 03:29:13.081 + << End Captured GinkgoWriter Output +------------------------------ +SSS +------------------------------ +[sig-storage] EmptyDir volumes + should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/common/storage/empty_dir.go:206 +[BeforeEach] [sig-storage] EmptyDir volumes + test/e2e/framework/framework.go:186 +STEP: Creating a kubernetes client 03/07/23 03:29:13.136 +Mar 7 03:29:13.136: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 +STEP: Building a namespace api object, basename emptydir 03/07/23 03:29:13.137 +STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 03:29:13.156 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 03:29:13.158 +[It] should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/common/storage/empty_dir.go:206 +STEP: Creating a pod to test emptydir 0666 on node default medium 03/07/23 03:29:13.16 +Mar 7 03:29:13.169: INFO: Waiting up to 5m0s for pod "pod-37ef3724-d660-4a06-a63a-ebce28285935" in namespace "emptydir-5289" to be "Succeeded or Failed" +Mar 7 03:29:13.173: INFO: Pod "pod-37ef3724-d660-4a06-a63a-ebce28285935": Phase="Pending", Reason="", readiness=false. 
Elapsed: 3.917055ms +Mar 7 03:29:15.178: INFO: Pod "pod-37ef3724-d660-4a06-a63a-ebce28285935": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008287407s +Mar 7 03:29:17.177: INFO: Pod "pod-37ef3724-d660-4a06-a63a-ebce28285935": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.008154303s +STEP: Saw pod success 03/07/23 03:29:17.177 +Mar 7 03:29:17.178: INFO: Pod "pod-37ef3724-d660-4a06-a63a-ebce28285935" satisfied condition "Succeeded or Failed" +Mar 7 03:29:17.180: INFO: Trying to get logs from node node-2 pod pod-37ef3724-d660-4a06-a63a-ebce28285935 container test-container: +STEP: delete the pod 03/07/23 03:29:17.186 +Mar 7 03:29:17.195: INFO: Waiting for pod pod-37ef3724-d660-4a06-a63a-ebce28285935 to disappear +Mar 7 03:29:17.197: INFO: Pod pod-37ef3724-d660-4a06-a63a-ebce28285935 no longer exists +[AfterEach] [sig-storage] EmptyDir volumes + test/e2e/framework/framework.go:187 +Mar 7 03:29:17.197: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "emptydir-5289" for this suite. 03/07/23 03:29:17.2 +{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","completed":148,"skipped":2441,"failed":0} +------------------------------ +• [4.069 seconds] +[sig-storage] EmptyDir volumes +test/e2e/common/storage/framework.go:23 + should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/common/storage/empty_dir.go:206 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-storage] EmptyDir volumes + test/e2e/framework/framework.go:186 + STEP: Creating a kubernetes client 03/07/23 03:29:13.136 + Mar 7 03:29:13.136: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 + STEP: Building a namespace api object, basename emptydir 03/07/23 03:29:13.137 + STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 03:29:13.156 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 03:29:13.158 + [It] should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/common/storage/empty_dir.go:206 + STEP: Creating a pod to test emptydir 0666 on node default medium 03/07/23 03:29:13.16 + Mar 7 03:29:13.169: INFO: Waiting up to 5m0s for pod "pod-37ef3724-d660-4a06-a63a-ebce28285935" in namespace "emptydir-5289" to be "Succeeded or Failed" + Mar 7 03:29:13.173: INFO: Pod "pod-37ef3724-d660-4a06-a63a-ebce28285935": Phase="Pending", Reason="", readiness=false. Elapsed: 3.917055ms + Mar 7 03:29:15.178: INFO: Pod "pod-37ef3724-d660-4a06-a63a-ebce28285935": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008287407s + Mar 7 03:29:17.177: INFO: Pod "pod-37ef3724-d660-4a06-a63a-ebce28285935": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.008154303s + STEP: Saw pod success 03/07/23 03:29:17.177 + Mar 7 03:29:17.178: INFO: Pod "pod-37ef3724-d660-4a06-a63a-ebce28285935" satisfied condition "Succeeded or Failed" + Mar 7 03:29:17.180: INFO: Trying to get logs from node node-2 pod pod-37ef3724-d660-4a06-a63a-ebce28285935 container test-container: + STEP: delete the pod 03/07/23 03:29:17.186 + Mar 7 03:29:17.195: INFO: Waiting for pod pod-37ef3724-d660-4a06-a63a-ebce28285935 to disappear + Mar 7 03:29:17.197: INFO: Pod pod-37ef3724-d660-4a06-a63a-ebce28285935 no longer exists + [AfterEach] [sig-storage] EmptyDir volumes + test/e2e/framework/framework.go:187 + Mar 7 03:29:17.197: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + STEP: Destroying namespace "emptydir-5289" for this suite. 03/07/23 03:29:17.2 + << End Captured GinkgoWriter Output +------------------------------ +[sig-apps] DisruptionController + should observe PodDisruptionBudget status updated [Conformance] + test/e2e/apps/disruption.go:140 +[BeforeEach] [sig-apps] DisruptionController + test/e2e/framework/framework.go:186 +STEP: Creating a kubernetes client 03/07/23 03:29:17.205 +Mar 7 03:29:17.205: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 +STEP: Building a namespace api object, basename disruption 03/07/23 03:29:17.206 +STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 03:29:17.221 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 03:29:17.223 +[BeforeEach] [sig-apps] DisruptionController + test/e2e/apps/disruption.go:71 +[It] should observe PodDisruptionBudget status updated [Conformance] + test/e2e/apps/disruption.go:140 +STEP: Waiting for the pdb to be processed 03/07/23 03:29:17.229 +STEP: Waiting for all pods to be running 03/07/23 03:29:19.255 +Mar 7 03:29:19.258: INFO: running pods: 0 < 3 +[AfterEach] [sig-apps] DisruptionController + test/e2e/framework/framework.go:187 +Mar 7 03:29:21.266: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "disruption-2850" for this suite. 
03/07/23 03:29:21.27 +{"msg":"PASSED [sig-apps] DisruptionController should observe PodDisruptionBudget status updated [Conformance]","completed":149,"skipped":2441,"failed":0} +------------------------------ +• [4.070 seconds] +[sig-apps] DisruptionController +test/e2e/apps/framework.go:23 + should observe PodDisruptionBudget status updated [Conformance] + test/e2e/apps/disruption.go:140 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-apps] DisruptionController + test/e2e/framework/framework.go:186 + STEP: Creating a kubernetes client 03/07/23 03:29:17.205 + Mar 7 03:29:17.205: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 + STEP: Building a namespace api object, basename disruption 03/07/23 03:29:17.206 + STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 03:29:17.221 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 03:29:17.223 + [BeforeEach] [sig-apps] DisruptionController + test/e2e/apps/disruption.go:71 + [It] should observe PodDisruptionBudget status updated [Conformance] + test/e2e/apps/disruption.go:140 + STEP: Waiting for the pdb to be processed 03/07/23 03:29:17.229 + STEP: Waiting for all pods to be running 03/07/23 03:29:19.255 + Mar 7 03:29:19.258: INFO: running pods: 0 < 3 + [AfterEach] [sig-apps] DisruptionController + test/e2e/framework/framework.go:187 + Mar 7 03:29:21.266: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + STEP: Destroying namespace "disruption-2850" for this suite. 03/07/23 03:29:21.27 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-node] Probing container + with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] + test/e2e/common/node/container_probe.go:104 +[BeforeEach] [sig-node] Probing container + test/e2e/framework/framework.go:186 +STEP: Creating a kubernetes client 03/07/23 03:29:21.275 +Mar 7 03:29:21.275: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 +STEP: Building a namespace api object, basename container-probe 03/07/23 03:29:21.276 +STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 03:29:21.289 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 03:29:21.291 +[BeforeEach] [sig-node] Probing container + test/e2e/common/node/container_probe.go:59 +[It] with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] + test/e2e/common/node/container_probe.go:104 +[AfterEach] [sig-node] Probing container + test/e2e/framework/framework.go:187 +Mar 7 03:30:21.302: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "container-probe-9833" for this suite. 
03/07/23 03:30:21.305 +{"msg":"PASSED [sig-node] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]","completed":150,"skipped":2461,"failed":0} +------------------------------ +• [SLOW TEST] [60.035 seconds] +[sig-node] Probing container +test/e2e/common/node/framework.go:23 + with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] + test/e2e/common/node/container_probe.go:104 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-node] Probing container + test/e2e/framework/framework.go:186 + STEP: Creating a kubernetes client 03/07/23 03:29:21.275 + Mar 7 03:29:21.275: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 + STEP: Building a namespace api object, basename container-probe 03/07/23 03:29:21.276 + STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 03:29:21.289 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 03:29:21.291 + [BeforeEach] [sig-node] Probing container + test/e2e/common/node/container_probe.go:59 + [It] with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] + test/e2e/common/node/container_probe.go:104 + [AfterEach] [sig-node] Probing container + test/e2e/framework/framework.go:187 + Mar 7 03:30:21.302: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + STEP: Destroying namespace "container-probe-9833" for this suite. 03/07/23 03:30:21.305 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSSSS +------------------------------ +[sig-apps] ReplicationController + should serve a basic image on each replica with a public image [Conformance] + test/e2e/apps/rc.go:66 +[BeforeEach] [sig-apps] ReplicationController + test/e2e/framework/framework.go:186 +STEP: Creating a kubernetes client 03/07/23 03:30:21.311 +Mar 7 03:30:21.311: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 +STEP: Building a namespace api object, basename replication-controller 03/07/23 03:30:21.312 +STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 03:30:21.332 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 03:30:21.335 +[BeforeEach] [sig-apps] ReplicationController + test/e2e/apps/rc.go:56 +[It] should serve a basic image on each replica with a public image [Conformance] + test/e2e/apps/rc.go:66 +STEP: Creating replication controller my-hostname-basic-86380074-efb3-484a-88d4-efa5e29392d3 03/07/23 03:30:21.336 +Mar 7 03:30:21.342: INFO: Pod name my-hostname-basic-86380074-efb3-484a-88d4-efa5e29392d3: Found 0 pods out of 1 +Mar 7 03:30:26.347: INFO: Pod name my-hostname-basic-86380074-efb3-484a-88d4-efa5e29392d3: Found 1 pods out of 1 +Mar 7 03:30:26.347: INFO: Ensuring all pods for ReplicationController "my-hostname-basic-86380074-efb3-484a-88d4-efa5e29392d3" are running +Mar 7 03:30:26.347: INFO: Waiting up to 5m0s for pod "my-hostname-basic-86380074-efb3-484a-88d4-efa5e29392d3-9h2xk" in namespace "replication-controller-2397" to be "running" +Mar 7 03:30:26.349: INFO: Pod "my-hostname-basic-86380074-efb3-484a-88d4-efa5e29392d3-9h2xk": Phase="Running", Reason="", readiness=true. 
Elapsed: 2.43442ms +Mar 7 03:30:26.349: INFO: Pod "my-hostname-basic-86380074-efb3-484a-88d4-efa5e29392d3-9h2xk" satisfied condition "running" +Mar 7 03:30:26.349: INFO: Pod "my-hostname-basic-86380074-efb3-484a-88d4-efa5e29392d3-9h2xk" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-03-07 03:30:21 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-03-07 03:30:23 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-03-07 03:30:23 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-03-07 03:30:21 +0000 UTC Reason: Message:}]) +Mar 7 03:30:26.349: INFO: Trying to dial the pod +Mar 7 03:30:31.362: INFO: Controller my-hostname-basic-86380074-efb3-484a-88d4-efa5e29392d3: Got expected result from replica 1 [my-hostname-basic-86380074-efb3-484a-88d4-efa5e29392d3-9h2xk]: "my-hostname-basic-86380074-efb3-484a-88d4-efa5e29392d3-9h2xk", 1 of 1 required successes so far +[AfterEach] [sig-apps] ReplicationController + test/e2e/framework/framework.go:187 +Mar 7 03:30:31.362: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "replication-controller-2397" for this suite. 03/07/23 03:30:31.366 +{"msg":"PASSED [sig-apps] ReplicationController should serve a basic image on each replica with a public image [Conformance]","completed":151,"skipped":2474,"failed":0} +------------------------------ +• [SLOW TEST] [10.060 seconds] +[sig-apps] ReplicationController +test/e2e/apps/framework.go:23 + should serve a basic image on each replica with a public image [Conformance] + test/e2e/apps/rc.go:66 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-apps] ReplicationController + test/e2e/framework/framework.go:186 + STEP: Creating a kubernetes client 03/07/23 03:30:21.311 + Mar 7 03:30:21.311: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 + STEP: Building a namespace api object, basename replication-controller 03/07/23 03:30:21.312 + STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 03:30:21.332 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 03:30:21.335 + [BeforeEach] [sig-apps] ReplicationController + test/e2e/apps/rc.go:56 + [It] should serve a basic image on each replica with a public image [Conformance] + test/e2e/apps/rc.go:66 + STEP: Creating replication controller my-hostname-basic-86380074-efb3-484a-88d4-efa5e29392d3 03/07/23 03:30:21.336 + Mar 7 03:30:21.342: INFO: Pod name my-hostname-basic-86380074-efb3-484a-88d4-efa5e29392d3: Found 0 pods out of 1 + Mar 7 03:30:26.347: INFO: Pod name my-hostname-basic-86380074-efb3-484a-88d4-efa5e29392d3: Found 1 pods out of 1 + Mar 7 03:30:26.347: INFO: Ensuring all pods for ReplicationController "my-hostname-basic-86380074-efb3-484a-88d4-efa5e29392d3" are running + Mar 7 03:30:26.347: INFO: Waiting up to 5m0s for pod "my-hostname-basic-86380074-efb3-484a-88d4-efa5e29392d3-9h2xk" in namespace "replication-controller-2397" to be "running" + Mar 7 03:30:26.349: INFO: Pod "my-hostname-basic-86380074-efb3-484a-88d4-efa5e29392d3-9h2xk": Phase="Running", Reason="", readiness=true. 
Elapsed: 2.43442ms + Mar 7 03:30:26.349: INFO: Pod "my-hostname-basic-86380074-efb3-484a-88d4-efa5e29392d3-9h2xk" satisfied condition "running" + Mar 7 03:30:26.349: INFO: Pod "my-hostname-basic-86380074-efb3-484a-88d4-efa5e29392d3-9h2xk" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-03-07 03:30:21 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-03-07 03:30:23 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-03-07 03:30:23 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-03-07 03:30:21 +0000 UTC Reason: Message:}]) + Mar 7 03:30:26.349: INFO: Trying to dial the pod + Mar 7 03:30:31.362: INFO: Controller my-hostname-basic-86380074-efb3-484a-88d4-efa5e29392d3: Got expected result from replica 1 [my-hostname-basic-86380074-efb3-484a-88d4-efa5e29392d3-9h2xk]: "my-hostname-basic-86380074-efb3-484a-88d4-efa5e29392d3-9h2xk", 1 of 1 required successes so far + [AfterEach] [sig-apps] ReplicationController + test/e2e/framework/framework.go:187 + Mar 7 03:30:31.362: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + STEP: Destroying namespace "replication-controller-2397" for this suite. 03/07/23 03:30:31.366 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSSSS +------------------------------ +[sig-api-machinery] Watchers + should observe an object deletion if it stops meeting the requirements of the selector [Conformance] + test/e2e/apimachinery/watch.go:257 +[BeforeEach] [sig-api-machinery] Watchers + test/e2e/framework/framework.go:186 +STEP: Creating a kubernetes client 03/07/23 03:30:31.372 +Mar 7 03:30:31.372: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 +STEP: Building a namespace api object, basename watch 03/07/23 03:30:31.372 +STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 03:30:31.392 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 03:30:31.405 +[It] should observe an object deletion if it stops meeting the requirements of the selector [Conformance] + test/e2e/apimachinery/watch.go:257 +STEP: creating a watch on configmaps with a certain label 03/07/23 03:30:31.407 +STEP: creating a new configmap 03/07/23 03:30:31.408 +STEP: modifying the configmap once 03/07/23 03:30:31.412 +STEP: changing the label value of the configmap 03/07/23 03:30:31.419 +STEP: Expecting to observe a delete notification for the watched object 03/07/23 03:30:31.425 +Mar 7 03:30:31.425: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-4256 c73427aa-146c-4ff2-abcd-ce20e5235836 58236 0 2023-03-07 03:30:31 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2023-03-07 03:30:31 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} +Mar 7 03:30:31.425: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-4256 c73427aa-146c-4ff2-abcd-ce20e5235836 58237 0 2023-03-07 03:30:31 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2023-03-07 03:30:31 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} 
}]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} +Mar 7 03:30:31.426: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-4256 c73427aa-146c-4ff2-abcd-ce20e5235836 58238 0 2023-03-07 03:30:31 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2023-03-07 03:30:31 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} +STEP: modifying the configmap a second time 03/07/23 03:30:31.426 +STEP: Expecting not to observe a notification because the object no longer meets the selector's requirements 03/07/23 03:30:31.432 +STEP: changing the label value of the configmap back 03/07/23 03:30:41.432 +STEP: modifying the configmap a third time 03/07/23 03:30:41.439 +STEP: deleting the configmap 03/07/23 03:30:41.444 +STEP: Expecting to observe an add notification for the watched object when the label value was restored 03/07/23 03:30:41.448 +Mar 7 03:30:41.448: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-4256 c73427aa-146c-4ff2-abcd-ce20e5235836 58295 0 2023-03-07 03:30:31 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2023-03-07 03:30:41 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} +Mar 7 03:30:41.448: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-4256 c73427aa-146c-4ff2-abcd-ce20e5235836 58296 0 2023-03-07 03:30:31 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2023-03-07 03:30:41 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},Immutable:nil,} +Mar 7 03:30:41.448: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-4256 c73427aa-146c-4ff2-abcd-ce20e5235836 58297 0 2023-03-07 03:30:31 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2023-03-07 03:30:41 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},Immutable:nil,} +[AfterEach] [sig-api-machinery] Watchers + test/e2e/framework/framework.go:187 +Mar 7 03:30:41.448: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "watch-4256" for this suite. 
03/07/23 03:30:41.451 +{"msg":"PASSED [sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance]","completed":152,"skipped":2487,"failed":0} +------------------------------ +• [SLOW TEST] [10.084 seconds] +[sig-api-machinery] Watchers +test/e2e/apimachinery/framework.go:23 + should observe an object deletion if it stops meeting the requirements of the selector [Conformance] + test/e2e/apimachinery/watch.go:257 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-api-machinery] Watchers + test/e2e/framework/framework.go:186 + STEP: Creating a kubernetes client 03/07/23 03:30:31.372 + Mar 7 03:30:31.372: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 + STEP: Building a namespace api object, basename watch 03/07/23 03:30:31.372 + STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 03:30:31.392 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 03:30:31.405 + [It] should observe an object deletion if it stops meeting the requirements of the selector [Conformance] + test/e2e/apimachinery/watch.go:257 + STEP: creating a watch on configmaps with a certain label 03/07/23 03:30:31.407 + STEP: creating a new configmap 03/07/23 03:30:31.408 + STEP: modifying the configmap once 03/07/23 03:30:31.412 + STEP: changing the label value of the configmap 03/07/23 03:30:31.419 + STEP: Expecting to observe a delete notification for the watched object 03/07/23 03:30:31.425 + Mar 7 03:30:31.425: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-4256 c73427aa-146c-4ff2-abcd-ce20e5235836 58236 0 2023-03-07 03:30:31 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2023-03-07 03:30:31 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} + Mar 7 03:30:31.425: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-4256 c73427aa-146c-4ff2-abcd-ce20e5235836 58237 0 2023-03-07 03:30:31 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2023-03-07 03:30:31 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} + Mar 7 03:30:31.426: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-4256 c73427aa-146c-4ff2-abcd-ce20e5235836 58238 0 2023-03-07 03:30:31 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2023-03-07 03:30:31 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} + STEP: modifying the configmap a second time 03/07/23 03:30:31.426 + STEP: Expecting not to observe a notification because the object no longer meets the selector's requirements 03/07/23 03:30:31.432 + STEP: changing the label value of the configmap back 03/07/23 03:30:41.432 + STEP: modifying the configmap a third time 03/07/23 03:30:41.439 + STEP: deleting the configmap 03/07/23 03:30:41.444 + STEP: Expecting to observe an add notification for the watched object when the label value was restored 03/07/23 03:30:41.448 + Mar 7 03:30:41.448: INFO: Got : ADDED 
&ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-4256 c73427aa-146c-4ff2-abcd-ce20e5235836 58295 0 2023-03-07 03:30:31 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2023-03-07 03:30:41 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} + Mar 7 03:30:41.448: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-4256 c73427aa-146c-4ff2-abcd-ce20e5235836 58296 0 2023-03-07 03:30:31 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2023-03-07 03:30:41 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},Immutable:nil,} + Mar 7 03:30:41.448: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-4256 c73427aa-146c-4ff2-abcd-ce20e5235836 58297 0 2023-03-07 03:30:31 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2023-03-07 03:30:41 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},Immutable:nil,} + [AfterEach] [sig-api-machinery] Watchers + test/e2e/framework/framework.go:187 + Mar 7 03:30:41.448: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + STEP: Destroying namespace "watch-4256" for this suite. 03/07/23 03:30:41.451 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSS +------------------------------ +[sig-apps] Deployment + RollingUpdateDeployment should delete old pods and create new ones [Conformance] + test/e2e/apps/deployment.go:105 +[BeforeEach] [sig-apps] Deployment + test/e2e/framework/framework.go:186 +STEP: Creating a kubernetes client 03/07/23 03:30:41.456 +Mar 7 03:30:41.456: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 +STEP: Building a namespace api object, basename deployment 03/07/23 03:30:41.457 +STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 03:30:41.489 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 03:30:41.491 +[BeforeEach] [sig-apps] Deployment + test/e2e/apps/deployment.go:91 +[It] RollingUpdateDeployment should delete old pods and create new ones [Conformance] + test/e2e/apps/deployment.go:105 +Mar 7 03:30:41.493: INFO: Creating replica set "test-rolling-update-controller" (going to be adopted) +Mar 7 03:30:41.508: INFO: Pod name sample-pod: Found 0 pods out of 1 +Mar 7 03:30:46.512: INFO: Pod name sample-pod: Found 1 pods out of 1 +STEP: ensuring each pod is running 03/07/23 03:30:46.512 +Mar 7 03:30:46.512: INFO: Creating deployment "test-rolling-update-deployment" +Mar 7 03:30:46.516: INFO: Ensuring deployment "test-rolling-update-deployment" gets the next revision from the one the adopted replica set "test-rolling-update-controller" has +Mar 7 03:30:46.520: INFO: new replicaset for deployment "test-rolling-update-deployment" is yet to be created +Mar 7 03:30:48.527: INFO: Ensuring status for deployment "test-rolling-update-deployment" is the expected +Mar 7 03:30:48.529: INFO: Ensuring deployment "test-rolling-update-deployment" has one old replica set (the one it adopted) +[AfterEach] [sig-apps] Deployment + 
test/e2e/apps/deployment.go:84 +Mar 7 03:30:48.535: INFO: Deployment "test-rolling-update-deployment": +&Deployment{ObjectMeta:{test-rolling-update-deployment deployment-2438 989ed5bd-973c-4d9b-810b-13deb23ce74e 58374 1 2023-03-07 03:30:46 +0000 UTC map[name:sample-pod] map[deployment.kubernetes.io/revision:3546343826724305833] [] [] [{e2e.test Update apps/v1 2023-03-07 03:30:46 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:progressDeadlineSeconds":{},"f:replicas":{},"f:revisionHistoryLimit":{},"f:selector":{},"f:strategy":{"f:rollingUpdate":{".":{},"f:maxSurge":{},"f:maxUnavailable":{}},"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} } {kube-controller-manager Update apps/v1 2023-03-07 03:30:48 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}}},"f:status":{"f:availableReplicas":{},"f:conditions":{".":{},"k:{\"type\":\"Available\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Progressing\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{},"f:updatedReplicas":{}}} status}]},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod] map[] [] [] []} {[] [] [{agnhost registry.k8s.io/e2e-test-images/agnhost:2.40 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc007eaf058 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] nil }},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%!,(MISSING)MaxSurge:25%!,(MISSING)},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2023-03-07 03:30:46 +0000 UTC,LastTransitionTime:2023-03-07 03:30:46 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-rolling-update-deployment-78f575d8ff" has successfully progressed.,LastUpdateTime:2023-03-07 03:30:48 +0000 UTC,LastTransitionTime:2023-03-07 03:30:46 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},} + +Mar 7 03:30:48.538: INFO: New ReplicaSet 
"test-rolling-update-deployment-78f575d8ff" of Deployment "test-rolling-update-deployment": +&ReplicaSet{ObjectMeta:{test-rolling-update-deployment-78f575d8ff deployment-2438 f83869e1-2146-4699-8735-c6792143f7a1 58364 1 2023-03-07 03:30:46 +0000 UTC map[name:sample-pod pod-template-hash:78f575d8ff] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:3546343826724305833] [{apps/v1 Deployment test-rolling-update-deployment 989ed5bd-973c-4d9b-810b-13deb23ce74e 0xc006171e77 0xc006171e78}] [] [{kube-controller-manager Update apps/v1 2023-03-07 03:30:46 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"989ed5bd-973c-4d9b-810b-13deb23ce74e\"}":{}}},"f:spec":{"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} } {kube-controller-manager Update apps/v1 2023-03-07 03:30:48 +0000 UTC FieldsV1 {"f:status":{"f:availableReplicas":{},"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{}}} status}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod-template-hash: 78f575d8ff,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod pod-template-hash:78f575d8ff] map[] [] [] []} {[] [] [{agnhost registry.k8s.io/e2e-test-images/agnhost:2.40 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc006171f38 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] nil }},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} +Mar 7 03:30:48.538: INFO: All old ReplicaSets of Deployment "test-rolling-update-deployment": +Mar 7 03:30:48.538: INFO: &ReplicaSet{ObjectMeta:{test-rolling-update-controller deployment-2438 ecc477c9-a634-4cd1-92bb-1a7f2d140599 58373 2 2023-03-07 03:30:41 +0000 UTC map[name:sample-pod pod:httpd] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:3546343826724305832] [{apps/v1 Deployment test-rolling-update-deployment 989ed5bd-973c-4d9b-810b-13deb23ce74e 0xc006171d37 0xc006171d38}] [] [{e2e.test Update apps/v1 2023-03-07 03:30:41 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod":{}}},"f:spec":{"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} } {kube-controller-manager Update apps/v1 2023-03-07 03:30:48 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"989ed5bd-973c-4d9b-810b-13deb23ce74e\"}":{}}},"f:spec":{"f:replicas":{}}} } {kube-controller-manager Update apps/v1 2023-03-07 03:30:48 +0000 UTC FieldsV1 {"f:status":{"f:observedGeneration":{},"f:replicas":{}}} status}]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod pod:httpd] map[] [] [] []} {[] [] [{httpd registry.k8s.io/e2e-test-images/httpd:2.4.38-2 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc006171e08 ClusterFirst map[] false false false PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] nil }},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} +Mar 7 03:30:48.540: INFO: Pod "test-rolling-update-deployment-78f575d8ff-hcq7b" is available: +&Pod{ObjectMeta:{test-rolling-update-deployment-78f575d8ff-hcq7b test-rolling-update-deployment-78f575d8ff- deployment-2438 209b5760-fcfa-427a-bc09-0e72d42c5ad1 58363 0 2023-03-07 03:30:46 +0000 UTC map[name:sample-pod pod-template-hash:78f575d8ff] map[cni.projectcalico.org/containerID:8ba00727a05a58bb9011d42c590326e309409ccfb7941397cb6f0b6d6a9d91df cni.projectcalico.org/podIP:10.233.247.55/32 cni.projectcalico.org/podIPs:10.233.247.55/32] [{apps/v1 ReplicaSet test-rolling-update-deployment-78f575d8ff f83869e1-2146-4699-8735-c6792143f7a1 0xc007eaf427 0xc007eaf428}] [] [{kube-controller-manager Update v1 2023-03-07 03:30:46 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f83869e1-2146-4699-8735-c6792143f7a1\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {calico Update v1 2023-03-07 03:30:47 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:cni.projectcalico.org/containerID":{},"f:cni.projectcalico.org/podIP":{},"f:cni.projectcalico.org/podIPs":{}}}} status} {kubelet Update v1 2023-03-07 03:30:48 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.233.247.55\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-4smzs,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:registry.k8s.io/e2e-test-images/agnhost:2.40,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-4smzs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:node-2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable
,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,HostUsers:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-03-07 03:30:46 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-03-07 03:30:48 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-03-07 03:30:48 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-03-07 03:30:46 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:192.168.1.102,PodIP:10.233.247.55,StartTime:2023-03-07 03:30:46 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2023-03-07 03:30:47 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:registry.k8s.io/e2e-test-images/agnhost:2.40,ImageID:registry.k8s.io/e2e-test-images/agnhost@sha256:af7e3857d87770ddb40f5ea4f89b5a2709504ab1ee31f9ea4ab5823c045f2146,ContainerID:containerd://4ecdb6f47ee8b725aaa2f23164b3b031d8ede68ec341e9aa3b1c5fb22af079f5,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.233.247.55,},},EphemeralContainerStatuses:[]ContainerStatus{},},} +[AfterEach] [sig-apps] Deployment + test/e2e/framework/framework.go:187 +Mar 7 03:30:48.540: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "deployment-2438" for this suite. 
03/07/23 03:30:48.544 +{"msg":"PASSED [sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance]","completed":153,"skipped":2498,"failed":0} +------------------------------ +• [SLOW TEST] [7.092 seconds] +[sig-apps] Deployment +test/e2e/apps/framework.go:23 + RollingUpdateDeployment should delete old pods and create new ones [Conformance] + test/e2e/apps/deployment.go:105 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-apps] Deployment + test/e2e/framework/framework.go:186 + STEP: Creating a kubernetes client 03/07/23 03:30:41.456 + Mar 7 03:30:41.456: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 + STEP: Building a namespace api object, basename deployment 03/07/23 03:30:41.457 + STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 03:30:41.489 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 03:30:41.491 + [BeforeEach] [sig-apps] Deployment + test/e2e/apps/deployment.go:91 + [It] RollingUpdateDeployment should delete old pods and create new ones [Conformance] + test/e2e/apps/deployment.go:105 + Mar 7 03:30:41.493: INFO: Creating replica set "test-rolling-update-controller" (going to be adopted) + Mar 7 03:30:41.508: INFO: Pod name sample-pod: Found 0 pods out of 1 + Mar 7 03:30:46.512: INFO: Pod name sample-pod: Found 1 pods out of 1 + STEP: ensuring each pod is running 03/07/23 03:30:46.512 + Mar 7 03:30:46.512: INFO: Creating deployment "test-rolling-update-deployment" + Mar 7 03:30:46.516: INFO: Ensuring deployment "test-rolling-update-deployment" gets the next revision from the one the adopted replica set "test-rolling-update-controller" has + Mar 7 03:30:46.520: INFO: new replicaset for deployment "test-rolling-update-deployment" is yet to be created + Mar 7 03:30:48.527: INFO: Ensuring status for deployment "test-rolling-update-deployment" is the expected + Mar 7 03:30:48.529: INFO: Ensuring deployment "test-rolling-update-deployment" has one old replica set (the one it adopted) + [AfterEach] [sig-apps] Deployment + test/e2e/apps/deployment.go:84 + Mar 7 03:30:48.535: INFO: Deployment "test-rolling-update-deployment": + &Deployment{ObjectMeta:{test-rolling-update-deployment deployment-2438 989ed5bd-973c-4d9b-810b-13deb23ce74e 58374 1 2023-03-07 03:30:46 +0000 UTC map[name:sample-pod] map[deployment.kubernetes.io/revision:3546343826724305833] [] [] [{e2e.test Update apps/v1 2023-03-07 03:30:46 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:progressDeadlineSeconds":{},"f:replicas":{},"f:revisionHistoryLimit":{},"f:selector":{},"f:strategy":{"f:rollingUpdate":{".":{},"f:maxSurge":{},"f:maxUnavailable":{}},"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} } {kube-controller-manager Update apps/v1 2023-03-07 03:30:48 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}}},"f:status":{"f:availableReplicas":{},"f:conditions":{".":{},"k:{\"type\":\"Available\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Progressing\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{},"f:updatedReplicas":{}}} status}]},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod] map[] [] [] []} {[] [] [{agnhost registry.k8s.io/e2e-test-images/agnhost:2.40 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc007eaf058 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] nil }},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%!,(MISSING)MaxSurge:25%!,(MISSING)},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2023-03-07 03:30:46 +0000 UTC,LastTransitionTime:2023-03-07 03:30:46 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-rolling-update-deployment-78f575d8ff" has successfully progressed.,LastUpdateTime:2023-03-07 03:30:48 +0000 UTC,LastTransitionTime:2023-03-07 03:30:46 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},} + + Mar 7 03:30:48.538: INFO: New ReplicaSet "test-rolling-update-deployment-78f575d8ff" of Deployment "test-rolling-update-deployment": + &ReplicaSet{ObjectMeta:{test-rolling-update-deployment-78f575d8ff deployment-2438 f83869e1-2146-4699-8735-c6792143f7a1 58364 1 2023-03-07 03:30:46 +0000 UTC map[name:sample-pod pod-template-hash:78f575d8ff] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:3546343826724305833] [{apps/v1 Deployment test-rolling-update-deployment 989ed5bd-973c-4d9b-810b-13deb23ce74e 0xc006171e77 0xc006171e78}] [] [{kube-controller-manager Update apps/v1 2023-03-07 03:30:46 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"989ed5bd-973c-4d9b-810b-13deb23ce74e\"}":{}}},"f:spec":{"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} } {kube-controller-manager Update apps/v1 2023-03-07 03:30:48 +0000 UTC FieldsV1 {"f:status":{"f:availableReplicas":{},"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{}}} status}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod-template-hash: 78f575d8ff,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod pod-template-hash:78f575d8ff] map[] [] [] []} {[] [] [{agnhost registry.k8s.io/e2e-test-images/agnhost:2.40 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc006171f38 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] nil }},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} + Mar 7 03:30:48.538: INFO: All old ReplicaSets of Deployment "test-rolling-update-deployment": + Mar 7 03:30:48.538: INFO: &ReplicaSet{ObjectMeta:{test-rolling-update-controller deployment-2438 ecc477c9-a634-4cd1-92bb-1a7f2d140599 58373 2 2023-03-07 03:30:41 +0000 UTC map[name:sample-pod pod:httpd] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:3546343826724305832] [{apps/v1 Deployment test-rolling-update-deployment 989ed5bd-973c-4d9b-810b-13deb23ce74e 0xc006171d37 0xc006171d38}] [] [{e2e.test Update apps/v1 2023-03-07 03:30:41 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod":{}}},"f:spec":{"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} } {kube-controller-manager Update apps/v1 2023-03-07 03:30:48 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"989ed5bd-973c-4d9b-810b-13deb23ce74e\"}":{}}},"f:spec":{"f:replicas":{}}} } {kube-controller-manager Update apps/v1 2023-03-07 03:30:48 +0000 UTC FieldsV1 {"f:status":{"f:observedGeneration":{},"f:replicas":{}}} status}]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod pod:httpd] map[] [] [] []} {[] [] [{httpd registry.k8s.io/e2e-test-images/httpd:2.4.38-2 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc006171e08 ClusterFirst map[] false false false PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] nil }},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} + Mar 7 03:30:48.540: INFO: Pod "test-rolling-update-deployment-78f575d8ff-hcq7b" is available: + &Pod{ObjectMeta:{test-rolling-update-deployment-78f575d8ff-hcq7b test-rolling-update-deployment-78f575d8ff- deployment-2438 209b5760-fcfa-427a-bc09-0e72d42c5ad1 58363 0 2023-03-07 03:30:46 +0000 UTC map[name:sample-pod pod-template-hash:78f575d8ff] map[cni.projectcalico.org/containerID:8ba00727a05a58bb9011d42c590326e309409ccfb7941397cb6f0b6d6a9d91df cni.projectcalico.org/podIP:10.233.247.55/32 cni.projectcalico.org/podIPs:10.233.247.55/32] [{apps/v1 ReplicaSet test-rolling-update-deployment-78f575d8ff f83869e1-2146-4699-8735-c6792143f7a1 0xc007eaf427 0xc007eaf428}] [] [{kube-controller-manager Update v1 2023-03-07 03:30:46 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f83869e1-2146-4699-8735-c6792143f7a1\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {calico Update v1 2023-03-07 03:30:47 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:cni.projectcalico.org/containerID":{},"f:cni.projectcalico.org/podIP":{},"f:cni.projectcalico.org/podIPs":{}}}} status} {kubelet Update v1 2023-03-07 03:30:48 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.233.247.55\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} 
status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-4smzs,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:registry.k8s.io/e2e-test-images/agnhost:2.40,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-4smzs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:node-2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,HostUsers:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodConditio
n{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-03-07 03:30:46 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-03-07 03:30:48 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-03-07 03:30:48 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-03-07 03:30:46 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:192.168.1.102,PodIP:10.233.247.55,StartTime:2023-03-07 03:30:46 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2023-03-07 03:30:47 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:registry.k8s.io/e2e-test-images/agnhost:2.40,ImageID:registry.k8s.io/e2e-test-images/agnhost@sha256:af7e3857d87770ddb40f5ea4f89b5a2709504ab1ee31f9ea4ab5823c045f2146,ContainerID:containerd://4ecdb6f47ee8b725aaa2f23164b3b031d8ede68ec341e9aa3b1c5fb22af079f5,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.233.247.55,},},EphemeralContainerStatuses:[]ContainerStatus{},},} + [AfterEach] [sig-apps] Deployment + test/e2e/framework/framework.go:187 + Mar 7 03:30:48.540: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + STEP: Destroying namespace "deployment-2438" for this suite. 03/07/23 03:30:48.544 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-storage] ConfigMap + binary data should be reflected in volume [NodeConformance] [Conformance] + test/e2e/common/storage/configmap_volume.go:174 +[BeforeEach] [sig-storage] ConfigMap + test/e2e/framework/framework.go:186 +STEP: Creating a kubernetes client 03/07/23 03:30:48.551 +Mar 7 03:30:48.551: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 +STEP: Building a namespace api object, basename configmap 03/07/23 03:30:48.552 +STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 03:30:48.565 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 03:30:48.568 +[It] binary data should be reflected in volume [NodeConformance] [Conformance] + test/e2e/common/storage/configmap_volume.go:174 +STEP: Creating configMap with name configmap-test-upd-e947ae81-d72f-4a58-a292-45c2d00da7c4 03/07/23 03:30:48.573 +STEP: Creating the pod 03/07/23 03:30:48.577 +Mar 7 03:30:48.583: INFO: Waiting up to 5m0s for pod "pod-configmaps-0c5f1099-898f-479c-8a3b-13bae8651d92" in namespace "configmap-3163" to be "running" +Mar 7 03:30:48.587: INFO: Pod "pod-configmaps-0c5f1099-898f-479c-8a3b-13bae8651d92": Phase="Pending", Reason="", readiness=false. Elapsed: 3.262687ms +Mar 7 03:30:50.590: INFO: Pod "pod-configmaps-0c5f1099-898f-479c-8a3b-13bae8651d92": Phase="Running", Reason="", readiness=false. 
Elapsed: 2.006560789s +Mar 7 03:30:50.590: INFO: Pod "pod-configmaps-0c5f1099-898f-479c-8a3b-13bae8651d92" satisfied condition "running" +STEP: Waiting for pod with text data 03/07/23 03:30:50.59 +STEP: Waiting for pod with binary data 03/07/23 03:30:50.602 +[AfterEach] [sig-storage] ConfigMap + test/e2e/framework/framework.go:187 +Mar 7 03:30:50.608: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "configmap-3163" for this suite. 03/07/23 03:30:50.612 +{"msg":"PASSED [sig-storage] ConfigMap binary data should be reflected in volume [NodeConformance] [Conformance]","completed":154,"skipped":2542,"failed":0} +------------------------------ +• [2.066 seconds] +[sig-storage] ConfigMap +test/e2e/common/storage/framework.go:23 + binary data should be reflected in volume [NodeConformance] [Conformance] + test/e2e/common/storage/configmap_volume.go:174 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-storage] ConfigMap + test/e2e/framework/framework.go:186 + STEP: Creating a kubernetes client 03/07/23 03:30:48.551 + Mar 7 03:30:48.551: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 + STEP: Building a namespace api object, basename configmap 03/07/23 03:30:48.552 + STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 03:30:48.565 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 03:30:48.568 + [It] binary data should be reflected in volume [NodeConformance] [Conformance] + test/e2e/common/storage/configmap_volume.go:174 + STEP: Creating configMap with name configmap-test-upd-e947ae81-d72f-4a58-a292-45c2d00da7c4 03/07/23 03:30:48.573 + STEP: Creating the pod 03/07/23 03:30:48.577 + Mar 7 03:30:48.583: INFO: Waiting up to 5m0s for pod "pod-configmaps-0c5f1099-898f-479c-8a3b-13bae8651d92" in namespace "configmap-3163" to be "running" + Mar 7 03:30:48.587: INFO: Pod "pod-configmaps-0c5f1099-898f-479c-8a3b-13bae8651d92": Phase="Pending", Reason="", readiness=false. Elapsed: 3.262687ms + Mar 7 03:30:50.590: INFO: Pod "pod-configmaps-0c5f1099-898f-479c-8a3b-13bae8651d92": Phase="Running", Reason="", readiness=false. Elapsed: 2.006560789s + Mar 7 03:30:50.590: INFO: Pod "pod-configmaps-0c5f1099-898f-479c-8a3b-13bae8651d92" satisfied condition "running" + STEP: Waiting for pod with text data 03/07/23 03:30:50.59 + STEP: Waiting for pod with binary data 03/07/23 03:30:50.602 + [AfterEach] [sig-storage] ConfigMap + test/e2e/framework/framework.go:187 + Mar 7 03:30:50.608: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + STEP: Destroying namespace "configmap-3163" for this suite. 
03/07/23 03:30:50.612 + << End Captured GinkgoWriter Output +------------------------------ +S +------------------------------ +[sig-cli] Kubectl client Proxy server + should support proxy with --port 0 [Conformance] + test/e2e/kubectl/kubectl.go:1785 +[BeforeEach] [sig-cli] Kubectl client + test/e2e/framework/framework.go:186 +STEP: Creating a kubernetes client 03/07/23 03:30:50.617 +Mar 7 03:30:50.617: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 +STEP: Building a namespace api object, basename kubectl 03/07/23 03:30:50.618 +STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 03:30:50.63 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 03:30:50.633 +[BeforeEach] [sig-cli] Kubectl client + test/e2e/kubectl/kubectl.go:272 +[It] should support proxy with --port 0 [Conformance] + test/e2e/kubectl/kubectl.go:1785 +STEP: starting the proxy server 03/07/23 03:30:50.637 +Mar 7 03:30:50.637: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/tmp/kubeconfig-1093879902 --namespace=kubectl-584 proxy -p 0 --disable-filter' +STEP: curling proxy /api/ output 03/07/23 03:30:50.675 +[AfterEach] [sig-cli] Kubectl client + test/e2e/framework/framework.go:187 +Mar 7 03:30:50.684: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "kubectl-584" for this suite. 03/07/23 03:30:50.688 +{"msg":"PASSED [sig-cli] Kubectl client Proxy server should support proxy with --port 0 [Conformance]","completed":155,"skipped":2543,"failed":0} +------------------------------ +• [0.079 seconds] +[sig-cli] Kubectl client +test/e2e/kubectl/framework.go:23 + Proxy server + test/e2e/kubectl/kubectl.go:1778 + should support proxy with --port 0 [Conformance] + test/e2e/kubectl/kubectl.go:1785 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-cli] Kubectl client + test/e2e/framework/framework.go:186 + STEP: Creating a kubernetes client 03/07/23 03:30:50.617 + Mar 7 03:30:50.617: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 + STEP: Building a namespace api object, basename kubectl 03/07/23 03:30:50.618 + STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 03:30:50.63 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 03:30:50.633 + [BeforeEach] [sig-cli] Kubectl client + test/e2e/kubectl/kubectl.go:272 + [It] should support proxy with --port 0 [Conformance] + test/e2e/kubectl/kubectl.go:1785 + STEP: starting the proxy server 03/07/23 03:30:50.637 + Mar 7 03:30:50.637: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/tmp/kubeconfig-1093879902 --namespace=kubectl-584 proxy -p 0 --disable-filter' + STEP: curling proxy /api/ output 03/07/23 03:30:50.675 + [AfterEach] [sig-cli] Kubectl client + test/e2e/framework/framework.go:187 + Mar 7 03:30:50.684: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + STEP: Destroying namespace "kubectl-584" for this suite. 
03/07/23 03:30:50.688 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-auth] ServiceAccounts + should run through the lifecycle of a ServiceAccount [Conformance] + test/e2e/auth/service_accounts.go:646 +[BeforeEach] [sig-auth] ServiceAccounts + test/e2e/framework/framework.go:186 +STEP: Creating a kubernetes client 03/07/23 03:30:50.698 +Mar 7 03:30:50.698: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 +STEP: Building a namespace api object, basename svcaccounts 03/07/23 03:30:50.698 +STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 03:30:50.711 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 03:30:50.713 +[It] should run through the lifecycle of a ServiceAccount [Conformance] + test/e2e/auth/service_accounts.go:646 +STEP: creating a ServiceAccount 03/07/23 03:30:50.715 +STEP: watching for the ServiceAccount to be added 03/07/23 03:30:50.72 +STEP: patching the ServiceAccount 03/07/23 03:30:50.721 +STEP: finding ServiceAccount in list of all ServiceAccounts (by LabelSelector) 03/07/23 03:30:50.725 +STEP: deleting the ServiceAccount 03/07/23 03:30:50.728 +[AfterEach] [sig-auth] ServiceAccounts + test/e2e/framework/framework.go:187 +Mar 7 03:30:50.736: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "svcaccounts-4616" for this suite. 03/07/23 03:30:50.74 +{"msg":"PASSED [sig-auth] ServiceAccounts should run through the lifecycle of a ServiceAccount [Conformance]","completed":156,"skipped":2584,"failed":0} +------------------------------ +• [0.047 seconds] +[sig-auth] ServiceAccounts +test/e2e/auth/framework.go:23 + should run through the lifecycle of a ServiceAccount [Conformance] + test/e2e/auth/service_accounts.go:646 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-auth] ServiceAccounts + test/e2e/framework/framework.go:186 + STEP: Creating a kubernetes client 03/07/23 03:30:50.698 + Mar 7 03:30:50.698: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 + STEP: Building a namespace api object, basename svcaccounts 03/07/23 03:30:50.698 + STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 03:30:50.711 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 03:30:50.713 + [It] should run through the lifecycle of a ServiceAccount [Conformance] + test/e2e/auth/service_accounts.go:646 + STEP: creating a ServiceAccount 03/07/23 03:30:50.715 + STEP: watching for the ServiceAccount to be added 03/07/23 03:30:50.72 + STEP: patching the ServiceAccount 03/07/23 03:30:50.721 + STEP: finding ServiceAccount in list of all ServiceAccounts (by LabelSelector) 03/07/23 03:30:50.725 + STEP: deleting the ServiceAccount 03/07/23 03:30:50.728 + [AfterEach] [sig-auth] ServiceAccounts + test/e2e/framework/framework.go:187 + Mar 7 03:30:50.736: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + STEP: Destroying namespace "svcaccounts-4616" for this suite. 
03/07/23 03:30:50.74 + << End Captured GinkgoWriter Output +------------------------------ +[sig-node] Secrets + should be consumable via the environment [NodeConformance] [Conformance] + test/e2e/common/node/secrets.go:94 +[BeforeEach] [sig-node] Secrets + test/e2e/framework/framework.go:186 +STEP: Creating a kubernetes client 03/07/23 03:30:50.745 +Mar 7 03:30:50.745: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 +STEP: Building a namespace api object, basename secrets 03/07/23 03:30:50.746 +STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 03:30:50.757 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 03:30:50.76 +[It] should be consumable via the environment [NodeConformance] [Conformance] + test/e2e/common/node/secrets.go:94 +STEP: creating secret secrets-7002/secret-test-c52eb3dd-8c1e-426b-a179-094ddbdd88bc 03/07/23 03:30:50.761 +STEP: Creating a pod to test consume secrets 03/07/23 03:30:50.764 +Mar 7 03:30:50.770: INFO: Waiting up to 5m0s for pod "pod-configmaps-f4b50e39-1c78-4e6f-a651-78c1a97e173c" in namespace "secrets-7002" to be "Succeeded or Failed" +Mar 7 03:30:50.772: INFO: Pod "pod-configmaps-f4b50e39-1c78-4e6f-a651-78c1a97e173c": Phase="Pending", Reason="", readiness=false. Elapsed: 1.997099ms +Mar 7 03:30:52.775: INFO: Pod "pod-configmaps-f4b50e39-1c78-4e6f-a651-78c1a97e173c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00493211s +Mar 7 03:30:54.775: INFO: Pod "pod-configmaps-f4b50e39-1c78-4e6f-a651-78c1a97e173c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.004883958s +STEP: Saw pod success 03/07/23 03:30:54.775 +Mar 7 03:30:54.775: INFO: Pod "pod-configmaps-f4b50e39-1c78-4e6f-a651-78c1a97e173c" satisfied condition "Succeeded or Failed" +Mar 7 03:30:54.777: INFO: Trying to get logs from node node-2 pod pod-configmaps-f4b50e39-1c78-4e6f-a651-78c1a97e173c container env-test: +STEP: delete the pod 03/07/23 03:30:54.783 +Mar 7 03:30:54.791: INFO: Waiting for pod pod-configmaps-f4b50e39-1c78-4e6f-a651-78c1a97e173c to disappear +Mar 7 03:30:54.793: INFO: Pod pod-configmaps-f4b50e39-1c78-4e6f-a651-78c1a97e173c no longer exists +[AfterEach] [sig-node] Secrets + test/e2e/framework/framework.go:187 +Mar 7 03:30:54.793: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "secrets-7002" for this suite. 
03/07/23 03:30:54.796 +{"msg":"PASSED [sig-node] Secrets should be consumable via the environment [NodeConformance] [Conformance]","completed":157,"skipped":2584,"failed":0} +------------------------------ +• [4.056 seconds] +[sig-node] Secrets +test/e2e/common/node/framework.go:23 + should be consumable via the environment [NodeConformance] [Conformance] + test/e2e/common/node/secrets.go:94 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-node] Secrets + test/e2e/framework/framework.go:186 + STEP: Creating a kubernetes client 03/07/23 03:30:50.745 + Mar 7 03:30:50.745: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 + STEP: Building a namespace api object, basename secrets 03/07/23 03:30:50.746 + STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 03:30:50.757 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 03:30:50.76 + [It] should be consumable via the environment [NodeConformance] [Conformance] + test/e2e/common/node/secrets.go:94 + STEP: creating secret secrets-7002/secret-test-c52eb3dd-8c1e-426b-a179-094ddbdd88bc 03/07/23 03:30:50.761 + STEP: Creating a pod to test consume secrets 03/07/23 03:30:50.764 + Mar 7 03:30:50.770: INFO: Waiting up to 5m0s for pod "pod-configmaps-f4b50e39-1c78-4e6f-a651-78c1a97e173c" in namespace "secrets-7002" to be "Succeeded or Failed" + Mar 7 03:30:50.772: INFO: Pod "pod-configmaps-f4b50e39-1c78-4e6f-a651-78c1a97e173c": Phase="Pending", Reason="", readiness=false. Elapsed: 1.997099ms + Mar 7 03:30:52.775: INFO: Pod "pod-configmaps-f4b50e39-1c78-4e6f-a651-78c1a97e173c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00493211s + Mar 7 03:30:54.775: INFO: Pod "pod-configmaps-f4b50e39-1c78-4e6f-a651-78c1a97e173c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.004883958s + STEP: Saw pod success 03/07/23 03:30:54.775 + Mar 7 03:30:54.775: INFO: Pod "pod-configmaps-f4b50e39-1c78-4e6f-a651-78c1a97e173c" satisfied condition "Succeeded or Failed" + Mar 7 03:30:54.777: INFO: Trying to get logs from node node-2 pod pod-configmaps-f4b50e39-1c78-4e6f-a651-78c1a97e173c container env-test: + STEP: delete the pod 03/07/23 03:30:54.783 + Mar 7 03:30:54.791: INFO: Waiting for pod pod-configmaps-f4b50e39-1c78-4e6f-a651-78c1a97e173c to disappear + Mar 7 03:30:54.793: INFO: Pod pod-configmaps-f4b50e39-1c78-4e6f-a651-78c1a97e173c no longer exists + [AfterEach] [sig-node] Secrets + test/e2e/framework/framework.go:187 + Mar 7 03:30:54.793: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + STEP: Destroying namespace "secrets-7002" for this suite. 
03/07/23 03:30:54.796 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-storage] Subpath Atomic writer volumes + should support subpaths with projected pod [Conformance] + test/e2e/storage/subpath.go:106 +[BeforeEach] [sig-storage] Subpath + test/e2e/framework/framework.go:186 +STEP: Creating a kubernetes client 03/07/23 03:30:54.802 +Mar 7 03:30:54.803: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 +STEP: Building a namespace api object, basename subpath 03/07/23 03:30:54.803 +STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 03:30:54.816 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 03:30:54.818 +[BeforeEach] Atomic writer volumes + test/e2e/storage/subpath.go:40 +STEP: Setting up data 03/07/23 03:30:54.82 +[It] should support subpaths with projected pod [Conformance] + test/e2e/storage/subpath.go:106 +STEP: Creating pod pod-subpath-test-projected-vpfd 03/07/23 03:30:54.827 +STEP: Creating a pod to test atomic-volume-subpath 03/07/23 03:30:54.827 +Mar 7 03:30:54.834: INFO: Waiting up to 5m0s for pod "pod-subpath-test-projected-vpfd" in namespace "subpath-8592" to be "Succeeded or Failed" +Mar 7 03:30:54.839: INFO: Pod "pod-subpath-test-projected-vpfd": Phase="Pending", Reason="", readiness=false. Elapsed: 4.953482ms +Mar 7 03:30:56.842: INFO: Pod "pod-subpath-test-projected-vpfd": Phase="Running", Reason="", readiness=true. Elapsed: 2.008009761s +Mar 7 03:30:58.843: INFO: Pod "pod-subpath-test-projected-vpfd": Phase="Running", Reason="", readiness=true. Elapsed: 4.008801581s +Mar 7 03:31:00.842: INFO: Pod "pod-subpath-test-projected-vpfd": Phase="Running", Reason="", readiness=true. Elapsed: 6.007612581s +Mar 7 03:31:02.842: INFO: Pod "pod-subpath-test-projected-vpfd": Phase="Running", Reason="", readiness=true. Elapsed: 8.007961761s +Mar 7 03:31:04.842: INFO: Pod "pod-subpath-test-projected-vpfd": Phase="Running", Reason="", readiness=true. Elapsed: 10.007976186s +Mar 7 03:31:06.842: INFO: Pod "pod-subpath-test-projected-vpfd": Phase="Running", Reason="", readiness=true. Elapsed: 12.00799903s +Mar 7 03:31:08.844: INFO: Pod "pod-subpath-test-projected-vpfd": Phase="Running", Reason="", readiness=true. Elapsed: 14.009447139s +Mar 7 03:31:10.844: INFO: Pod "pod-subpath-test-projected-vpfd": Phase="Running", Reason="", readiness=true. Elapsed: 16.009318413s +Mar 7 03:31:12.842: INFO: Pod "pod-subpath-test-projected-vpfd": Phase="Running", Reason="", readiness=true. Elapsed: 18.007841119s +Mar 7 03:31:14.842: INFO: Pod "pod-subpath-test-projected-vpfd": Phase="Running", Reason="", readiness=true. Elapsed: 20.007708534s +Mar 7 03:31:16.844: INFO: Pod "pod-subpath-test-projected-vpfd": Phase="Running", Reason="", readiness=false. Elapsed: 22.009242053s +Mar 7 03:31:18.844: INFO: Pod "pod-subpath-test-projected-vpfd": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 24.009283581s +STEP: Saw pod success 03/07/23 03:31:18.844 +Mar 7 03:31:18.844: INFO: Pod "pod-subpath-test-projected-vpfd" satisfied condition "Succeeded or Failed" +Mar 7 03:31:18.846: INFO: Trying to get logs from node node-2 pod pod-subpath-test-projected-vpfd container test-container-subpath-projected-vpfd: +STEP: delete the pod 03/07/23 03:31:18.852 +Mar 7 03:31:18.861: INFO: Waiting for pod pod-subpath-test-projected-vpfd to disappear +Mar 7 03:31:18.863: INFO: Pod pod-subpath-test-projected-vpfd no longer exists +STEP: Deleting pod pod-subpath-test-projected-vpfd 03/07/23 03:31:18.863 +Mar 7 03:31:18.863: INFO: Deleting pod "pod-subpath-test-projected-vpfd" in namespace "subpath-8592" +[AfterEach] [sig-storage] Subpath + test/e2e/framework/framework.go:187 +Mar 7 03:31:18.865: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "subpath-8592" for this suite. 03/07/23 03:31:18.868 +{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with projected pod [Conformance]","completed":158,"skipped":2616,"failed":0} +------------------------------ +• [SLOW TEST] [24.070 seconds] +[sig-storage] Subpath +test/e2e/storage/utils/framework.go:23 + Atomic writer volumes + test/e2e/storage/subpath.go:36 + should support subpaths with projected pod [Conformance] + test/e2e/storage/subpath.go:106 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-storage] Subpath + test/e2e/framework/framework.go:186 + STEP: Creating a kubernetes client 03/07/23 03:30:54.802 + Mar 7 03:30:54.803: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 + STEP: Building a namespace api object, basename subpath 03/07/23 03:30:54.803 + STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 03:30:54.816 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 03:30:54.818 + [BeforeEach] Atomic writer volumes + test/e2e/storage/subpath.go:40 + STEP: Setting up data 03/07/23 03:30:54.82 + [It] should support subpaths with projected pod [Conformance] + test/e2e/storage/subpath.go:106 + STEP: Creating pod pod-subpath-test-projected-vpfd 03/07/23 03:30:54.827 + STEP: Creating a pod to test atomic-volume-subpath 03/07/23 03:30:54.827 + Mar 7 03:30:54.834: INFO: Waiting up to 5m0s for pod "pod-subpath-test-projected-vpfd" in namespace "subpath-8592" to be "Succeeded or Failed" + Mar 7 03:30:54.839: INFO: Pod "pod-subpath-test-projected-vpfd": Phase="Pending", Reason="", readiness=false. Elapsed: 4.953482ms + Mar 7 03:30:56.842: INFO: Pod "pod-subpath-test-projected-vpfd": Phase="Running", Reason="", readiness=true. Elapsed: 2.008009761s + Mar 7 03:30:58.843: INFO: Pod "pod-subpath-test-projected-vpfd": Phase="Running", Reason="", readiness=true. Elapsed: 4.008801581s + Mar 7 03:31:00.842: INFO: Pod "pod-subpath-test-projected-vpfd": Phase="Running", Reason="", readiness=true. Elapsed: 6.007612581s + Mar 7 03:31:02.842: INFO: Pod "pod-subpath-test-projected-vpfd": Phase="Running", Reason="", readiness=true. Elapsed: 8.007961761s + Mar 7 03:31:04.842: INFO: Pod "pod-subpath-test-projected-vpfd": Phase="Running", Reason="", readiness=true. Elapsed: 10.007976186s + Mar 7 03:31:06.842: INFO: Pod "pod-subpath-test-projected-vpfd": Phase="Running", Reason="", readiness=true. Elapsed: 12.00799903s + Mar 7 03:31:08.844: INFO: Pod "pod-subpath-test-projected-vpfd": Phase="Running", Reason="", readiness=true. 
Elapsed: 14.009447139s + Mar 7 03:31:10.844: INFO: Pod "pod-subpath-test-projected-vpfd": Phase="Running", Reason="", readiness=true. Elapsed: 16.009318413s + Mar 7 03:31:12.842: INFO: Pod "pod-subpath-test-projected-vpfd": Phase="Running", Reason="", readiness=true. Elapsed: 18.007841119s + Mar 7 03:31:14.842: INFO: Pod "pod-subpath-test-projected-vpfd": Phase="Running", Reason="", readiness=true. Elapsed: 20.007708534s + Mar 7 03:31:16.844: INFO: Pod "pod-subpath-test-projected-vpfd": Phase="Running", Reason="", readiness=false. Elapsed: 22.009242053s + Mar 7 03:31:18.844: INFO: Pod "pod-subpath-test-projected-vpfd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.009283581s + STEP: Saw pod success 03/07/23 03:31:18.844 + Mar 7 03:31:18.844: INFO: Pod "pod-subpath-test-projected-vpfd" satisfied condition "Succeeded or Failed" + Mar 7 03:31:18.846: INFO: Trying to get logs from node node-2 pod pod-subpath-test-projected-vpfd container test-container-subpath-projected-vpfd: + STEP: delete the pod 03/07/23 03:31:18.852 + Mar 7 03:31:18.861: INFO: Waiting for pod pod-subpath-test-projected-vpfd to disappear + Mar 7 03:31:18.863: INFO: Pod pod-subpath-test-projected-vpfd no longer exists + STEP: Deleting pod pod-subpath-test-projected-vpfd 03/07/23 03:31:18.863 + Mar 7 03:31:18.863: INFO: Deleting pod "pod-subpath-test-projected-vpfd" in namespace "subpath-8592" + [AfterEach] [sig-storage] Subpath + test/e2e/framework/framework.go:187 + Mar 7 03:31:18.865: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + STEP: Destroying namespace "subpath-8592" for this suite. 03/07/23 03:31:18.868 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-apps] Daemon set [Serial] + should list and delete a collection of DaemonSets [Conformance] + test/e2e/apps/daemon_set.go:822 +[BeforeEach] [sig-apps] Daemon set [Serial] + test/e2e/framework/framework.go:186 +STEP: Creating a kubernetes client 03/07/23 03:31:18.873 +Mar 7 03:31:18.873: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 +STEP: Building a namespace api object, basename daemonsets 03/07/23 03:31:18.874 +STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 03:31:18.888 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 03:31:18.89 +[BeforeEach] [sig-apps] Daemon set [Serial] + test/e2e/apps/daemon_set.go:145 +[It] should list and delete a collection of DaemonSets [Conformance] + test/e2e/apps/daemon_set.go:822 +STEP: Creating simple DaemonSet "daemon-set" 03/07/23 03:31:18.906 +STEP: Check that daemon pods launch on every node of the cluster. 
03/07/23 03:31:18.909 +Mar 7 03:31:18.915: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 +Mar 7 03:31:18.915: INFO: Node bootstrap is running 0 daemon pod, expected 1 +Mar 7 03:31:19.922: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 1 +Mar 7 03:31:19.922: INFO: Node bootstrap is running 0 daemon pod, expected 1 +Mar 7 03:31:20.922: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 3 +Mar 7 03:31:20.922: INFO: Number of running nodes: 3, number of available pods: 3 in daemonset daemon-set +STEP: listing all DeamonSets 03/07/23 03:31:20.924 +STEP: DeleteCollection of the DaemonSets 03/07/23 03:31:20.927 +STEP: Verify that ReplicaSets have been deleted 03/07/23 03:31:20.932 +[AfterEach] [sig-apps] Daemon set [Serial] + test/e2e/apps/daemon_set.go:110 +Mar 7 03:31:20.939: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"resourceVersion":"58688"},"items":null} + +Mar 7 03:31:20.946: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"58688"},"items":[{"metadata":{"name":"daemon-set-fxfh5","generateName":"daemon-set-","namespace":"daemonsets-3563","uid":"5daf3846-3428-48d0-81af-ca665e4dd6ca","resourceVersion":"58677","creationTimestamp":"2023-03-07T03:31:18Z","labels":{"controller-revision-hash":"7f7ffb4fcc","daemonset-name":"daemon-set","pod-template-generation":"1"},"annotations":{"cni.projectcalico.org/containerID":"ea4217f70bc96721b29addc29cce94dd93857c6a529da29eb13db4b9bbfe5e18","cni.projectcalico.org/podIP":"10.233.84.142/32","cni.projectcalico.org/podIPs":"10.233.84.142/32"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"daemon-set","uid":"fd6f4027-20bb-4772-8d0f-8a61692166a1","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-03-07T03:31:18Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:daemonset-name":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"fd6f4027-20bb-4772-8d0f-8a61692166a1\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"app\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:ports":{".":{},"k:{\"containerPort\":9376,\"protocol\":\"TCP\"}":{".":{},"f:containerPort":{},"f:protocol":{}}},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{},"f:tolerations":{}}}},{"manager":"calico","operation":"Update","apiVersion":"v1","time":"2023-03-07T03:31:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:cni.projectcalico.org/containerID":{},"f:cni.projectcalico.org/podIP":{},"f:cni.projectcalico.org/podIPs":{}}}},"subresource":"status"},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-03-07T03:31:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTrans
itionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.233.84.142\"}":{".":{},"f:ip":{}}},"f:startTime":{}}},"subresource":"status"}]},"spec":{"volumes":[{"name":"kube-api-access-625jh","projected":{"sources":[{"serviceAccountToken":{"expirationSeconds":3607,"path":"token"}},{"configMap":{"name":"kube-root-ca.crt","items":[{"key":"ca.crt","path":"ca.crt"}]}},{"downwardAPI":{"items":[{"path":"namespace","fieldRef":{"apiVersion":"v1","fieldPath":"metadata.namespace"}}]}}],"defaultMode":420}}],"containers":[{"name":"app","image":"registry.k8s.io/e2e-test-images/httpd:2.4.38-2","ports":[{"containerPort":9376,"protocol":"TCP"}],"resources":{},"volumeMounts":[{"name":"kube-api-access-625jh","readOnly":true,"mountPath":"/var/run/secrets/kubernetes.io/serviceaccount"}],"terminationMessagePath":"/dev/termination-log","terminationMessagePolicy":"File","imagePullPolicy":"IfNotPresent","securityContext":{}}],"restartPolicy":"Always","terminationGracePeriodSeconds":30,"dnsPolicy":"ClusterFirst","serviceAccountName":"default","serviceAccount":"default","nodeName":"node-1","securityContext":{},"affinity":{"nodeAffinity":{"requiredDuringSchedulingIgnoredDuringExecution":{"nodeSelectorTerms":[{"matchFields":[{"key":"metadata.name","operator":"In","values":["node-1"]}]}]}}},"schedulerName":"default-scheduler","tolerations":[{"key":"node.kubernetes.io/not-ready","operator":"Exists","effect":"NoExecute"},{"key":"node.kubernetes.io/unreachable","operator":"Exists","effect":"NoExecute"},{"key":"node.kubernetes.io/disk-pressure","operator":"Exists","effect":"NoSchedule"},{"key":"node.kubernetes.io/memory-pressure","operator":"Exists","effect":"NoSchedule"},{"key":"node.kubernetes.io/pid-pressure","operator":"Exists","effect":"NoSchedule"},{"key":"node.kubernetes.io/unschedulable","operator":"Exists","effect":"NoSchedule"}],"priority":0,"enableServiceLinks":true,"preemptionPolicy":"PreemptLowerPriority"},"status":{"phase":"Running","conditions":[{"type":"Initialized","status":"True","lastProbeTime":null,"lastTransitionTime":"2023-03-07T03:31:18Z"},{"type":"Ready","status":"True","lastProbeTime":null,"lastTransitionTime":"2023-03-07T03:31:19Z"},{"type":"ContainersReady","status":"True","lastProbeTime":null,"lastTransitionTime":"2023-03-07T03:31:19Z"},{"type":"PodScheduled","status":"True","lastProbeTime":null,"lastTransitionTime":"2023-03-07T03:31:18Z"}],"hostIP":"192.168.1.101","podIP":"10.233.84.142","podIPs":[{"ip":"10.233.84.142"}],"startTime":"2023-03-07T03:31:18Z","containerStatuses":[{"name":"app","state":{"running":{"startedAt":"2023-03-07T03:31:19Z"}},"lastState":{},"ready":true,"restartCount":0,"image":"registry.k8s.io/e2e-test-images/httpd:2.4.38-2","imageID":"registry.k8s.io/e2e-test-images/httpd@sha256:1b9d1b2f36cb2dbee1960e82a9344aeb11bd4c4c03abf5e1853e0559c23855e3","containerID":"containerd://41c459c1c1379d125ccbc2948db7da072b4addc5bbd3ffe69cf2edb9fb17be73","started":true}],"qosClass":"BestEffort"}},{"metadata":{"name":"daemon-set-gmdvn","generateName":"daemon-set-","namespace":"daemonsets-3563","uid":"91bb387f-159a-4d18-a0d8-d7e8f75a5e19","resourceVersion":"58680","creationTimestamp":"2023-03-07T03:31:18Z","labels":{"controller-revision-hash":"7f7ffb4fcc","daemonset-name":"daemon-set","pod-template-generation":"1"},"annotations":{"cni.projectcalico.org/containerID":"d8248b3c99c386c0885611e7709a910d25dc55f77830d8c199cc140ecaddfbea","cni.projectcalico.org/podIP":"10.233.247.30/32","cni.projectcalico.org/podIPs":"10.
233.247.30/32"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"daemon-set","uid":"fd6f4027-20bb-4772-8d0f-8a61692166a1","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-03-07T03:31:18Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:daemonset-name":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"fd6f4027-20bb-4772-8d0f-8a61692166a1\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"app\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:ports":{".":{},"k:{\"containerPort\":9376,\"protocol\":\"TCP\"}":{".":{},"f:containerPort":{},"f:protocol":{}}},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{},"f:tolerations":{}}}},{"manager":"calico","operation":"Update","apiVersion":"v1","time":"2023-03-07T03:31:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:cni.projectcalico.org/containerID":{},"f:cni.projectcalico.org/podIP":{},"f:cni.projectcalico.org/podIPs":{}}}},"subresource":"status"},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-03-07T03:31:20Z","fieldsType":"FieldsV1","fieldsV1":{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.233.247.30\"}":{".":{},"f:ip":{}}},"f:startTime":{}}},"subresource":"status"}]},"spec":{"volumes":[{"name":"kube-api-access-627wf","projected":{"sources":[{"serviceAccountToken":{"expirationSeconds":3607,"path":"token"}},{"configMap":{"name":"kube-root-ca.crt","items":[{"key":"ca.crt","path":"ca.crt"}]}},{"downwardAPI":{"items":[{"path":"namespace","fieldRef":{"apiVersion":"v1","fieldPath":"metadata.namespace"}}]}}],"defaultMode":420}}],"containers":[{"name":"app","image":"registry.k8s.io/e2e-test-images/httpd:2.4.38-2","ports":[{"containerPort":9376,"protocol":"TCP"}],"resources":{},"volumeMounts":[{"name":"kube-api-access-627wf","readOnly":true,"mountPath":"/var/run/secrets/kubernetes.io/serviceaccount"}],"terminationMessagePath":"/dev/termination-log","terminationMessagePolicy":"File","imagePullPolicy":"IfNotPresent","securityContext":{}}],"restartPolicy":"Always","terminationGracePeriodSeconds":30,"dnsPolicy":"ClusterFirst","serviceAccountName":"default","serviceAccount":"default","nodeName":"node-2","securityContext":{},"affinity":{"nodeAffinity":{"requiredDuringSchedulingIgnoredDuringExecution":{"nodeSelectorTerms":[{"matchFields":[{"key":"metadata.name","operator":"In","values":["node-2"]}]}]}}},"schedulerName":"default-scheduler","tolerations":[{"key":"node.kubernetes.io/not-ready","operator":"Exists","effect":"NoExecute"},{"key":"node.kubernetes.io/unreachable","operator":"Exists","effect":"NoExecute"},{"key":"node.kubernetes.io/disk-pressure","operator":"Exis
ts","effect":"NoSchedule"},{"key":"node.kubernetes.io/memory-pressure","operator":"Exists","effect":"NoSchedule"},{"key":"node.kubernetes.io/pid-pressure","operator":"Exists","effect":"NoSchedule"},{"key":"node.kubernetes.io/unschedulable","operator":"Exists","effect":"NoSchedule"}],"priority":0,"enableServiceLinks":true,"preemptionPolicy":"PreemptLowerPriority"},"status":{"phase":"Running","conditions":[{"type":"Initialized","status":"True","lastProbeTime":null,"lastTransitionTime":"2023-03-07T03:31:18Z"},{"type":"Ready","status":"True","lastProbeTime":null,"lastTransitionTime":"2023-03-07T03:31:20Z"},{"type":"ContainersReady","status":"True","lastProbeTime":null,"lastTransitionTime":"2023-03-07T03:31:20Z"},{"type":"PodScheduled","status":"True","lastProbeTime":null,"lastTransitionTime":"2023-03-07T03:31:18Z"}],"hostIP":"192.168.1.102","podIP":"10.233.247.30","podIPs":[{"ip":"10.233.247.30"}],"startTime":"2023-03-07T03:31:18Z","containerStatuses":[{"name":"app","state":{"running":{"startedAt":"2023-03-07T03:31:19Z"}},"lastState":{},"ready":true,"restartCount":0,"image":"registry.k8s.io/e2e-test-images/httpd:2.4.38-2","imageID":"registry.k8s.io/e2e-test-images/httpd@sha256:1b9d1b2f36cb2dbee1960e82a9344aeb11bd4c4c03abf5e1853e0559c23855e3","containerID":"containerd://8e78c6d8da0ba0c8fff4ec8ed29ecdefadd441bd74f2d7568397fa79d713d960","started":true}],"qosClass":"BestEffort"}},{"metadata":{"name":"daemon-set-pt78s","generateName":"daemon-set-","namespace":"daemonsets-3563","uid":"2b909bfb-1023-41be-b32e-b9a1b4e3265d","resourceVersion":"58682","creationTimestamp":"2023-03-07T03:31:18Z","labels":{"controller-revision-hash":"7f7ffb4fcc","daemonset-name":"daemon-set","pod-template-generation":"1"},"annotations":{"cni.projectcalico.org/containerID":"43caabb95819c25c335cb8e47b7b49a395978eb0ebffdd6baa369b0b5852704c","cni.projectcalico.org/podIP":"10.233.132.118/32","cni.projectcalico.org/podIPs":"10.233.132.118/32"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"daemon-set","uid":"fd6f4027-20bb-4772-8d0f-8a61692166a1","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-03-07T03:31:18Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:daemonset-name":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"fd6f4027-20bb-4772-8d0f-8a61692166a1\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"app\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:ports":{".":{},"k:{\"containerPort\":9376,\"protocol\":\"TCP\"}":{".":{},"f:containerPort":{},"f:protocol":{}}},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{},"f:tolerations":{}}}},{"manager":"calico","operation":"Update","apiVersion":"v1","time":"2023-03-07T03:31:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:cni.projectcalico.org/containerID":{},"f:cni.projectcalico.org/podIP":{},"f:cni.projectcalico.org/podIPs":{}}}},"subresource":"status"},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-03-07T03:31:20Z","fieldsType":"FieldsV1","fieldsV1":{"f:status":{"f:conditions":{"k:{\"typ
e\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.233.132.118\"}":{".":{},"f:ip":{}}},"f:startTime":{}}},"subresource":"status"}]},"spec":{"volumes":[{"name":"kube-api-access-tdk2t","projected":{"sources":[{"serviceAccountToken":{"expirationSeconds":3607,"path":"token"}},{"configMap":{"name":"kube-root-ca.crt","items":[{"key":"ca.crt","path":"ca.crt"}]}},{"downwardAPI":{"items":[{"path":"namespace","fieldRef":{"apiVersion":"v1","fieldPath":"metadata.namespace"}}]}}],"defaultMode":420}}],"containers":[{"name":"app","image":"registry.k8s.io/e2e-test-images/httpd:2.4.38-2","ports":[{"containerPort":9376,"protocol":"TCP"}],"resources":{},"volumeMounts":[{"name":"kube-api-access-tdk2t","readOnly":true,"mountPath":"/var/run/secrets/kubernetes.io/serviceaccount"}],"terminationMessagePath":"/dev/termination-log","terminationMessagePolicy":"File","imagePullPolicy":"IfNotPresent","securityContext":{}}],"restartPolicy":"Always","terminationGracePeriodSeconds":30,"dnsPolicy":"ClusterFirst","serviceAccountName":"default","serviceAccount":"default","nodeName":"bootstrap","securityContext":{},"affinity":{"nodeAffinity":{"requiredDuringSchedulingIgnoredDuringExecution":{"nodeSelectorTerms":[{"matchFields":[{"key":"metadata.name","operator":"In","values":["bootstrap"]}]}]}}},"schedulerName":"default-scheduler","tolerations":[{"key":"node.kubernetes.io/not-ready","operator":"Exists","effect":"NoExecute"},{"key":"node.kubernetes.io/unreachable","operator":"Exists","effect":"NoExecute"},{"key":"node.kubernetes.io/disk-pressure","operator":"Exists","effect":"NoSchedule"},{"key":"node.kubernetes.io/memory-pressure","operator":"Exists","effect":"NoSchedule"},{"key":"node.kubernetes.io/pid-pressure","operator":"Exists","effect":"NoSchedule"},{"key":"node.kubernetes.io/unschedulable","operator":"Exists","effect":"NoSchedule"}],"priority":0,"enableServiceLinks":true,"preemptionPolicy":"PreemptLowerPriority"},"status":{"phase":"Running","conditions":[{"type":"Initialized","status":"True","lastProbeTime":null,"lastTransitionTime":"2023-03-07T03:31:18Z"},{"type":"Ready","status":"True","lastProbeTime":null,"lastTransitionTime":"2023-03-07T03:31:20Z"},{"type":"ContainersReady","status":"True","lastProbeTime":null,"lastTransitionTime":"2023-03-07T03:31:20Z"},{"type":"PodScheduled","status":"True","lastProbeTime":null,"lastTransitionTime":"2023-03-07T03:31:18Z"}],"hostIP":"192.168.1.100","podIP":"10.233.132.118","podIPs":[{"ip":"10.233.132.118"}],"startTime":"2023-03-07T03:31:18Z","containerStatuses":[{"name":"app","state":{"running":{"startedAt":"2023-03-07T03:31:19Z"}},"lastState":{},"ready":true,"restartCount":0,"image":"registry.k8s.io/e2e-test-images/httpd:2.4.38-2","imageID":"registry.k8s.io/e2e-test-images/httpd@sha256:1b9d1b2f36cb2dbee1960e82a9344aeb11bd4c4c03abf5e1853e0559c23855e3","containerID":"containerd://d4553a5db8a95013e3b9dafa315615de95ce8d66b6c5869b08d55856128a4c07","started":true}],"qosClass":"BestEffort"}}]} + +[AfterEach] [sig-apps] Daemon set [Serial] + test/e2e/framework/framework.go:187 +Mar 7 03:31:20.967: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "daemonsets-3563" for this suite. 
03/07/23 03:31:20.97 +{"msg":"PASSED [sig-apps] Daemon set [Serial] should list and delete a collection of DaemonSets [Conformance]","completed":159,"skipped":2640,"failed":0} +------------------------------ +• [2.103 seconds] +[sig-apps] Daemon set [Serial] +test/e2e/apps/framework.go:23 + should list and delete a collection of DaemonSets [Conformance] + test/e2e/apps/daemon_set.go:822 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-apps] Daemon set [Serial] + test/e2e/framework/framework.go:186 + STEP: Creating a kubernetes client 03/07/23 03:31:18.873 + Mar 7 03:31:18.873: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 + STEP: Building a namespace api object, basename daemonsets 03/07/23 03:31:18.874 + STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 03:31:18.888 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 03:31:18.89 + [BeforeEach] [sig-apps] Daemon set [Serial] + test/e2e/apps/daemon_set.go:145 + [It] should list and delete a collection of DaemonSets [Conformance] + test/e2e/apps/daemon_set.go:822 + STEP: Creating simple DaemonSet "daemon-set" 03/07/23 03:31:18.906 + STEP: Check that daemon pods launch on every node of the cluster. 03/07/23 03:31:18.909 + Mar 7 03:31:18.915: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 + Mar 7 03:31:18.915: INFO: Node bootstrap is running 0 daemon pod, expected 1 + Mar 7 03:31:19.922: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 1 + Mar 7 03:31:19.922: INFO: Node bootstrap is running 0 daemon pod, expected 1 + Mar 7 03:31:20.922: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 3 + Mar 7 03:31:20.922: INFO: Number of running nodes: 3, number of available pods: 3 in daemonset daemon-set + STEP: listing all DeamonSets 03/07/23 03:31:20.924 + STEP: DeleteCollection of the DaemonSets 03/07/23 03:31:20.927 + STEP: Verify that ReplicaSets have been deleted 03/07/23 03:31:20.932 + [AfterEach] [sig-apps] Daemon set [Serial] + test/e2e/apps/daemon_set.go:110 + Mar 7 03:31:20.939: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"resourceVersion":"58688"},"items":null} + + Mar 7 03:31:20.946: INFO: pods: 
{"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"58688"},"items":[{"metadata":{"name":"daemon-set-fxfh5","generateName":"daemon-set-","namespace":"daemonsets-3563","uid":"5daf3846-3428-48d0-81af-ca665e4dd6ca","resourceVersion":"58677","creationTimestamp":"2023-03-07T03:31:18Z","labels":{"controller-revision-hash":"7f7ffb4fcc","daemonset-name":"daemon-set","pod-template-generation":"1"},"annotations":{"cni.projectcalico.org/containerID":"ea4217f70bc96721b29addc29cce94dd93857c6a529da29eb13db4b9bbfe5e18","cni.projectcalico.org/podIP":"10.233.84.142/32","cni.projectcalico.org/podIPs":"10.233.84.142/32"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"daemon-set","uid":"fd6f4027-20bb-4772-8d0f-8a61692166a1","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-03-07T03:31:18Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:daemonset-name":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"fd6f4027-20bb-4772-8d0f-8a61692166a1\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"app\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:ports":{".":{},"k:{\"containerPort\":9376,\"protocol\":\"TCP\"}":{".":{},"f:containerPort":{},"f:protocol":{}}},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{},"f:tolerations":{}}}},{"manager":"calico","operation":"Update","apiVersion":"v1","time":"2023-03-07T03:31:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:cni.projectcalico.org/containerID":{},"f:cni.projectcalico.org/podIP":{},"f:cni.projectcalico.org/podIPs":{}}}},"subresource":"status"},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-03-07T03:31:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.233.84.142\"}":{".":{},"f:ip":{}}},"f:startTime":{}}},"subresource":"status"}]},"spec":{"volumes":[{"name":"kube-api-access-625jh","projected":{"sources":[{"serviceAccountToken":{"expirationSeconds":3607,"path":"token"}},{"configMap":{"name":"kube-root-ca.crt","items":[{"key":"ca.crt","path":"ca.crt"}]}},{"downwardAPI":{"items":[{"path":"namespace","fieldRef":{"apiVersion":"v1","fieldPath":"metadata.namespace"}}]}}],"defaultMode":420}}],"containers":[{"name":"app","image":"registry.k8s.io/e2e-test-images/httpd:2.4.38-2","ports":[{"containerPort":9376,"protocol":"TCP"}],"resources":{},"volumeMounts":[{"name":"kube-api-access-625jh","readOnly":true,"mountPath":"/var/run/secrets/kubernetes.io/serviceaccount"}],"terminationMessagePath":"/dev/termination-log","terminationMessagePolicy":"File","imagePullPolicy":"IfNotPresent","securityContext":{}}],"restartPolicy":"Always","ter
minationGracePeriodSeconds":30,"dnsPolicy":"ClusterFirst","serviceAccountName":"default","serviceAccount":"default","nodeName":"node-1","securityContext":{},"affinity":{"nodeAffinity":{"requiredDuringSchedulingIgnoredDuringExecution":{"nodeSelectorTerms":[{"matchFields":[{"key":"metadata.name","operator":"In","values":["node-1"]}]}]}}},"schedulerName":"default-scheduler","tolerations":[{"key":"node.kubernetes.io/not-ready","operator":"Exists","effect":"NoExecute"},{"key":"node.kubernetes.io/unreachable","operator":"Exists","effect":"NoExecute"},{"key":"node.kubernetes.io/disk-pressure","operator":"Exists","effect":"NoSchedule"},{"key":"node.kubernetes.io/memory-pressure","operator":"Exists","effect":"NoSchedule"},{"key":"node.kubernetes.io/pid-pressure","operator":"Exists","effect":"NoSchedule"},{"key":"node.kubernetes.io/unschedulable","operator":"Exists","effect":"NoSchedule"}],"priority":0,"enableServiceLinks":true,"preemptionPolicy":"PreemptLowerPriority"},"status":{"phase":"Running","conditions":[{"type":"Initialized","status":"True","lastProbeTime":null,"lastTransitionTime":"2023-03-07T03:31:18Z"},{"type":"Ready","status":"True","lastProbeTime":null,"lastTransitionTime":"2023-03-07T03:31:19Z"},{"type":"ContainersReady","status":"True","lastProbeTime":null,"lastTransitionTime":"2023-03-07T03:31:19Z"},{"type":"PodScheduled","status":"True","lastProbeTime":null,"lastTransitionTime":"2023-03-07T03:31:18Z"}],"hostIP":"192.168.1.101","podIP":"10.233.84.142","podIPs":[{"ip":"10.233.84.142"}],"startTime":"2023-03-07T03:31:18Z","containerStatuses":[{"name":"app","state":{"running":{"startedAt":"2023-03-07T03:31:19Z"}},"lastState":{},"ready":true,"restartCount":0,"image":"registry.k8s.io/e2e-test-images/httpd:2.4.38-2","imageID":"registry.k8s.io/e2e-test-images/httpd@sha256:1b9d1b2f36cb2dbee1960e82a9344aeb11bd4c4c03abf5e1853e0559c23855e3","containerID":"containerd://41c459c1c1379d125ccbc2948db7da072b4addc5bbd3ffe69cf2edb9fb17be73","started":true}],"qosClass":"BestEffort"}},{"metadata":{"name":"daemon-set-gmdvn","generateName":"daemon-set-","namespace":"daemonsets-3563","uid":"91bb387f-159a-4d18-a0d8-d7e8f75a5e19","resourceVersion":"58680","creationTimestamp":"2023-03-07T03:31:18Z","labels":{"controller-revision-hash":"7f7ffb4fcc","daemonset-name":"daemon-set","pod-template-generation":"1"},"annotations":{"cni.projectcalico.org/containerID":"d8248b3c99c386c0885611e7709a910d25dc55f77830d8c199cc140ecaddfbea","cni.projectcalico.org/podIP":"10.233.247.30/32","cni.projectcalico.org/podIPs":"10.233.247.30/32"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"daemon-set","uid":"fd6f4027-20bb-4772-8d0f-8a61692166a1","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-03-07T03:31:18Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:daemonset-name":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"fd6f4027-20bb-4772-8d0f-8a61692166a1\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"app\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:ports":{".":{},"k:{\"containerPort\":9376,\"protocol\":\"TCP\"}":{".":{},"f:containerPort":{},"f:protocol":{}}},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enabl
eServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{},"f:tolerations":{}}}},{"manager":"calico","operation":"Update","apiVersion":"v1","time":"2023-03-07T03:31:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:cni.projectcalico.org/containerID":{},"f:cni.projectcalico.org/podIP":{},"f:cni.projectcalico.org/podIPs":{}}}},"subresource":"status"},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-03-07T03:31:20Z","fieldsType":"FieldsV1","fieldsV1":{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.233.247.30\"}":{".":{},"f:ip":{}}},"f:startTime":{}}},"subresource":"status"}]},"spec":{"volumes":[{"name":"kube-api-access-627wf","projected":{"sources":[{"serviceAccountToken":{"expirationSeconds":3607,"path":"token"}},{"configMap":{"name":"kube-root-ca.crt","items":[{"key":"ca.crt","path":"ca.crt"}]}},{"downwardAPI":{"items":[{"path":"namespace","fieldRef":{"apiVersion":"v1","fieldPath":"metadata.namespace"}}]}}],"defaultMode":420}}],"containers":[{"name":"app","image":"registry.k8s.io/e2e-test-images/httpd:2.4.38-2","ports":[{"containerPort":9376,"protocol":"TCP"}],"resources":{},"volumeMounts":[{"name":"kube-api-access-627wf","readOnly":true,"mountPath":"/var/run/secrets/kubernetes.io/serviceaccount"}],"terminationMessagePath":"/dev/termination-log","terminationMessagePolicy":"File","imagePullPolicy":"IfNotPresent","securityContext":{}}],"restartPolicy":"Always","terminationGracePeriodSeconds":30,"dnsPolicy":"ClusterFirst","serviceAccountName":"default","serviceAccount":"default","nodeName":"node-2","securityContext":{},"affinity":{"nodeAffinity":{"requiredDuringSchedulingIgnoredDuringExecution":{"nodeSelectorTerms":[{"matchFields":[{"key":"metadata.name","operator":"In","values":["node-2"]}]}]}}},"schedulerName":"default-scheduler","tolerations":[{"key":"node.kubernetes.io/not-ready","operator":"Exists","effect":"NoExecute"},{"key":"node.kubernetes.io/unreachable","operator":"Exists","effect":"NoExecute"},{"key":"node.kubernetes.io/disk-pressure","operator":"Exists","effect":"NoSchedule"},{"key":"node.kubernetes.io/memory-pressure","operator":"Exists","effect":"NoSchedule"},{"key":"node.kubernetes.io/pid-pressure","operator":"Exists","effect":"NoSchedule"},{"key":"node.kubernetes.io/unschedulable","operator":"Exists","effect":"NoSchedule"}],"priority":0,"enableServiceLinks":true,"preemptionPolicy":"PreemptLowerPriority"},"status":{"phase":"Running","conditions":[{"type":"Initialized","status":"True","lastProbeTime":null,"lastTransitionTime":"2023-03-07T03:31:18Z"},{"type":"Ready","status":"True","lastProbeTime":null,"lastTransitionTime":"2023-03-07T03:31:20Z"},{"type":"ContainersReady","status":"True","lastProbeTime":null,"lastTransitionTime":"2023-03-07T03:31:20Z"},{"type":"PodScheduled","status":"True","lastProbeTime":null,"lastTransitionTime":"2023-03-07T03:31:18Z"}],"hostIP":"192.168.1.102","podIP":"10.233.247.30","podIPs":[{"ip":"10.233.247.30"}],"startTime":"2023-03-07T03:31:18Z","containerStatuses":[{"name":"app","state":{"running":{"startedAt":"2023-03-07T03:31
:19Z"}},"lastState":{},"ready":true,"restartCount":0,"image":"registry.k8s.io/e2e-test-images/httpd:2.4.38-2","imageID":"registry.k8s.io/e2e-test-images/httpd@sha256:1b9d1b2f36cb2dbee1960e82a9344aeb11bd4c4c03abf5e1853e0559c23855e3","containerID":"containerd://8e78c6d8da0ba0c8fff4ec8ed29ecdefadd441bd74f2d7568397fa79d713d960","started":true}],"qosClass":"BestEffort"}},{"metadata":{"name":"daemon-set-pt78s","generateName":"daemon-set-","namespace":"daemonsets-3563","uid":"2b909bfb-1023-41be-b32e-b9a1b4e3265d","resourceVersion":"58682","creationTimestamp":"2023-03-07T03:31:18Z","labels":{"controller-revision-hash":"7f7ffb4fcc","daemonset-name":"daemon-set","pod-template-generation":"1"},"annotations":{"cni.projectcalico.org/containerID":"43caabb95819c25c335cb8e47b7b49a395978eb0ebffdd6baa369b0b5852704c","cni.projectcalico.org/podIP":"10.233.132.118/32","cni.projectcalico.org/podIPs":"10.233.132.118/32"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"daemon-set","uid":"fd6f4027-20bb-4772-8d0f-8a61692166a1","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-03-07T03:31:18Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:daemonset-name":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"fd6f4027-20bb-4772-8d0f-8a61692166a1\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"app\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:ports":{".":{},"k:{\"containerPort\":9376,\"protocol\":\"TCP\"}":{".":{},"f:containerPort":{},"f:protocol":{}}},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{},"f:tolerations":{}}}},{"manager":"calico","operation":"Update","apiVersion":"v1","time":"2023-03-07T03:31:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:cni.projectcalico.org/containerID":{},"f:cni.projectcalico.org/podIP":{},"f:cni.projectcalico.org/podIPs":{}}}},"subresource":"status"},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-03-07T03:31:20Z","fieldsType":"FieldsV1","fieldsV1":{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.233.132.118\"}":{".":{},"f:ip":{}}},"f:startTime":{}}},"subresource":"status"}]},"spec":{"volumes":[{"name":"kube-api-access-tdk2t","projected":{"sources":[{"serviceAccountToken":{"expirationSeconds":3607,"path":"token"}},{"configMap":{"name":"kube-root-ca.crt","items":[{"key":"ca.crt","path":"ca.crt"}]}},{"downwardAPI":{"items":[{"path":"namespace","fieldRef":{"apiVersion":"v1","fieldPath":"metadata.namespace"}}]}}],"defaultMode":420}}],"containers":[{"name":"app","image":"registry.k8s.io/e2e-test-images/httpd:2.4.38-2","ports":[{"containerPort":9376,"protocol":"TCP"}],"resources":{},"volu
meMounts":[{"name":"kube-api-access-tdk2t","readOnly":true,"mountPath":"/var/run/secrets/kubernetes.io/serviceaccount"}],"terminationMessagePath":"/dev/termination-log","terminationMessagePolicy":"File","imagePullPolicy":"IfNotPresent","securityContext":{}}],"restartPolicy":"Always","terminationGracePeriodSeconds":30,"dnsPolicy":"ClusterFirst","serviceAccountName":"default","serviceAccount":"default","nodeName":"bootstrap","securityContext":{},"affinity":{"nodeAffinity":{"requiredDuringSchedulingIgnoredDuringExecution":{"nodeSelectorTerms":[{"matchFields":[{"key":"metadata.name","operator":"In","values":["bootstrap"]}]}]}}},"schedulerName":"default-scheduler","tolerations":[{"key":"node.kubernetes.io/not-ready","operator":"Exists","effect":"NoExecute"},{"key":"node.kubernetes.io/unreachable","operator":"Exists","effect":"NoExecute"},{"key":"node.kubernetes.io/disk-pressure","operator":"Exists","effect":"NoSchedule"},{"key":"node.kubernetes.io/memory-pressure","operator":"Exists","effect":"NoSchedule"},{"key":"node.kubernetes.io/pid-pressure","operator":"Exists","effect":"NoSchedule"},{"key":"node.kubernetes.io/unschedulable","operator":"Exists","effect":"NoSchedule"}],"priority":0,"enableServiceLinks":true,"preemptionPolicy":"PreemptLowerPriority"},"status":{"phase":"Running","conditions":[{"type":"Initialized","status":"True","lastProbeTime":null,"lastTransitionTime":"2023-03-07T03:31:18Z"},{"type":"Ready","status":"True","lastProbeTime":null,"lastTransitionTime":"2023-03-07T03:31:20Z"},{"type":"ContainersReady","status":"True","lastProbeTime":null,"lastTransitionTime":"2023-03-07T03:31:20Z"},{"type":"PodScheduled","status":"True","lastProbeTime":null,"lastTransitionTime":"2023-03-07T03:31:18Z"}],"hostIP":"192.168.1.100","podIP":"10.233.132.118","podIPs":[{"ip":"10.233.132.118"}],"startTime":"2023-03-07T03:31:18Z","containerStatuses":[{"name":"app","state":{"running":{"startedAt":"2023-03-07T03:31:19Z"}},"lastState":{},"ready":true,"restartCount":0,"image":"registry.k8s.io/e2e-test-images/httpd:2.4.38-2","imageID":"registry.k8s.io/e2e-test-images/httpd@sha256:1b9d1b2f36cb2dbee1960e82a9344aeb11bd4c4c03abf5e1853e0559c23855e3","containerID":"containerd://d4553a5db8a95013e3b9dafa315615de95ce8d66b6c5869b08d55856128a4c07","started":true}],"qosClass":"BestEffort"}}]} + + [AfterEach] [sig-apps] Daemon set [Serial] + test/e2e/framework/framework.go:187 + Mar 7 03:31:20.967: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + STEP: Destroying namespace "daemonsets-3563" for this suite. 
03/07/23 03:31:20.97 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-network] Services + should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance] + test/e2e/network/service.go:2221 +[BeforeEach] [sig-network] Services + test/e2e/framework/framework.go:186 +STEP: Creating a kubernetes client 03/07/23 03:31:20.978 +Mar 7 03:31:20.978: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 +STEP: Building a namespace api object, basename services 03/07/23 03:31:20.981 +STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 03:31:20.995 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 03:31:20.998 +[BeforeEach] [sig-network] Services + test/e2e/network/service.go:758 +[It] should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance] + test/e2e/network/service.go:2221 +STEP: creating service in namespace services-92 03/07/23 03:31:21.002 +Mar 7 03:31:21.008: INFO: Waiting up to 5m0s for pod "kube-proxy-mode-detector" in namespace "services-92" to be "running and ready" +Mar 7 03:31:21.012: INFO: Pod "kube-proxy-mode-detector": Phase="Pending", Reason="", readiness=false. Elapsed: 4.126218ms +Mar 7 03:31:21.012: INFO: The phase of Pod kube-proxy-mode-detector is Pending, waiting for it to be Running (with Ready = true) +Mar 7 03:31:23.016: INFO: Pod "kube-proxy-mode-detector": Phase="Running", Reason="", readiness=true. Elapsed: 2.008284054s +Mar 7 03:31:23.016: INFO: The phase of Pod kube-proxy-mode-detector is Running (Ready = true) +Mar 7 03:31:23.016: INFO: Pod "kube-proxy-mode-detector" satisfied condition "running and ready" +Mar 7 03:31:23.018: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1093879902 --namespace=services-92 exec kube-proxy-mode-detector -- /bin/sh -x -c curl -q -s --connect-timeout 1 http://localhost:10249/proxyMode' +Mar 7 03:31:23.210: INFO: rc: 7 +Mar 7 03:31:23.220: INFO: Waiting for pod kube-proxy-mode-detector to disappear +Mar 7 03:31:23.228: INFO: Pod kube-proxy-mode-detector no longer exists +Mar 7 03:31:23.228: INFO: Couldn't detect KubeProxy mode - test failure may be expected: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1093879902 --namespace=services-92 exec kube-proxy-mode-detector -- /bin/sh -x -c curl -q -s --connect-timeout 1 http://localhost:10249/proxyMode: +Command stdout: + +stderr: ++ curl -q -s --connect-timeout 1 http://localhost:10249/proxyMode +command terminated with exit code 7 + +error: +exit status 7 +STEP: creating service affinity-nodeport-timeout in namespace services-92 03/07/23 03:31:23.228 +STEP: creating replication controller affinity-nodeport-timeout in namespace services-92 03/07/23 03:31:23.251 +I0307 03:31:23.262801 22 runners.go:193] Created replication controller with name: affinity-nodeport-timeout, namespace: services-92, replica count: 3 +I0307 03:31:26.313707 22 runners.go:193] affinity-nodeport-timeout Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady +Mar 7 03:31:26.321: INFO: Creating new exec pod +Mar 7 03:31:26.326: INFO: Waiting up to 5m0s for pod "execpod-affinity2xm9c" in namespace "services-92" to be "running" +Mar 7 03:31:26.331: INFO: Pod "execpod-affinity2xm9c": Phase="Pending", Reason="", readiness=false. 
Elapsed: 4.370852ms +Mar 7 03:31:28.334: INFO: Pod "execpod-affinity2xm9c": Phase="Running", Reason="", readiness=true. Elapsed: 2.008266496s +Mar 7 03:31:28.334: INFO: Pod "execpod-affinity2xm9c" satisfied condition "running" +Mar 7 03:31:29.339: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1093879902 --namespace=services-92 exec execpod-affinity2xm9c -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport-timeout 80' +Mar 7 03:31:29.518: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 affinity-nodeport-timeout 80\nConnection to affinity-nodeport-timeout 80 port [tcp/http] succeeded!\n" +Mar 7 03:31:29.518: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request" +Mar 7 03:31:29.518: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1093879902 --namespace=services-92 exec execpod-affinity2xm9c -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.108.52.145 80' +Mar 7 03:31:29.693: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 10.108.52.145 80\nConnection to 10.108.52.145 80 port [tcp/http] succeeded!\n" +Mar 7 03:31:29.693: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request" +Mar 7 03:31:29.693: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1093879902 --namespace=services-92 exec execpod-affinity2xm9c -- /bin/sh -x -c echo hostName | nc -v -t -w 2 192.168.1.101 30739' +Mar 7 03:31:29.874: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 192.168.1.101 30739\nConnection to 192.168.1.101 30739 port [tcp/*] succeeded!\n" +Mar 7 03:31:29.874: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request" +Mar 7 03:31:29.874: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1093879902 --namespace=services-92 exec execpod-affinity2xm9c -- /bin/sh -x -c echo hostName | nc -v -t -w 2 192.168.1.102 30739' +Mar 7 03:31:30.060: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 192.168.1.102 30739\nConnection to 192.168.1.102 30739 port [tcp/*] succeeded!\n" +Mar 7 03:31:30.060: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request" +Mar 7 03:31:30.060: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1093879902 --namespace=services-92 exec execpod-affinity2xm9c -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://192.168.1.100:30739/ ; done' +Mar 7 03:31:30.317: INFO: stderr: "+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://192.168.1.100:30739/\n+ echo\n+ curl -q -s --connect-timeout 2 http://192.168.1.100:30739/\n+ echo\n+ curl -q -s --connect-timeout 2 http://192.168.1.100:30739/\n+ echo\n+ curl -q -s --connect-timeout 2 http://192.168.1.100:30739/\n+ echo\n+ curl -q -s --connect-timeout 2 http://192.168.1.100:30739/\n+ echo\n+ curl -q -s --connect-timeout 2 http://192.168.1.100:30739/\n+ echo\n+ curl -q -s --connect-timeout 2 http://192.168.1.100:30739/\n+ echo\n+ curl -q -s --connect-timeout 2 http://192.168.1.100:30739/\n+ echo\n+ curl -q -s --connect-timeout 2 http://192.168.1.100:30739/\n+ echo\n+ curl -q -s --connect-timeout 2 http://192.168.1.100:30739/\n+ echo\n+ curl -q -s --connect-timeout 2 http://192.168.1.100:30739/\n+ echo\n+ curl -q -s --connect-timeout 2 http://192.168.1.100:30739/\n+ echo\n+ curl -q -s --connect-timeout 2 http://192.168.1.100:30739/\n+ 
echo\n+ curl -q -s --connect-timeout 2 http://192.168.1.100:30739/\n+ echo\n+ curl -q -s --connect-timeout 2 http://192.168.1.100:30739/\n+ echo\n+ curl -q -s --connect-timeout 2 http://192.168.1.100:30739/\n" +Mar 7 03:31:30.317: INFO: stdout: "\naffinity-nodeport-timeout-mn9nb\naffinity-nodeport-timeout-mn9nb\naffinity-nodeport-timeout-mn9nb\naffinity-nodeport-timeout-mn9nb\naffinity-nodeport-timeout-mn9nb\naffinity-nodeport-timeout-mn9nb\naffinity-nodeport-timeout-mn9nb\naffinity-nodeport-timeout-mn9nb\naffinity-nodeport-timeout-mn9nb\naffinity-nodeport-timeout-mn9nb\naffinity-nodeport-timeout-mn9nb\naffinity-nodeport-timeout-mn9nb\naffinity-nodeport-timeout-mn9nb\naffinity-nodeport-timeout-mn9nb\naffinity-nodeport-timeout-mn9nb\naffinity-nodeport-timeout-mn9nb" +Mar 7 03:31:30.317: INFO: Received response from host: affinity-nodeport-timeout-mn9nb +Mar 7 03:31:30.317: INFO: Received response from host: affinity-nodeport-timeout-mn9nb +Mar 7 03:31:30.317: INFO: Received response from host: affinity-nodeport-timeout-mn9nb +Mar 7 03:31:30.317: INFO: Received response from host: affinity-nodeport-timeout-mn9nb +Mar 7 03:31:30.317: INFO: Received response from host: affinity-nodeport-timeout-mn9nb +Mar 7 03:31:30.317: INFO: Received response from host: affinity-nodeport-timeout-mn9nb +Mar 7 03:31:30.317: INFO: Received response from host: affinity-nodeport-timeout-mn9nb +Mar 7 03:31:30.317: INFO: Received response from host: affinity-nodeport-timeout-mn9nb +Mar 7 03:31:30.317: INFO: Received response from host: affinity-nodeport-timeout-mn9nb +Mar 7 03:31:30.317: INFO: Received response from host: affinity-nodeport-timeout-mn9nb +Mar 7 03:31:30.317: INFO: Received response from host: affinity-nodeport-timeout-mn9nb +Mar 7 03:31:30.317: INFO: Received response from host: affinity-nodeport-timeout-mn9nb +Mar 7 03:31:30.317: INFO: Received response from host: affinity-nodeport-timeout-mn9nb +Mar 7 03:31:30.317: INFO: Received response from host: affinity-nodeport-timeout-mn9nb +Mar 7 03:31:30.317: INFO: Received response from host: affinity-nodeport-timeout-mn9nb +Mar 7 03:31:30.317: INFO: Received response from host: affinity-nodeport-timeout-mn9nb +Mar 7 03:31:30.318: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1093879902 --namespace=services-92 exec execpod-affinity2xm9c -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://192.168.1.100:30739/' +Mar 7 03:31:30.513: INFO: stderr: "+ curl -q -s --connect-timeout 2 http://192.168.1.100:30739/\n" +Mar 7 03:31:30.513: INFO: stdout: "affinity-nodeport-timeout-mn9nb" +Mar 7 03:31:50.514: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1093879902 --namespace=services-92 exec execpod-affinity2xm9c -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://192.168.1.100:30739/' +Mar 7 03:31:50.698: INFO: stderr: "+ curl -q -s --connect-timeout 2 http://192.168.1.100:30739/\n" +Mar 7 03:31:50.698: INFO: stdout: "affinity-nodeport-timeout-qppfp" +Mar 7 03:31:50.698: INFO: Cleaning up the exec pod +STEP: deleting ReplicationController affinity-nodeport-timeout in namespace services-92, will wait for the garbage collector to delete the pods 03/07/23 03:31:50.709 +Mar 7 03:31:50.770: INFO: Deleting ReplicationController affinity-nodeport-timeout took: 5.910156ms +Mar 7 03:31:50.872: INFO: Terminating ReplicationController affinity-nodeport-timeout pods took: 102.338461ms +[AfterEach] [sig-network] Services + test/e2e/framework/framework.go:187 +Mar 7 03:31:52.995: INFO: Waiting up to 3m0s for all (but 0) nodes to be 
ready +STEP: Destroying namespace "services-92" for this suite. 03/07/23 03:31:52.999 +[AfterEach] [sig-network] Services + test/e2e/network/service.go:762 +{"msg":"PASSED [sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","completed":160,"skipped":2672,"failed":0} +------------------------------ +• [SLOW TEST] [32.028 seconds] +[sig-network] Services +test/e2e/network/common/framework.go:23 + should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance] + test/e2e/network/service.go:2221 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-network] Services + test/e2e/framework/framework.go:186 + STEP: Creating a kubernetes client 03/07/23 03:31:20.978 + Mar 7 03:31:20.978: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 + STEP: Building a namespace api object, basename services 03/07/23 03:31:20.981 + STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 03:31:20.995 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 03:31:20.998 + [BeforeEach] [sig-network] Services + test/e2e/network/service.go:758 + [It] should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance] + test/e2e/network/service.go:2221 + STEP: creating service in namespace services-92 03/07/23 03:31:21.002 + Mar 7 03:31:21.008: INFO: Waiting up to 5m0s for pod "kube-proxy-mode-detector" in namespace "services-92" to be "running and ready" + Mar 7 03:31:21.012: INFO: Pod "kube-proxy-mode-detector": Phase="Pending", Reason="", readiness=false. Elapsed: 4.126218ms + Mar 7 03:31:21.012: INFO: The phase of Pod kube-proxy-mode-detector is Pending, waiting for it to be Running (with Ready = true) + Mar 7 03:31:23.016: INFO: Pod "kube-proxy-mode-detector": Phase="Running", Reason="", readiness=true. 
Elapsed: 2.008284054s + Mar 7 03:31:23.016: INFO: The phase of Pod kube-proxy-mode-detector is Running (Ready = true) + Mar 7 03:31:23.016: INFO: Pod "kube-proxy-mode-detector" satisfied condition "running and ready" + Mar 7 03:31:23.018: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1093879902 --namespace=services-92 exec kube-proxy-mode-detector -- /bin/sh -x -c curl -q -s --connect-timeout 1 http://localhost:10249/proxyMode' + Mar 7 03:31:23.210: INFO: rc: 7 + Mar 7 03:31:23.220: INFO: Waiting for pod kube-proxy-mode-detector to disappear + Mar 7 03:31:23.228: INFO: Pod kube-proxy-mode-detector no longer exists + Mar 7 03:31:23.228: INFO: Couldn't detect KubeProxy mode - test failure may be expected: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1093879902 --namespace=services-92 exec kube-proxy-mode-detector -- /bin/sh -x -c curl -q -s --connect-timeout 1 http://localhost:10249/proxyMode: + Command stdout: + + stderr: + + curl -q -s --connect-timeout 1 http://localhost:10249/proxyMode + command terminated with exit code 7 + + error: + exit status 7 + STEP: creating service affinity-nodeport-timeout in namespace services-92 03/07/23 03:31:23.228 + STEP: creating replication controller affinity-nodeport-timeout in namespace services-92 03/07/23 03:31:23.251 + I0307 03:31:23.262801 22 runners.go:193] Created replication controller with name: affinity-nodeport-timeout, namespace: services-92, replica count: 3 + I0307 03:31:26.313707 22 runners.go:193] affinity-nodeport-timeout Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady + Mar 7 03:31:26.321: INFO: Creating new exec pod + Mar 7 03:31:26.326: INFO: Waiting up to 5m0s for pod "execpod-affinity2xm9c" in namespace "services-92" to be "running" + Mar 7 03:31:26.331: INFO: Pod "execpod-affinity2xm9c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.370852ms + Mar 7 03:31:28.334: INFO: Pod "execpod-affinity2xm9c": Phase="Running", Reason="", readiness=true. 
Elapsed: 2.008266496s + Mar 7 03:31:28.334: INFO: Pod "execpod-affinity2xm9c" satisfied condition "running" + Mar 7 03:31:29.339: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1093879902 --namespace=services-92 exec execpod-affinity2xm9c -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport-timeout 80' + Mar 7 03:31:29.518: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 affinity-nodeport-timeout 80\nConnection to affinity-nodeport-timeout 80 port [tcp/http] succeeded!\n" + Mar 7 03:31:29.518: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request" + Mar 7 03:31:29.518: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1093879902 --namespace=services-92 exec execpod-affinity2xm9c -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.108.52.145 80' + Mar 7 03:31:29.693: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 10.108.52.145 80\nConnection to 10.108.52.145 80 port [tcp/http] succeeded!\n" + Mar 7 03:31:29.693: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request" + Mar 7 03:31:29.693: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1093879902 --namespace=services-92 exec execpod-affinity2xm9c -- /bin/sh -x -c echo hostName | nc -v -t -w 2 192.168.1.101 30739' + Mar 7 03:31:29.874: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 192.168.1.101 30739\nConnection to 192.168.1.101 30739 port [tcp/*] succeeded!\n" + Mar 7 03:31:29.874: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request" + Mar 7 03:31:29.874: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1093879902 --namespace=services-92 exec execpod-affinity2xm9c -- /bin/sh -x -c echo hostName | nc -v -t -w 2 192.168.1.102 30739' + Mar 7 03:31:30.060: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 192.168.1.102 30739\nConnection to 192.168.1.102 30739 port [tcp/*] succeeded!\n" + Mar 7 03:31:30.060: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request" + Mar 7 03:31:30.060: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1093879902 --namespace=services-92 exec execpod-affinity2xm9c -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://192.168.1.100:30739/ ; done' + Mar 7 03:31:30.317: INFO: stderr: "+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://192.168.1.100:30739/\n+ echo\n+ curl -q -s --connect-timeout 2 http://192.168.1.100:30739/\n+ echo\n+ curl -q -s --connect-timeout 2 http://192.168.1.100:30739/\n+ echo\n+ curl -q -s --connect-timeout 2 http://192.168.1.100:30739/\n+ echo\n+ curl -q -s --connect-timeout 2 http://192.168.1.100:30739/\n+ echo\n+ curl -q -s --connect-timeout 2 http://192.168.1.100:30739/\n+ echo\n+ curl -q -s --connect-timeout 2 http://192.168.1.100:30739/\n+ echo\n+ curl -q -s --connect-timeout 2 http://192.168.1.100:30739/\n+ echo\n+ curl -q -s --connect-timeout 2 http://192.168.1.100:30739/\n+ echo\n+ curl -q -s --connect-timeout 2 http://192.168.1.100:30739/\n+ echo\n+ curl -q -s --connect-timeout 2 http://192.168.1.100:30739/\n+ echo\n+ curl -q -s --connect-timeout 2 http://192.168.1.100:30739/\n+ echo\n+ curl -q -s --connect-timeout 2 http://192.168.1.100:30739/\n+ echo\n+ curl -q -s --connect-timeout 2 http://192.168.1.100:30739/\n+ echo\n+ curl -q -s --connect-timeout 2 
http://192.168.1.100:30739/\n+ echo\n+ curl -q -s --connect-timeout 2 http://192.168.1.100:30739/\n" + Mar 7 03:31:30.317: INFO: stdout: "\naffinity-nodeport-timeout-mn9nb\naffinity-nodeport-timeout-mn9nb\naffinity-nodeport-timeout-mn9nb\naffinity-nodeport-timeout-mn9nb\naffinity-nodeport-timeout-mn9nb\naffinity-nodeport-timeout-mn9nb\naffinity-nodeport-timeout-mn9nb\naffinity-nodeport-timeout-mn9nb\naffinity-nodeport-timeout-mn9nb\naffinity-nodeport-timeout-mn9nb\naffinity-nodeport-timeout-mn9nb\naffinity-nodeport-timeout-mn9nb\naffinity-nodeport-timeout-mn9nb\naffinity-nodeport-timeout-mn9nb\naffinity-nodeport-timeout-mn9nb\naffinity-nodeport-timeout-mn9nb" + Mar 7 03:31:30.317: INFO: Received response from host: affinity-nodeport-timeout-mn9nb + Mar 7 03:31:30.317: INFO: Received response from host: affinity-nodeport-timeout-mn9nb + Mar 7 03:31:30.317: INFO: Received response from host: affinity-nodeport-timeout-mn9nb + Mar 7 03:31:30.317: INFO: Received response from host: affinity-nodeport-timeout-mn9nb + Mar 7 03:31:30.317: INFO: Received response from host: affinity-nodeport-timeout-mn9nb + Mar 7 03:31:30.317: INFO: Received response from host: affinity-nodeport-timeout-mn9nb + Mar 7 03:31:30.317: INFO: Received response from host: affinity-nodeport-timeout-mn9nb + Mar 7 03:31:30.317: INFO: Received response from host: affinity-nodeport-timeout-mn9nb + Mar 7 03:31:30.317: INFO: Received response from host: affinity-nodeport-timeout-mn9nb + Mar 7 03:31:30.317: INFO: Received response from host: affinity-nodeport-timeout-mn9nb + Mar 7 03:31:30.317: INFO: Received response from host: affinity-nodeport-timeout-mn9nb + Mar 7 03:31:30.317: INFO: Received response from host: affinity-nodeport-timeout-mn9nb + Mar 7 03:31:30.317: INFO: Received response from host: affinity-nodeport-timeout-mn9nb + Mar 7 03:31:30.317: INFO: Received response from host: affinity-nodeport-timeout-mn9nb + Mar 7 03:31:30.317: INFO: Received response from host: affinity-nodeport-timeout-mn9nb + Mar 7 03:31:30.317: INFO: Received response from host: affinity-nodeport-timeout-mn9nb + Mar 7 03:31:30.318: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1093879902 --namespace=services-92 exec execpod-affinity2xm9c -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://192.168.1.100:30739/' + Mar 7 03:31:30.513: INFO: stderr: "+ curl -q -s --connect-timeout 2 http://192.168.1.100:30739/\n" + Mar 7 03:31:30.513: INFO: stdout: "affinity-nodeport-timeout-mn9nb" + Mar 7 03:31:50.514: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1093879902 --namespace=services-92 exec execpod-affinity2xm9c -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://192.168.1.100:30739/' + Mar 7 03:31:50.698: INFO: stderr: "+ curl -q -s --connect-timeout 2 http://192.168.1.100:30739/\n" + Mar 7 03:31:50.698: INFO: stdout: "affinity-nodeport-timeout-qppfp" + Mar 7 03:31:50.698: INFO: Cleaning up the exec pod + STEP: deleting ReplicationController affinity-nodeport-timeout in namespace services-92, will wait for the garbage collector to delete the pods 03/07/23 03:31:50.709 + Mar 7 03:31:50.770: INFO: Deleting ReplicationController affinity-nodeport-timeout took: 5.910156ms + Mar 7 03:31:50.872: INFO: Terminating ReplicationController affinity-nodeport-timeout pods took: 102.338461ms + [AfterEach] [sig-network] Services + test/e2e/framework/framework.go:187 + Mar 7 03:31:52.995: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + STEP: Destroying namespace "services-92" for this suite. 
03/07/23 03:31:52.999 + [AfterEach] [sig-network] Services + test/e2e/network/service.go:762 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-apps] DisruptionController + should update/patch PodDisruptionBudget status [Conformance] + test/e2e/apps/disruption.go:163 +[BeforeEach] [sig-apps] DisruptionController + test/e2e/framework/framework.go:186 +STEP: Creating a kubernetes client 03/07/23 03:31:53.007 +Mar 7 03:31:53.007: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 +STEP: Building a namespace api object, basename disruption 03/07/23 03:31:53.008 +STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 03:31:53.024 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 03:31:53.027 +[BeforeEach] [sig-apps] DisruptionController + test/e2e/apps/disruption.go:71 +[It] should update/patch PodDisruptionBudget status [Conformance] + test/e2e/apps/disruption.go:163 +STEP: Waiting for the pdb to be processed 03/07/23 03:31:53.032 +STEP: Updating PodDisruptionBudget status 03/07/23 03:31:55.041 +STEP: Waiting for all pods to be running 03/07/23 03:31:55.046 +Mar 7 03:31:55.048: INFO: running pods: 0 < 1 +STEP: locating a running pod 03/07/23 03:31:57.052 +STEP: Waiting for the pdb to be processed 03/07/23 03:31:57.06 +STEP: Patching PodDisruptionBudget status 03/07/23 03:31:57.064 +STEP: Waiting for the pdb to be processed 03/07/23 03:31:57.072 +[AfterEach] [sig-apps] DisruptionController + test/e2e/framework/framework.go:187 +Mar 7 03:31:57.075: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "disruption-1420" for this suite. 03/07/23 03:31:57.078 +{"msg":"PASSED [sig-apps] DisruptionController should update/patch PodDisruptionBudget status [Conformance]","completed":161,"skipped":2701,"failed":0} +------------------------------ +• [4.077 seconds] +[sig-apps] DisruptionController +test/e2e/apps/framework.go:23 + should update/patch PodDisruptionBudget status [Conformance] + test/e2e/apps/disruption.go:163 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-apps] DisruptionController + test/e2e/framework/framework.go:186 + STEP: Creating a kubernetes client 03/07/23 03:31:53.007 + Mar 7 03:31:53.007: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 + STEP: Building a namespace api object, basename disruption 03/07/23 03:31:53.008 + STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 03:31:53.024 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 03:31:53.027 + [BeforeEach] [sig-apps] DisruptionController + test/e2e/apps/disruption.go:71 + [It] should update/patch PodDisruptionBudget status [Conformance] + test/e2e/apps/disruption.go:163 + STEP: Waiting for the pdb to be processed 03/07/23 03:31:53.032 + STEP: Updating PodDisruptionBudget status 03/07/23 03:31:55.041 + STEP: Waiting for all pods to be running 03/07/23 03:31:55.046 + Mar 7 03:31:55.048: INFO: running pods: 0 < 1 + STEP: locating a running pod 03/07/23 03:31:57.052 + STEP: Waiting for the pdb to be processed 03/07/23 03:31:57.06 + STEP: Patching PodDisruptionBudget status 03/07/23 03:31:57.064 + STEP: Waiting for the pdb to be processed 03/07/23 03:31:57.072 + [AfterEach] [sig-apps] DisruptionController + test/e2e/framework/framework.go:187 + Mar 7 03:31:57.075: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + STEP: Destroying namespace "disruption-1420" for this suite. 
03/07/23 03:31:57.078 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSSSS +------------------------------ +[sig-storage] Secrets + should be immutable if `immutable` field is set [Conformance] + test/e2e/common/storage/secrets_volume.go:385 +[BeforeEach] [sig-storage] Secrets + test/e2e/framework/framework.go:186 +STEP: Creating a kubernetes client 03/07/23 03:31:57.084 +Mar 7 03:31:57.084: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 +STEP: Building a namespace api object, basename secrets 03/07/23 03:31:57.085 +STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 03:31:57.097 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 03:31:57.099 +[It] should be immutable if `immutable` field is set [Conformance] + test/e2e/common/storage/secrets_volume.go:385 +[AfterEach] [sig-storage] Secrets + test/e2e/framework/framework.go:187 +Mar 7 03:31:57.134: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "secrets-6834" for this suite. 03/07/23 03:31:57.138 +{"msg":"PASSED [sig-storage] Secrets should be immutable if `immutable` field is set [Conformance]","completed":162,"skipped":2714,"failed":0} +------------------------------ +• [0.058 seconds] +[sig-storage] Secrets +test/e2e/common/storage/framework.go:23 + should be immutable if `immutable` field is set [Conformance] + test/e2e/common/storage/secrets_volume.go:385 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-storage] Secrets + test/e2e/framework/framework.go:186 + STEP: Creating a kubernetes client 03/07/23 03:31:57.084 + Mar 7 03:31:57.084: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 + STEP: Building a namespace api object, basename secrets 03/07/23 03:31:57.085 + STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 03:31:57.097 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 03:31:57.099 + [It] should be immutable if `immutable` field is set [Conformance] + test/e2e/common/storage/secrets_volume.go:385 + [AfterEach] [sig-storage] Secrets + test/e2e/framework/framework.go:187 + Mar 7 03:31:57.134: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + STEP: Destroying namespace "secrets-6834" for this suite. 
03/07/23 03:31:57.138 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-node] Downward API + should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] + test/e2e/common/node/downwardapi.go:43 +[BeforeEach] [sig-node] Downward API + test/e2e/framework/framework.go:186 +STEP: Creating a kubernetes client 03/07/23 03:31:57.144 +Mar 7 03:31:57.144: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 +STEP: Building a namespace api object, basename downward-api 03/07/23 03:31:57.144 +STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 03:31:57.16 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 03:31:57.162 +[It] should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] + test/e2e/common/node/downwardapi.go:43 +STEP: Creating a pod to test downward api env vars 03/07/23 03:31:57.164 +Mar 7 03:31:57.173: INFO: Waiting up to 5m0s for pod "downward-api-2cc69cda-aede-4990-b048-b3aaafbd3a41" in namespace "downward-api-5560" to be "Succeeded or Failed" +Mar 7 03:31:57.176: INFO: Pod "downward-api-2cc69cda-aede-4990-b048-b3aaafbd3a41": Phase="Pending", Reason="", readiness=false. Elapsed: 2.443057ms +Mar 7 03:31:59.179: INFO: Pod "downward-api-2cc69cda-aede-4990-b048-b3aaafbd3a41": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006175662s +Mar 7 03:32:01.180: INFO: Pod "downward-api-2cc69cda-aede-4990-b048-b3aaafbd3a41": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.00713613s +STEP: Saw pod success 03/07/23 03:32:01.18 +Mar 7 03:32:01.180: INFO: Pod "downward-api-2cc69cda-aede-4990-b048-b3aaafbd3a41" satisfied condition "Succeeded or Failed" +Mar 7 03:32:01.183: INFO: Trying to get logs from node node-2 pod downward-api-2cc69cda-aede-4990-b048-b3aaafbd3a41 container dapi-container: +STEP: delete the pod 03/07/23 03:32:01.188 +Mar 7 03:32:01.198: INFO: Waiting for pod downward-api-2cc69cda-aede-4990-b048-b3aaafbd3a41 to disappear +Mar 7 03:32:01.201: INFO: Pod downward-api-2cc69cda-aede-4990-b048-b3aaafbd3a41 no longer exists +[AfterEach] [sig-node] Downward API + test/e2e/framework/framework.go:187 +Mar 7 03:32:01.201: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "downward-api-5560" for this suite. 
03/07/23 03:32:01.204 +{"msg":"PASSED [sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]","completed":163,"skipped":2758,"failed":0} +------------------------------ +• [4.065 seconds] +[sig-node] Downward API +test/e2e/common/node/framework.go:23 + should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] + test/e2e/common/node/downwardapi.go:43 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-node] Downward API + test/e2e/framework/framework.go:186 + STEP: Creating a kubernetes client 03/07/23 03:31:57.144 + Mar 7 03:31:57.144: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 + STEP: Building a namespace api object, basename downward-api 03/07/23 03:31:57.144 + STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 03:31:57.16 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 03:31:57.162 + [It] should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] + test/e2e/common/node/downwardapi.go:43 + STEP: Creating a pod to test downward api env vars 03/07/23 03:31:57.164 + Mar 7 03:31:57.173: INFO: Waiting up to 5m0s for pod "downward-api-2cc69cda-aede-4990-b048-b3aaafbd3a41" in namespace "downward-api-5560" to be "Succeeded or Failed" + Mar 7 03:31:57.176: INFO: Pod "downward-api-2cc69cda-aede-4990-b048-b3aaafbd3a41": Phase="Pending", Reason="", readiness=false. Elapsed: 2.443057ms + Mar 7 03:31:59.179: INFO: Pod "downward-api-2cc69cda-aede-4990-b048-b3aaafbd3a41": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006175662s + Mar 7 03:32:01.180: INFO: Pod "downward-api-2cc69cda-aede-4990-b048-b3aaafbd3a41": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.00713613s + STEP: Saw pod success 03/07/23 03:32:01.18 + Mar 7 03:32:01.180: INFO: Pod "downward-api-2cc69cda-aede-4990-b048-b3aaafbd3a41" satisfied condition "Succeeded or Failed" + Mar 7 03:32:01.183: INFO: Trying to get logs from node node-2 pod downward-api-2cc69cda-aede-4990-b048-b3aaafbd3a41 container dapi-container: + STEP: delete the pod 03/07/23 03:32:01.188 + Mar 7 03:32:01.198: INFO: Waiting for pod downward-api-2cc69cda-aede-4990-b048-b3aaafbd3a41 to disappear + Mar 7 03:32:01.201: INFO: Pod downward-api-2cc69cda-aede-4990-b048-b3aaafbd3a41 no longer exists + [AfterEach] [sig-node] Downward API + test/e2e/framework/framework.go:187 + Mar 7 03:32:01.201: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + STEP: Destroying namespace "downward-api-5560" for this suite. 
03/07/23 03:32:01.204 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-node] RuntimeClass + should schedule a Pod requesting a RuntimeClass and initialize its Overhead [NodeConformance] [Conformance] + test/e2e/common/node/runtimeclass.go:129 +[BeforeEach] [sig-node] RuntimeClass + test/e2e/framework/framework.go:186 +STEP: Creating a kubernetes client 03/07/23 03:32:01.209 +Mar 7 03:32:01.210: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 +STEP: Building a namespace api object, basename runtimeclass 03/07/23 03:32:01.21 +STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 03:32:01.224 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 03:32:01.228 +[It] should schedule a Pod requesting a RuntimeClass and initialize its Overhead [NodeConformance] [Conformance] + test/e2e/common/node/runtimeclass.go:129 +Mar 7 03:32:01.240: INFO: Waiting up to 1m20s for at least 1 pods in namespace runtimeclass-4309 to be scheduled +Mar 7 03:32:01.242: INFO: 1 pods are not scheduled: [runtimeclass-4309/test-runtimeclass-runtimeclass-4309-preconfigured-handler-4k92c(fc14cb7c-acd0-4a2d-8235-abe70b361568)] +[AfterEach] [sig-node] RuntimeClass + test/e2e/framework/framework.go:187 +Mar 7 03:32:03.249: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "runtimeclass-4309" for this suite. 03/07/23 03:32:03.252 +{"msg":"PASSED [sig-node] RuntimeClass should schedule a Pod requesting a RuntimeClass and initialize its Overhead [NodeConformance] [Conformance]","completed":164,"skipped":2788,"failed":0} +------------------------------ +• [2.047 seconds] +[sig-node] RuntimeClass +test/e2e/common/node/framework.go:23 + should schedule a Pod requesting a RuntimeClass and initialize its Overhead [NodeConformance] [Conformance] + test/e2e/common/node/runtimeclass.go:129 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-node] RuntimeClass + test/e2e/framework/framework.go:186 + STEP: Creating a kubernetes client 03/07/23 03:32:01.209 + Mar 7 03:32:01.210: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 + STEP: Building a namespace api object, basename runtimeclass 03/07/23 03:32:01.21 + STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 03:32:01.224 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 03:32:01.228 + [It] should schedule a Pod requesting a RuntimeClass and initialize its Overhead [NodeConformance] [Conformance] + test/e2e/common/node/runtimeclass.go:129 + Mar 7 03:32:01.240: INFO: Waiting up to 1m20s for at least 1 pods in namespace runtimeclass-4309 to be scheduled + Mar 7 03:32:01.242: INFO: 1 pods are not scheduled: [runtimeclass-4309/test-runtimeclass-runtimeclass-4309-preconfigured-handler-4k92c(fc14cb7c-acd0-4a2d-8235-abe70b361568)] + [AfterEach] [sig-node] RuntimeClass + test/e2e/framework/framework.go:187 + Mar 7 03:32:03.249: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + STEP: Destroying namespace "runtimeclass-4309" for this suite. 
03/07/23 03:32:03.252 + << End Captured GinkgoWriter Output +------------------------------ +SSSS +------------------------------ +[sig-storage] Projected downwardAPI + should provide container's cpu limit [NodeConformance] [Conformance] + test/e2e/common/storage/projected_downwardapi.go:192 +[BeforeEach] [sig-storage] Projected downwardAPI + test/e2e/framework/framework.go:186 +STEP: Creating a kubernetes client 03/07/23 03:32:03.257 +Mar 7 03:32:03.257: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 +STEP: Building a namespace api object, basename projected 03/07/23 03:32:03.258 +STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 03:32:03.27 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 03:32:03.273 +[BeforeEach] [sig-storage] Projected downwardAPI + test/e2e/common/storage/projected_downwardapi.go:43 +[It] should provide container's cpu limit [NodeConformance] [Conformance] + test/e2e/common/storage/projected_downwardapi.go:192 +STEP: Creating a pod to test downward API volume plugin 03/07/23 03:32:03.275 +Mar 7 03:32:03.281: INFO: Waiting up to 5m0s for pod "downwardapi-volume-be8329ee-074c-44d2-8087-dc93d53d2edc" in namespace "projected-4714" to be "Succeeded or Failed" +Mar 7 03:32:03.284: INFO: Pod "downwardapi-volume-be8329ee-074c-44d2-8087-dc93d53d2edc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.629857ms +Mar 7 03:32:05.288: INFO: Pod "downwardapi-volume-be8329ee-074c-44d2-8087-dc93d53d2edc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007241176s +Mar 7 03:32:07.287: INFO: Pod "downwardapi-volume-be8329ee-074c-44d2-8087-dc93d53d2edc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.005951173s +STEP: Saw pod success 03/07/23 03:32:07.287 +Mar 7 03:32:07.287: INFO: Pod "downwardapi-volume-be8329ee-074c-44d2-8087-dc93d53d2edc" satisfied condition "Succeeded or Failed" +Mar 7 03:32:07.289: INFO: Trying to get logs from node node-2 pod downwardapi-volume-be8329ee-074c-44d2-8087-dc93d53d2edc container client-container: +STEP: delete the pod 03/07/23 03:32:07.294 +Mar 7 03:32:07.303: INFO: Waiting for pod downwardapi-volume-be8329ee-074c-44d2-8087-dc93d53d2edc to disappear +Mar 7 03:32:07.305: INFO: Pod downwardapi-volume-be8329ee-074c-44d2-8087-dc93d53d2edc no longer exists +[AfterEach] [sig-storage] Projected downwardAPI + test/e2e/framework/framework.go:187 +Mar 7 03:32:07.305: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "projected-4714" for this suite. 
03/07/23 03:32:07.308 +{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu limit [NodeConformance] [Conformance]","completed":165,"skipped":2792,"failed":0} +------------------------------ +• [4.056 seconds] +[sig-storage] Projected downwardAPI +test/e2e/common/storage/framework.go:23 + should provide container's cpu limit [NodeConformance] [Conformance] + test/e2e/common/storage/projected_downwardapi.go:192 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-storage] Projected downwardAPI + test/e2e/framework/framework.go:186 + STEP: Creating a kubernetes client 03/07/23 03:32:03.257 + Mar 7 03:32:03.257: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 + STEP: Building a namespace api object, basename projected 03/07/23 03:32:03.258 + STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 03:32:03.27 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 03:32:03.273 + [BeforeEach] [sig-storage] Projected downwardAPI + test/e2e/common/storage/projected_downwardapi.go:43 + [It] should provide container's cpu limit [NodeConformance] [Conformance] + test/e2e/common/storage/projected_downwardapi.go:192 + STEP: Creating a pod to test downward API volume plugin 03/07/23 03:32:03.275 + Mar 7 03:32:03.281: INFO: Waiting up to 5m0s for pod "downwardapi-volume-be8329ee-074c-44d2-8087-dc93d53d2edc" in namespace "projected-4714" to be "Succeeded or Failed" + Mar 7 03:32:03.284: INFO: Pod "downwardapi-volume-be8329ee-074c-44d2-8087-dc93d53d2edc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.629857ms + Mar 7 03:32:05.288: INFO: Pod "downwardapi-volume-be8329ee-074c-44d2-8087-dc93d53d2edc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007241176s + Mar 7 03:32:07.287: INFO: Pod "downwardapi-volume-be8329ee-074c-44d2-8087-dc93d53d2edc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.005951173s + STEP: Saw pod success 03/07/23 03:32:07.287 + Mar 7 03:32:07.287: INFO: Pod "downwardapi-volume-be8329ee-074c-44d2-8087-dc93d53d2edc" satisfied condition "Succeeded or Failed" + Mar 7 03:32:07.289: INFO: Trying to get logs from node node-2 pod downwardapi-volume-be8329ee-074c-44d2-8087-dc93d53d2edc container client-container: + STEP: delete the pod 03/07/23 03:32:07.294 + Mar 7 03:32:07.303: INFO: Waiting for pod downwardapi-volume-be8329ee-074c-44d2-8087-dc93d53d2edc to disappear + Mar 7 03:32:07.305: INFO: Pod downwardapi-volume-be8329ee-074c-44d2-8087-dc93d53d2edc no longer exists + [AfterEach] [sig-storage] Projected downwardAPI + test/e2e/framework/framework.go:187 + Mar 7 03:32:07.305: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + STEP: Destroying namespace "projected-4714" for this suite. 
03/07/23 03:32:07.308 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-node] Downward API + should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] + test/e2e/common/node/downwardapi.go:216 +[BeforeEach] [sig-node] Downward API + test/e2e/framework/framework.go:186 +STEP: Creating a kubernetes client 03/07/23 03:32:07.313 +Mar 7 03:32:07.313: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 +STEP: Building a namespace api object, basename downward-api 03/07/23 03:32:07.314 +STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 03:32:07.327 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 03:32:07.329 +[It] should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] + test/e2e/common/node/downwardapi.go:216 +STEP: Creating a pod to test downward api env vars 03/07/23 03:32:07.33 +Mar 7 03:32:07.336: INFO: Waiting up to 5m0s for pod "downward-api-a6568988-ca1b-4657-a563-9e6d6981f715" in namespace "downward-api-1040" to be "Succeeded or Failed" +Mar 7 03:32:07.338: INFO: Pod "downward-api-a6568988-ca1b-4657-a563-9e6d6981f715": Phase="Pending", Reason="", readiness=false. Elapsed: 2.140518ms +Mar 7 03:32:09.342: INFO: Pod "downward-api-a6568988-ca1b-4657-a563-9e6d6981f715": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005458488s +Mar 7 03:32:11.341: INFO: Pod "downward-api-a6568988-ca1b-4657-a563-9e6d6981f715": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.005083762s +STEP: Saw pod success 03/07/23 03:32:11.341 +Mar 7 03:32:11.342: INFO: Pod "downward-api-a6568988-ca1b-4657-a563-9e6d6981f715" satisfied condition "Succeeded or Failed" +Mar 7 03:32:11.344: INFO: Trying to get logs from node node-2 pod downward-api-a6568988-ca1b-4657-a563-9e6d6981f715 container dapi-container: +STEP: delete the pod 03/07/23 03:32:11.348 +Mar 7 03:32:11.356: INFO: Waiting for pod downward-api-a6568988-ca1b-4657-a563-9e6d6981f715 to disappear +Mar 7 03:32:11.358: INFO: Pod downward-api-a6568988-ca1b-4657-a563-9e6d6981f715 no longer exists +[AfterEach] [sig-node] Downward API + test/e2e/framework/framework.go:187 +Mar 7 03:32:11.358: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "downward-api-1040" for this suite. 
03/07/23 03:32:11.361 +{"msg":"PASSED [sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]","completed":166,"skipped":2817,"failed":0} +------------------------------ +• [4.052 seconds] +[sig-node] Downward API +test/e2e/common/node/framework.go:23 + should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] + test/e2e/common/node/downwardapi.go:216 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-node] Downward API + test/e2e/framework/framework.go:186 + STEP: Creating a kubernetes client 03/07/23 03:32:07.313 + Mar 7 03:32:07.313: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 + STEP: Building a namespace api object, basename downward-api 03/07/23 03:32:07.314 + STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 03:32:07.327 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 03:32:07.329 + [It] should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] + test/e2e/common/node/downwardapi.go:216 + STEP: Creating a pod to test downward api env vars 03/07/23 03:32:07.33 + Mar 7 03:32:07.336: INFO: Waiting up to 5m0s for pod "downward-api-a6568988-ca1b-4657-a563-9e6d6981f715" in namespace "downward-api-1040" to be "Succeeded or Failed" + Mar 7 03:32:07.338: INFO: Pod "downward-api-a6568988-ca1b-4657-a563-9e6d6981f715": Phase="Pending", Reason="", readiness=false. Elapsed: 2.140518ms + Mar 7 03:32:09.342: INFO: Pod "downward-api-a6568988-ca1b-4657-a563-9e6d6981f715": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005458488s + Mar 7 03:32:11.341: INFO: Pod "downward-api-a6568988-ca1b-4657-a563-9e6d6981f715": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.005083762s + STEP: Saw pod success 03/07/23 03:32:11.341 + Mar 7 03:32:11.342: INFO: Pod "downward-api-a6568988-ca1b-4657-a563-9e6d6981f715" satisfied condition "Succeeded or Failed" + Mar 7 03:32:11.344: INFO: Trying to get logs from node node-2 pod downward-api-a6568988-ca1b-4657-a563-9e6d6981f715 container dapi-container: + STEP: delete the pod 03/07/23 03:32:11.348 + Mar 7 03:32:11.356: INFO: Waiting for pod downward-api-a6568988-ca1b-4657-a563-9e6d6981f715 to disappear + Mar 7 03:32:11.358: INFO: Pod downward-api-a6568988-ca1b-4657-a563-9e6d6981f715 no longer exists + [AfterEach] [sig-node] Downward API + test/e2e/framework/framework.go:187 + Mar 7 03:32:11.358: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + STEP: Destroying namespace "downward-api-1040" for this suite. 
03/07/23 03:32:11.361 + << End Captured GinkgoWriter Output +------------------------------ +S +------------------------------ +[sig-node] Kubelet when scheduling a busybox command in a pod + should print the output to logs [NodeConformance] [Conformance] + test/e2e/common/node/kubelet.go:52 +[BeforeEach] [sig-node] Kubelet + test/e2e/framework/framework.go:186 +STEP: Creating a kubernetes client 03/07/23 03:32:11.366 +Mar 7 03:32:11.366: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 +STEP: Building a namespace api object, basename kubelet-test 03/07/23 03:32:11.367 +STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 03:32:11.379 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 03:32:11.382 +[BeforeEach] [sig-node] Kubelet + test/e2e/common/node/kubelet.go:41 +[It] should print the output to logs [NodeConformance] [Conformance] + test/e2e/common/node/kubelet.go:52 +Mar 7 03:32:11.390: INFO: Waiting up to 5m0s for pod "busybox-scheduling-690f28d7-96c7-437f-825e-7ed20145f06f" in namespace "kubelet-test-3763" to be "running and ready" +Mar 7 03:32:11.393: INFO: Pod "busybox-scheduling-690f28d7-96c7-437f-825e-7ed20145f06f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.566275ms +Mar 7 03:32:11.393: INFO: The phase of Pod busybox-scheduling-690f28d7-96c7-437f-825e-7ed20145f06f is Pending, waiting for it to be Running (with Ready = true) +Mar 7 03:32:13.396: INFO: Pod "busybox-scheduling-690f28d7-96c7-437f-825e-7ed20145f06f": Phase="Running", Reason="", readiness=true. Elapsed: 2.005465887s +Mar 7 03:32:13.396: INFO: The phase of Pod busybox-scheduling-690f28d7-96c7-437f-825e-7ed20145f06f is Running (Ready = true) +Mar 7 03:32:13.396: INFO: Pod "busybox-scheduling-690f28d7-96c7-437f-825e-7ed20145f06f" satisfied condition "running and ready" +[AfterEach] [sig-node] Kubelet + test/e2e/framework/framework.go:187 +Mar 7 03:32:13.402: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "kubelet-test-3763" for this suite. 
03/07/23 03:32:13.405 +{"msg":"PASSED [sig-node] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance]","completed":167,"skipped":2818,"failed":0} +------------------------------ +• [2.044 seconds] +[sig-node] Kubelet +test/e2e/common/node/framework.go:23 + when scheduling a busybox command in a pod + test/e2e/common/node/kubelet.go:44 + should print the output to logs [NodeConformance] [Conformance] + test/e2e/common/node/kubelet.go:52 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-node] Kubelet + test/e2e/framework/framework.go:186 + STEP: Creating a kubernetes client 03/07/23 03:32:11.366 + Mar 7 03:32:11.366: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 + STEP: Building a namespace api object, basename kubelet-test 03/07/23 03:32:11.367 + STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 03:32:11.379 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 03:32:11.382 + [BeforeEach] [sig-node] Kubelet + test/e2e/common/node/kubelet.go:41 + [It] should print the output to logs [NodeConformance] [Conformance] + test/e2e/common/node/kubelet.go:52 + Mar 7 03:32:11.390: INFO: Waiting up to 5m0s for pod "busybox-scheduling-690f28d7-96c7-437f-825e-7ed20145f06f" in namespace "kubelet-test-3763" to be "running and ready" + Mar 7 03:32:11.393: INFO: Pod "busybox-scheduling-690f28d7-96c7-437f-825e-7ed20145f06f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.566275ms + Mar 7 03:32:11.393: INFO: The phase of Pod busybox-scheduling-690f28d7-96c7-437f-825e-7ed20145f06f is Pending, waiting for it to be Running (with Ready = true) + Mar 7 03:32:13.396: INFO: Pod "busybox-scheduling-690f28d7-96c7-437f-825e-7ed20145f06f": Phase="Running", Reason="", readiness=true. Elapsed: 2.005465887s + Mar 7 03:32:13.396: INFO: The phase of Pod busybox-scheduling-690f28d7-96c7-437f-825e-7ed20145f06f is Running (Ready = true) + Mar 7 03:32:13.396: INFO: Pod "busybox-scheduling-690f28d7-96c7-437f-825e-7ed20145f06f" satisfied condition "running and ready" + [AfterEach] [sig-node] Kubelet + test/e2e/framework/framework.go:187 + Mar 7 03:32:13.402: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + STEP: Destroying namespace "kubelet-test-3763" for this suite. 
03/07/23 03:32:13.405 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-network] Services + should be able to change the type from NodePort to ExternalName [Conformance] + test/e2e/network/service.go:1523 +[BeforeEach] [sig-network] Services + test/e2e/framework/framework.go:186 +STEP: Creating a kubernetes client 03/07/23 03:32:13.412 +Mar 7 03:32:13.412: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 +STEP: Building a namespace api object, basename services 03/07/23 03:32:13.413 +STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 03:32:13.425 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 03:32:13.427 +[BeforeEach] [sig-network] Services + test/e2e/network/service.go:758 +[It] should be able to change the type from NodePort to ExternalName [Conformance] + test/e2e/network/service.go:1523 +STEP: creating a service nodeport-service with the type=NodePort in namespace services-734 03/07/23 03:32:13.429 +STEP: Creating active service to test reachability when its FQDN is referred as externalName for another service 03/07/23 03:32:13.444 +STEP: creating service externalsvc in namespace services-734 03/07/23 03:32:13.444 +STEP: creating replication controller externalsvc in namespace services-734 03/07/23 03:32:13.465 +I0307 03:32:13.477794 22 runners.go:193] Created replication controller with name: externalsvc, namespace: services-734, replica count: 2 +I0307 03:32:16.529538 22 runners.go:193] externalsvc Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady +STEP: changing the NodePort service to type=ExternalName 03/07/23 03:32:16.533 +Mar 7 03:32:16.555: INFO: Creating new exec pod +Mar 7 03:32:16.567: INFO: Waiting up to 5m0s for pod "execpodqrddz" in namespace "services-734" to be "running" +Mar 7 03:32:16.575: INFO: Pod "execpodqrddz": Phase="Pending", Reason="", readiness=false. Elapsed: 7.20217ms +Mar 7 03:32:18.578: INFO: Pod "execpodqrddz": Phase="Running", Reason="", readiness=true. Elapsed: 2.010185446s +Mar 7 03:32:18.578: INFO: Pod "execpodqrddz" satisfied condition "running" +Mar 7 03:32:18.578: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1093879902 --namespace=services-734 exec execpodqrddz -- /bin/sh -x -c nslookup nodeport-service.services-734.svc.cluster.local' +Mar 7 03:32:18.800: INFO: stderr: "+ nslookup nodeport-service.services-734.svc.cluster.local\n" +Mar 7 03:32:18.800: INFO: stdout: "Server:\t\t10.96.0.10\nAddress:\t10.96.0.10#53\n\nnodeport-service.services-734.svc.cluster.local\tcanonical name = externalsvc.services-734.svc.cluster.local.\nName:\texternalsvc.services-734.svc.cluster.local\nAddress: 10.103.164.125\n\n" +STEP: deleting ReplicationController externalsvc in namespace services-734, will wait for the garbage collector to delete the pods 03/07/23 03:32:18.8 +Mar 7 03:32:18.859: INFO: Deleting ReplicationController externalsvc took: 4.362357ms +Mar 7 03:32:18.959: INFO: Terminating ReplicationController externalsvc pods took: 100.575271ms +Mar 7 03:32:21.079: INFO: Cleaning up the NodePort to ExternalName test service +[AfterEach] [sig-network] Services + test/e2e/framework/framework.go:187 +Mar 7 03:32:21.088: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "services-734" for this suite. 
03/07/23 03:32:21.094 +[AfterEach] [sig-network] Services + test/e2e/network/service.go:762 +{"msg":"PASSED [sig-network] Services should be able to change the type from NodePort to ExternalName [Conformance]","completed":168,"skipped":2879,"failed":0} +------------------------------ +• [SLOW TEST] [7.688 seconds] +[sig-network] Services +test/e2e/network/common/framework.go:23 + should be able to change the type from NodePort to ExternalName [Conformance] + test/e2e/network/service.go:1523 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-network] Services + test/e2e/framework/framework.go:186 + STEP: Creating a kubernetes client 03/07/23 03:32:13.412 + Mar 7 03:32:13.412: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 + STEP: Building a namespace api object, basename services 03/07/23 03:32:13.413 + STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 03:32:13.425 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 03:32:13.427 + [BeforeEach] [sig-network] Services + test/e2e/network/service.go:758 + [It] should be able to change the type from NodePort to ExternalName [Conformance] + test/e2e/network/service.go:1523 + STEP: creating a service nodeport-service with the type=NodePort in namespace services-734 03/07/23 03:32:13.429 + STEP: Creating active service to test reachability when its FQDN is referred as externalName for another service 03/07/23 03:32:13.444 + STEP: creating service externalsvc in namespace services-734 03/07/23 03:32:13.444 + STEP: creating replication controller externalsvc in namespace services-734 03/07/23 03:32:13.465 + I0307 03:32:13.477794 22 runners.go:193] Created replication controller with name: externalsvc, namespace: services-734, replica count: 2 + I0307 03:32:16.529538 22 runners.go:193] externalsvc Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady + STEP: changing the NodePort service to type=ExternalName 03/07/23 03:32:16.533 + Mar 7 03:32:16.555: INFO: Creating new exec pod + Mar 7 03:32:16.567: INFO: Waiting up to 5m0s for pod "execpodqrddz" in namespace "services-734" to be "running" + Mar 7 03:32:16.575: INFO: Pod "execpodqrddz": Phase="Pending", Reason="", readiness=false. Elapsed: 7.20217ms + Mar 7 03:32:18.578: INFO: Pod "execpodqrddz": Phase="Running", Reason="", readiness=true. 
Elapsed: 2.010185446s + Mar 7 03:32:18.578: INFO: Pod "execpodqrddz" satisfied condition "running" + Mar 7 03:32:18.578: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1093879902 --namespace=services-734 exec execpodqrddz -- /bin/sh -x -c nslookup nodeport-service.services-734.svc.cluster.local' + Mar 7 03:32:18.800: INFO: stderr: "+ nslookup nodeport-service.services-734.svc.cluster.local\n" + Mar 7 03:32:18.800: INFO: stdout: "Server:\t\t10.96.0.10\nAddress:\t10.96.0.10#53\n\nnodeport-service.services-734.svc.cluster.local\tcanonical name = externalsvc.services-734.svc.cluster.local.\nName:\texternalsvc.services-734.svc.cluster.local\nAddress: 10.103.164.125\n\n" + STEP: deleting ReplicationController externalsvc in namespace services-734, will wait for the garbage collector to delete the pods 03/07/23 03:32:18.8 + Mar 7 03:32:18.859: INFO: Deleting ReplicationController externalsvc took: 4.362357ms + Mar 7 03:32:18.959: INFO: Terminating ReplicationController externalsvc pods took: 100.575271ms + Mar 7 03:32:21.079: INFO: Cleaning up the NodePort to ExternalName test service + [AfterEach] [sig-network] Services + test/e2e/framework/framework.go:187 + Mar 7 03:32:21.088: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + STEP: Destroying namespace "services-734" for this suite. 03/07/23 03:32:21.094 + [AfterEach] [sig-network] Services + test/e2e/network/service.go:762 + << End Captured GinkgoWriter Output +------------------------------ +SSSSS +------------------------------ +[sig-storage] Secrets + should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/common/storage/secrets_volume.go:88 +[BeforeEach] [sig-storage] Secrets + test/e2e/framework/framework.go:186 +STEP: Creating a kubernetes client 03/07/23 03:32:21.101 +Mar 7 03:32:21.101: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 +STEP: Building a namespace api object, basename secrets 03/07/23 03:32:21.102 +STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 03:32:21.116 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 03:32:21.118 +[It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/common/storage/secrets_volume.go:88 +STEP: Creating secret with name secret-test-map-04f16318-435f-42d0-a519-7e43db6baf66 03/07/23 03:32:21.12 +STEP: Creating a pod to test consume secrets 03/07/23 03:32:21.124 +Mar 7 03:32:21.132: INFO: Waiting up to 5m0s for pod "pod-secrets-81df6e24-a0cf-40eb-a6d7-1e6e5b996d9a" in namespace "secrets-4909" to be "Succeeded or Failed" +Mar 7 03:32:21.146: INFO: Pod "pod-secrets-81df6e24-a0cf-40eb-a6d7-1e6e5b996d9a": Phase="Pending", Reason="", readiness=false. Elapsed: 13.775486ms +Mar 7 03:32:23.149: INFO: Pod "pod-secrets-81df6e24-a0cf-40eb-a6d7-1e6e5b996d9a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.01692832s +Mar 7 03:32:25.149: INFO: Pod "pod-secrets-81df6e24-a0cf-40eb-a6d7-1e6e5b996d9a": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.017097115s +STEP: Saw pod success 03/07/23 03:32:25.149 +Mar 7 03:32:25.149: INFO: Pod "pod-secrets-81df6e24-a0cf-40eb-a6d7-1e6e5b996d9a" satisfied condition "Succeeded or Failed" +Mar 7 03:32:25.151: INFO: Trying to get logs from node node-2 pod pod-secrets-81df6e24-a0cf-40eb-a6d7-1e6e5b996d9a container secret-volume-test: +STEP: delete the pod 03/07/23 03:32:25.156 +Mar 7 03:32:25.164: INFO: Waiting for pod pod-secrets-81df6e24-a0cf-40eb-a6d7-1e6e5b996d9a to disappear +Mar 7 03:32:25.167: INFO: Pod pod-secrets-81df6e24-a0cf-40eb-a6d7-1e6e5b996d9a no longer exists +[AfterEach] [sig-storage] Secrets + test/e2e/framework/framework.go:187 +Mar 7 03:32:25.167: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "secrets-4909" for this suite. 03/07/23 03:32:25.17 +{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","completed":169,"skipped":2884,"failed":0} +------------------------------ +• [4.073 seconds] +[sig-storage] Secrets +test/e2e/common/storage/framework.go:23 + should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/common/storage/secrets_volume.go:88 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-storage] Secrets + test/e2e/framework/framework.go:186 + STEP: Creating a kubernetes client 03/07/23 03:32:21.101 + Mar 7 03:32:21.101: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 + STEP: Building a namespace api object, basename secrets 03/07/23 03:32:21.102 + STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 03:32:21.116 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 03:32:21.118 + [It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/common/storage/secrets_volume.go:88 + STEP: Creating secret with name secret-test-map-04f16318-435f-42d0-a519-7e43db6baf66 03/07/23 03:32:21.12 + STEP: Creating a pod to test consume secrets 03/07/23 03:32:21.124 + Mar 7 03:32:21.132: INFO: Waiting up to 5m0s for pod "pod-secrets-81df6e24-a0cf-40eb-a6d7-1e6e5b996d9a" in namespace "secrets-4909" to be "Succeeded or Failed" + Mar 7 03:32:21.146: INFO: Pod "pod-secrets-81df6e24-a0cf-40eb-a6d7-1e6e5b996d9a": Phase="Pending", Reason="", readiness=false. Elapsed: 13.775486ms + Mar 7 03:32:23.149: INFO: Pod "pod-secrets-81df6e24-a0cf-40eb-a6d7-1e6e5b996d9a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.01692832s + Mar 7 03:32:25.149: INFO: Pod "pod-secrets-81df6e24-a0cf-40eb-a6d7-1e6e5b996d9a": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.017097115s + STEP: Saw pod success 03/07/23 03:32:25.149 + Mar 7 03:32:25.149: INFO: Pod "pod-secrets-81df6e24-a0cf-40eb-a6d7-1e6e5b996d9a" satisfied condition "Succeeded or Failed" + Mar 7 03:32:25.151: INFO: Trying to get logs from node node-2 pod pod-secrets-81df6e24-a0cf-40eb-a6d7-1e6e5b996d9a container secret-volume-test: + STEP: delete the pod 03/07/23 03:32:25.156 + Mar 7 03:32:25.164: INFO: Waiting for pod pod-secrets-81df6e24-a0cf-40eb-a6d7-1e6e5b996d9a to disappear + Mar 7 03:32:25.167: INFO: Pod pod-secrets-81df6e24-a0cf-40eb-a6d7-1e6e5b996d9a no longer exists + [AfterEach] [sig-storage] Secrets + test/e2e/framework/framework.go:187 + Mar 7 03:32:25.167: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + STEP: Destroying namespace "secrets-4909" for this suite. 03/07/23 03:32:25.17 + << End Captured GinkgoWriter Output +------------------------------ +SSSS +------------------------------ +[sig-api-machinery] Watchers + should receive events on concurrent watches in same order [Conformance] + test/e2e/apimachinery/watch.go:334 +[BeforeEach] [sig-api-machinery] Watchers + test/e2e/framework/framework.go:186 +STEP: Creating a kubernetes client 03/07/23 03:32:25.175 +Mar 7 03:32:25.175: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 +STEP: Building a namespace api object, basename watch 03/07/23 03:32:25.176 +STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 03:32:25.191 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 03:32:25.194 +[It] should receive events on concurrent watches in same order [Conformance] + test/e2e/apimachinery/watch.go:334 +STEP: getting a starting resourceVersion 03/07/23 03:32:25.196 +STEP: starting a background goroutine to produce watch events 03/07/23 03:32:25.198 +STEP: creating watches starting from each resource version of the events produced and verifying they all receive resource versions in the same order 03/07/23 03:32:25.198 +[AfterEach] [sig-api-machinery] Watchers + test/e2e/framework/framework.go:187 +Mar 7 03:32:27.980: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "watch-1893" for this suite. 
03/07/23 03:32:28.031 +{"msg":"PASSED [sig-api-machinery] Watchers should receive events on concurrent watches in same order [Conformance]","completed":170,"skipped":2888,"failed":0} +------------------------------ +• [2.908 seconds] +[sig-api-machinery] Watchers +test/e2e/apimachinery/framework.go:23 + should receive events on concurrent watches in same order [Conformance] + test/e2e/apimachinery/watch.go:334 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-api-machinery] Watchers + test/e2e/framework/framework.go:186 + STEP: Creating a kubernetes client 03/07/23 03:32:25.175 + Mar 7 03:32:25.175: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 + STEP: Building a namespace api object, basename watch 03/07/23 03:32:25.176 + STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 03:32:25.191 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 03:32:25.194 + [It] should receive events on concurrent watches in same order [Conformance] + test/e2e/apimachinery/watch.go:334 + STEP: getting a starting resourceVersion 03/07/23 03:32:25.196 + STEP: starting a background goroutine to produce watch events 03/07/23 03:32:25.198 + STEP: creating watches starting from each resource version of the events produced and verifying they all receive resource versions in the same order 03/07/23 03:32:25.198 + [AfterEach] [sig-api-machinery] Watchers + test/e2e/framework/framework.go:187 + Mar 7 03:32:27.980: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + STEP: Destroying namespace "watch-1893" for this suite. 03/07/23 03:32:28.031 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSS +------------------------------ +[sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] + should perform rolling updates and roll backs of template modifications [Conformance] + test/e2e/apps/statefulset.go:304 +[BeforeEach] [sig-apps] StatefulSet + test/e2e/framework/framework.go:186 +STEP: Creating a kubernetes client 03/07/23 03:32:28.083 +Mar 7 03:32:28.083: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 +STEP: Building a namespace api object, basename statefulset 03/07/23 03:32:28.084 +STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 03:32:28.103 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 03:32:28.105 +[BeforeEach] [sig-apps] StatefulSet + test/e2e/apps/statefulset.go:96 +[BeforeEach] Basic StatefulSet functionality [StatefulSetBasic] + test/e2e/apps/statefulset.go:111 +STEP: Creating service test in namespace statefulset-5609 03/07/23 03:32:28.106 +[It] should perform rolling updates and roll backs of template modifications [Conformance] + test/e2e/apps/statefulset.go:304 +STEP: Creating a new StatefulSet 03/07/23 03:32:28.11 +Mar 7 03:32:28.121: INFO: Found 0 stateful pods, waiting for 3 +Mar 7 03:32:38.127: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true +Mar 7 03:32:38.127: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true +Mar 7 03:32:38.127: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true +Mar 7 03:32:38.135: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1093879902 --namespace=statefulset-5609 exec ss2-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' +Mar 7 03:32:38.354: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n" +Mar 7 03:32:38.354: 
INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" +Mar 7 03:32:38.354: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss2-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' + +STEP: Updating StatefulSet template: update image from registry.k8s.io/e2e-test-images/httpd:2.4.38-2 to registry.k8s.io/e2e-test-images/httpd:2.4.39-2 03/07/23 03:32:48.379 +Mar 7 03:32:48.396: INFO: Updating stateful set ss2 +STEP: Creating a new revision 03/07/23 03:32:48.396 +STEP: Updating Pods in reverse ordinal order 03/07/23 03:32:58.412 +Mar 7 03:32:58.415: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1093879902 --namespace=statefulset-5609 exec ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' +Mar 7 03:32:58.593: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\n" +Mar 7 03:32:58.593: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" +Mar 7 03:32:58.593: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss2-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' + +STEP: Rolling back to a previous revision 03/07/23 03:33:08.612 +Mar 7 03:33:08.613: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1093879902 --namespace=statefulset-5609 exec ss2-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' +Mar 7 03:33:08.821: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n" +Mar 7 03:33:08.821: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" +Mar 7 03:33:08.821: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss2-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' + +Mar 7 03:33:18.852: INFO: Updating stateful set ss2 +STEP: Rolling back update in reverse ordinal order 03/07/23 03:33:28.865 +Mar 7 03:33:28.868: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1093879902 --namespace=statefulset-5609 exec ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' +Mar 7 03:33:29.047: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\n" +Mar 7 03:33:29.047: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" +Mar 7 03:33:29.047: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss2-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' + +[AfterEach] Basic StatefulSet functionality [StatefulSetBasic] + test/e2e/apps/statefulset.go:122 +Mar 7 03:33:39.063: INFO: Deleting all statefulset in ns statefulset-5609 +Mar 7 03:33:39.073: INFO: Scaling statefulset ss2 to 0 +Mar 7 03:33:49.111: INFO: Waiting for statefulset status.replicas updated to 0 +Mar 7 03:33:49.113: INFO: Deleting statefulset ss2 +[AfterEach] [sig-apps] StatefulSet + test/e2e/framework/framework.go:187 +Mar 7 03:33:49.136: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "statefulset-5609" for this suite. 
03/07/23 03:33:49.146 +{"msg":"PASSED [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance]","completed":171,"skipped":2894,"failed":0} +------------------------------ +• [SLOW TEST] [81.068 seconds] +[sig-apps] StatefulSet +test/e2e/apps/framework.go:23 + Basic StatefulSet functionality [StatefulSetBasic] + test/e2e/apps/statefulset.go:101 + should perform rolling updates and roll backs of template modifications [Conformance] + test/e2e/apps/statefulset.go:304 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-apps] StatefulSet + test/e2e/framework/framework.go:186 + STEP: Creating a kubernetes client 03/07/23 03:32:28.083 + Mar 7 03:32:28.083: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 + STEP: Building a namespace api object, basename statefulset 03/07/23 03:32:28.084 + STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 03:32:28.103 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 03:32:28.105 + [BeforeEach] [sig-apps] StatefulSet + test/e2e/apps/statefulset.go:96 + [BeforeEach] Basic StatefulSet functionality [StatefulSetBasic] + test/e2e/apps/statefulset.go:111 + STEP: Creating service test in namespace statefulset-5609 03/07/23 03:32:28.106 + [It] should perform rolling updates and roll backs of template modifications [Conformance] + test/e2e/apps/statefulset.go:304 + STEP: Creating a new StatefulSet 03/07/23 03:32:28.11 + Mar 7 03:32:28.121: INFO: Found 0 stateful pods, waiting for 3 + Mar 7 03:32:38.127: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true + Mar 7 03:32:38.127: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true + Mar 7 03:32:38.127: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true + Mar 7 03:32:38.135: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1093879902 --namespace=statefulset-5609 exec ss2-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' + Mar 7 03:32:38.354: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n" + Mar 7 03:32:38.354: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" + Mar 7 03:32:38.354: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss2-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' + + STEP: Updating StatefulSet template: update image from registry.k8s.io/e2e-test-images/httpd:2.4.38-2 to registry.k8s.io/e2e-test-images/httpd:2.4.39-2 03/07/23 03:32:48.379 + Mar 7 03:32:48.396: INFO: Updating stateful set ss2 + STEP: Creating a new revision 03/07/23 03:32:48.396 + STEP: Updating Pods in reverse ordinal order 03/07/23 03:32:58.412 + Mar 7 03:32:58.415: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1093879902 --namespace=statefulset-5609 exec ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' + Mar 7 03:32:58.593: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\n" + Mar 7 03:32:58.593: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" + Mar 7 03:32:58.593: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss2-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' + + STEP: Rolling back to a previous revision 03/07/23 03:33:08.612 + Mar 7 03:33:08.613: INFO: Running 
'/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1093879902 --namespace=statefulset-5609 exec ss2-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' + Mar 7 03:33:08.821: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n" + Mar 7 03:33:08.821: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" + Mar 7 03:33:08.821: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss2-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' + + Mar 7 03:33:18.852: INFO: Updating stateful set ss2 + STEP: Rolling back update in reverse ordinal order 03/07/23 03:33:28.865 + Mar 7 03:33:28.868: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1093879902 --namespace=statefulset-5609 exec ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' + Mar 7 03:33:29.047: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\n" + Mar 7 03:33:29.047: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" + Mar 7 03:33:29.047: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss2-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' + + [AfterEach] Basic StatefulSet functionality [StatefulSetBasic] + test/e2e/apps/statefulset.go:122 + Mar 7 03:33:39.063: INFO: Deleting all statefulset in ns statefulset-5609 + Mar 7 03:33:39.073: INFO: Scaling statefulset ss2 to 0 + Mar 7 03:33:49.111: INFO: Waiting for statefulset status.replicas updated to 0 + Mar 7 03:33:49.113: INFO: Deleting statefulset ss2 + [AfterEach] [sig-apps] StatefulSet + test/e2e/framework/framework.go:187 + Mar 7 03:33:49.136: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + STEP: Destroying namespace "statefulset-5609" for this suite. 
03/07/23 03:33:49.146 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSSSS +------------------------------ +[sig-cli] Kubectl client Guestbook application + should create and stop a working application [Conformance] + test/e2e/kubectl/kubectl.go:392 +[BeforeEach] [sig-cli] Kubectl client + test/e2e/framework/framework.go:186 +STEP: Creating a kubernetes client 03/07/23 03:33:49.152 +Mar 7 03:33:49.152: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 +STEP: Building a namespace api object, basename kubectl 03/07/23 03:33:49.153 +STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 03:33:49.175 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 03:33:49.178 +[BeforeEach] [sig-cli] Kubectl client + test/e2e/kubectl/kubectl.go:272 +[It] should create and stop a working application [Conformance] + test/e2e/kubectl/kubectl.go:392 +STEP: creating all guestbook components 03/07/23 03:33:49.181 +Mar 7 03:33:49.182: INFO: apiVersion: v1 +kind: Service +metadata: + name: agnhost-replica + labels: + app: agnhost + role: replica + tier: backend +spec: + ports: + - port: 6379 + selector: + app: agnhost + role: replica + tier: backend + +Mar 7 03:33:49.182: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1093879902 --namespace=kubectl-851 create -f -' +Mar 7 03:33:49.984: INFO: stderr: "" +Mar 7 03:33:49.984: INFO: stdout: "service/agnhost-replica created\n" +Mar 7 03:33:49.984: INFO: apiVersion: v1 +kind: Service +metadata: + name: agnhost-primary + labels: + app: agnhost + role: primary + tier: backend +spec: + ports: + - port: 6379 + targetPort: 6379 + selector: + app: agnhost + role: primary + tier: backend + +Mar 7 03:33:49.984: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1093879902 --namespace=kubectl-851 create -f -' +Mar 7 03:33:50.817: INFO: stderr: "" +Mar 7 03:33:50.817: INFO: stdout: "service/agnhost-primary created\n" +Mar 7 03:33:50.817: INFO: apiVersion: v1 +kind: Service +metadata: + name: frontend + labels: + app: guestbook + tier: frontend +spec: + # if your cluster supports it, uncomment the following to automatically create + # an external load-balanced IP for the frontend service. 
+ # type: LoadBalancer + ports: + - port: 80 + selector: + app: guestbook + tier: frontend + +Mar 7 03:33:50.817: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1093879902 --namespace=kubectl-851 create -f -' +Mar 7 03:33:51.074: INFO: stderr: "" +Mar 7 03:33:51.074: INFO: stdout: "service/frontend created\n" +Mar 7 03:33:51.074: INFO: apiVersion: apps/v1 +kind: Deployment +metadata: + name: frontend +spec: + replicas: 3 + selector: + matchLabels: + app: guestbook + tier: frontend + template: + metadata: + labels: + app: guestbook + tier: frontend + spec: + containers: + - name: guestbook-frontend + image: registry.k8s.io/e2e-test-images/agnhost:2.40 + args: [ "guestbook", "--backend-port", "6379" ] + resources: + requests: + cpu: 100m + memory: 100Mi + ports: + - containerPort: 80 + +Mar 7 03:33:51.074: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1093879902 --namespace=kubectl-851 create -f -' +Mar 7 03:33:51.331: INFO: stderr: "" +Mar 7 03:33:51.331: INFO: stdout: "deployment.apps/frontend created\n" +Mar 7 03:33:51.331: INFO: apiVersion: apps/v1 +kind: Deployment +metadata: + name: agnhost-primary +spec: + replicas: 1 + selector: + matchLabels: + app: agnhost + role: primary + tier: backend + template: + metadata: + labels: + app: agnhost + role: primary + tier: backend + spec: + containers: + - name: primary + image: registry.k8s.io/e2e-test-images/agnhost:2.40 + args: [ "guestbook", "--http-port", "6379" ] + resources: + requests: + cpu: 100m + memory: 100Mi + ports: + - containerPort: 6379 + +Mar 7 03:33:51.331: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1093879902 --namespace=kubectl-851 create -f -' +Mar 7 03:33:51.586: INFO: stderr: "" +Mar 7 03:33:51.586: INFO: stdout: "deployment.apps/agnhost-primary created\n" +Mar 7 03:33:51.586: INFO: apiVersion: apps/v1 +kind: Deployment +metadata: + name: agnhost-replica +spec: + replicas: 2 + selector: + matchLabels: + app: agnhost + role: replica + tier: backend + template: + metadata: + labels: + app: agnhost + role: replica + tier: backend + spec: + containers: + - name: replica + image: registry.k8s.io/e2e-test-images/agnhost:2.40 + args: [ "guestbook", "--replicaof", "agnhost-primary", "--http-port", "6379" ] + resources: + requests: + cpu: 100m + memory: 100Mi + ports: + - containerPort: 6379 + +Mar 7 03:33:51.586: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1093879902 --namespace=kubectl-851 create -f -' +Mar 7 03:33:51.863: INFO: stderr: "" +Mar 7 03:33:51.863: INFO: stdout: "deployment.apps/agnhost-replica created\n" +STEP: validating guestbook app 03/07/23 03:33:51.863 +Mar 7 03:33:51.863: INFO: Waiting for all frontend pods to be Running. +Mar 7 03:33:56.917: INFO: Waiting for frontend to serve content. +Mar 7 03:33:56.925: INFO: Trying to add a new entry to the guestbook. +Mar 7 03:33:56.932: INFO: Verifying that added entry can be retrieved. +STEP: using delete to clean up resources 03/07/23 03:33:56.939 +Mar 7 03:33:56.939: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1093879902 --namespace=kubectl-851 delete --grace-period=0 --force -f -' +Mar 7 03:33:57.053: INFO: stderr: "Warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" +Mar 7 03:33:57.053: INFO: stdout: "service \"agnhost-replica\" force deleted\n" +STEP: using delete to clean up resources 03/07/23 03:33:57.053 +Mar 7 03:33:57.053: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1093879902 --namespace=kubectl-851 delete --grace-period=0 --force -f -' +Mar 7 03:33:57.202: INFO: stderr: "Warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" +Mar 7 03:33:57.202: INFO: stdout: "service \"agnhost-primary\" force deleted\n" +STEP: using delete to clean up resources 03/07/23 03:33:57.202 +Mar 7 03:33:57.203: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1093879902 --namespace=kubectl-851 delete --grace-period=0 --force -f -' +Mar 7 03:33:57.329: INFO: stderr: "Warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" +Mar 7 03:33:57.329: INFO: stdout: "service \"frontend\" force deleted\n" +STEP: using delete to clean up resources 03/07/23 03:33:57.329 +Mar 7 03:33:57.329: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1093879902 --namespace=kubectl-851 delete --grace-period=0 --force -f -' +Mar 7 03:33:57.426: INFO: stderr: "Warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" +Mar 7 03:33:57.426: INFO: stdout: "deployment.apps \"frontend\" force deleted\n" +STEP: using delete to clean up resources 03/07/23 03:33:57.426 +Mar 7 03:33:57.427: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1093879902 --namespace=kubectl-851 delete --grace-period=0 --force -f -' +Mar 7 03:33:57.559: INFO: stderr: "Warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" +Mar 7 03:33:57.559: INFO: stdout: "deployment.apps \"agnhost-primary\" force deleted\n" +STEP: using delete to clean up resources 03/07/23 03:33:57.559 +Mar 7 03:33:57.559: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1093879902 --namespace=kubectl-851 delete --grace-period=0 --force -f -' +Mar 7 03:33:57.706: INFO: stderr: "Warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" +Mar 7 03:33:57.706: INFO: stdout: "deployment.apps \"agnhost-replica\" force deleted\n" +[AfterEach] [sig-cli] Kubectl client + test/e2e/framework/framework.go:187 +Mar 7 03:33:57.706: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "kubectl-851" for this suite. 
03/07/23 03:33:57.711 +{"msg":"PASSED [sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]","completed":172,"skipped":2907,"failed":0} +------------------------------ +• [SLOW TEST] [8.572 seconds] +[sig-cli] Kubectl client +test/e2e/kubectl/framework.go:23 + Guestbook application + test/e2e/kubectl/kubectl.go:367 + should create and stop a working application [Conformance] + test/e2e/kubectl/kubectl.go:392 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-cli] Kubectl client + test/e2e/framework/framework.go:186 + STEP: Creating a kubernetes client 03/07/23 03:33:49.152 + Mar 7 03:33:49.152: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 + STEP: Building a namespace api object, basename kubectl 03/07/23 03:33:49.153 + STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 03:33:49.175 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 03:33:49.178 + [BeforeEach] [sig-cli] Kubectl client + test/e2e/kubectl/kubectl.go:272 + [It] should create and stop a working application [Conformance] + test/e2e/kubectl/kubectl.go:392 + STEP: creating all guestbook components 03/07/23 03:33:49.181 + Mar 7 03:33:49.182: INFO: apiVersion: v1 + kind: Service + metadata: + name: agnhost-replica + labels: + app: agnhost + role: replica + tier: backend + spec: + ports: + - port: 6379 + selector: + app: agnhost + role: replica + tier: backend + + Mar 7 03:33:49.182: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1093879902 --namespace=kubectl-851 create -f -' + Mar 7 03:33:49.984: INFO: stderr: "" + Mar 7 03:33:49.984: INFO: stdout: "service/agnhost-replica created\n" + Mar 7 03:33:49.984: INFO: apiVersion: v1 + kind: Service + metadata: + name: agnhost-primary + labels: + app: agnhost + role: primary + tier: backend + spec: + ports: + - port: 6379 + targetPort: 6379 + selector: + app: agnhost + role: primary + tier: backend + + Mar 7 03:33:49.984: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1093879902 --namespace=kubectl-851 create -f -' + Mar 7 03:33:50.817: INFO: stderr: "" + Mar 7 03:33:50.817: INFO: stdout: "service/agnhost-primary created\n" + Mar 7 03:33:50.817: INFO: apiVersion: v1 + kind: Service + metadata: + name: frontend + labels: + app: guestbook + tier: frontend + spec: + # if your cluster supports it, uncomment the following to automatically create + # an external load-balanced IP for the frontend service. 
+ # type: LoadBalancer + ports: + - port: 80 + selector: + app: guestbook + tier: frontend + + Mar 7 03:33:50.817: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1093879902 --namespace=kubectl-851 create -f -' + Mar 7 03:33:51.074: INFO: stderr: "" + Mar 7 03:33:51.074: INFO: stdout: "service/frontend created\n" + Mar 7 03:33:51.074: INFO: apiVersion: apps/v1 + kind: Deployment + metadata: + name: frontend + spec: + replicas: 3 + selector: + matchLabels: + app: guestbook + tier: frontend + template: + metadata: + labels: + app: guestbook + tier: frontend + spec: + containers: + - name: guestbook-frontend + image: registry.k8s.io/e2e-test-images/agnhost:2.40 + args: [ "guestbook", "--backend-port", "6379" ] + resources: + requests: + cpu: 100m + memory: 100Mi + ports: + - containerPort: 80 + + Mar 7 03:33:51.074: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1093879902 --namespace=kubectl-851 create -f -' + Mar 7 03:33:51.331: INFO: stderr: "" + Mar 7 03:33:51.331: INFO: stdout: "deployment.apps/frontend created\n" + Mar 7 03:33:51.331: INFO: apiVersion: apps/v1 + kind: Deployment + metadata: + name: agnhost-primary + spec: + replicas: 1 + selector: + matchLabels: + app: agnhost + role: primary + tier: backend + template: + metadata: + labels: + app: agnhost + role: primary + tier: backend + spec: + containers: + - name: primary + image: registry.k8s.io/e2e-test-images/agnhost:2.40 + args: [ "guestbook", "--http-port", "6379" ] + resources: + requests: + cpu: 100m + memory: 100Mi + ports: + - containerPort: 6379 + + Mar 7 03:33:51.331: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1093879902 --namespace=kubectl-851 create -f -' + Mar 7 03:33:51.586: INFO: stderr: "" + Mar 7 03:33:51.586: INFO: stdout: "deployment.apps/agnhost-primary created\n" + Mar 7 03:33:51.586: INFO: apiVersion: apps/v1 + kind: Deployment + metadata: + name: agnhost-replica + spec: + replicas: 2 + selector: + matchLabels: + app: agnhost + role: replica + tier: backend + template: + metadata: + labels: + app: agnhost + role: replica + tier: backend + spec: + containers: + - name: replica + image: registry.k8s.io/e2e-test-images/agnhost:2.40 + args: [ "guestbook", "--replicaof", "agnhost-primary", "--http-port", "6379" ] + resources: + requests: + cpu: 100m + memory: 100Mi + ports: + - containerPort: 6379 + + Mar 7 03:33:51.586: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1093879902 --namespace=kubectl-851 create -f -' + Mar 7 03:33:51.863: INFO: stderr: "" + Mar 7 03:33:51.863: INFO: stdout: "deployment.apps/agnhost-replica created\n" + STEP: validating guestbook app 03/07/23 03:33:51.863 + Mar 7 03:33:51.863: INFO: Waiting for all frontend pods to be Running. + Mar 7 03:33:56.917: INFO: Waiting for frontend to serve content. + Mar 7 03:33:56.925: INFO: Trying to add a new entry to the guestbook. + Mar 7 03:33:56.932: INFO: Verifying that added entry can be retrieved. + STEP: using delete to clean up resources 03/07/23 03:33:56.939 + Mar 7 03:33:56.939: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1093879902 --namespace=kubectl-851 delete --grace-period=0 --force -f -' + Mar 7 03:33:57.053: INFO: stderr: "Warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" + Mar 7 03:33:57.053: INFO: stdout: "service \"agnhost-replica\" force deleted\n" + STEP: using delete to clean up resources 03/07/23 03:33:57.053 + Mar 7 03:33:57.053: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1093879902 --namespace=kubectl-851 delete --grace-period=0 --force -f -' + Mar 7 03:33:57.202: INFO: stderr: "Warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" + Mar 7 03:33:57.202: INFO: stdout: "service \"agnhost-primary\" force deleted\n" + STEP: using delete to clean up resources 03/07/23 03:33:57.202 + Mar 7 03:33:57.203: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1093879902 --namespace=kubectl-851 delete --grace-period=0 --force -f -' + Mar 7 03:33:57.329: INFO: stderr: "Warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" + Mar 7 03:33:57.329: INFO: stdout: "service \"frontend\" force deleted\n" + STEP: using delete to clean up resources 03/07/23 03:33:57.329 + Mar 7 03:33:57.329: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1093879902 --namespace=kubectl-851 delete --grace-period=0 --force -f -' + Mar 7 03:33:57.426: INFO: stderr: "Warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" + Mar 7 03:33:57.426: INFO: stdout: "deployment.apps \"frontend\" force deleted\n" + STEP: using delete to clean up resources 03/07/23 03:33:57.426 + Mar 7 03:33:57.427: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1093879902 --namespace=kubectl-851 delete --grace-period=0 --force -f -' + Mar 7 03:33:57.559: INFO: stderr: "Warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" + Mar 7 03:33:57.559: INFO: stdout: "deployment.apps \"agnhost-primary\" force deleted\n" + STEP: using delete to clean up resources 03/07/23 03:33:57.559 + Mar 7 03:33:57.559: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1093879902 --namespace=kubectl-851 delete --grace-period=0 --force -f -' + Mar 7 03:33:57.706: INFO: stderr: "Warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" + Mar 7 03:33:57.706: INFO: stdout: "deployment.apps \"agnhost-replica\" force deleted\n" + [AfterEach] [sig-cli] Kubectl client + test/e2e/framework/framework.go:187 + Mar 7 03:33:57.706: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + STEP: Destroying namespace "kubectl-851" for this suite. 
03/07/23 03:33:57.711 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSSSSSSSS +------------------------------ +[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + should not be able to mutate or prevent deletion of webhook configuration objects [Conformance] + test/e2e/apimachinery/webhook.go:276 +[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/framework/framework.go:186 +STEP: Creating a kubernetes client 03/07/23 03:33:57.725 +Mar 7 03:33:57.726: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 +STEP: Building a namespace api object, basename webhook 03/07/23 03:33:57.726 +STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 03:33:57.761 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 03:33:57.765 +[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/apimachinery/webhook.go:89 +STEP: Setting up server cert 03/07/23 03:33:57.786 +STEP: Create role binding to let webhook read extension-apiserver-authentication 03/07/23 03:33:58.37 +STEP: Deploying the webhook pod 03/07/23 03:33:58.376 +STEP: Wait for the deployment to be ready 03/07/23 03:33:58.389 +Mar 7 03:33:58.395: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set +Mar 7 03:34:00.403: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.March, 7, 3, 33, 58, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 7, 3, 33, 58, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.March, 7, 3, 33, 58, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 7, 3, 33, 58, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5d85dd8cdb\" is progressing."}}, CollisionCount:(*int32)(nil)} +STEP: Deploying the webhook service 03/07/23 03:34:02.406 +STEP: Verifying the service has paired with the endpoint 03/07/23 03:34:02.418 +Mar 7 03:34:03.418: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 +[It] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance] + test/e2e/apimachinery/webhook.go:276 +STEP: Registering a validating webhook on ValidatingWebhookConfiguration and MutatingWebhookConfiguration objects, via the AdmissionRegistration API 03/07/23 03:34:03.421 +STEP: Registering a mutating webhook on ValidatingWebhookConfiguration and MutatingWebhookConfiguration objects, via the AdmissionRegistration API 03/07/23 03:34:03.432 +STEP: Creating a dummy validating-webhook-configuration object 03/07/23 03:34:03.442 +STEP: Deleting the validating-webhook-configuration, which should be possible to remove 03/07/23 03:34:03.447 +STEP: Creating a dummy mutating-webhook-configuration object 03/07/23 03:34:03.451 +STEP: Deleting the mutating-webhook-configuration, which should be possible to remove 03/07/23 03:34:03.457 +[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/framework/framework.go:187 +Mar 7 03:34:03.469: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "webhook-8438" 
for this suite. 03/07/23 03:34:03.472 +STEP: Destroying namespace "webhook-8438-markers" for this suite. 03/07/23 03:34:03.476 +[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/apimachinery/webhook.go:104 +{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","completed":173,"skipped":2924,"failed":0} +------------------------------ +• [SLOW TEST] [5.805 seconds] +[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] +test/e2e/apimachinery/framework.go:23 + should not be able to mutate or prevent deletion of webhook configuration objects [Conformance] + test/e2e/apimachinery/webhook.go:276 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/framework/framework.go:186 + STEP: Creating a kubernetes client 03/07/23 03:33:57.725 + Mar 7 03:33:57.726: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 + STEP: Building a namespace api object, basename webhook 03/07/23 03:33:57.726 + STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 03:33:57.761 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 03:33:57.765 + [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/apimachinery/webhook.go:89 + STEP: Setting up server cert 03/07/23 03:33:57.786 + STEP: Create role binding to let webhook read extension-apiserver-authentication 03/07/23 03:33:58.37 + STEP: Deploying the webhook pod 03/07/23 03:33:58.376 + STEP: Wait for the deployment to be ready 03/07/23 03:33:58.389 + Mar 7 03:33:58.395: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set + Mar 7 03:34:00.403: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.March, 7, 3, 33, 58, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 7, 3, 33, 58, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.March, 7, 3, 33, 58, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 7, 3, 33, 58, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5d85dd8cdb\" is progressing."}}, CollisionCount:(*int32)(nil)} + STEP: Deploying the webhook service 03/07/23 03:34:02.406 + STEP: Verifying the service has paired with the endpoint 03/07/23 03:34:02.418 + Mar 7 03:34:03.418: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 + [It] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance] + test/e2e/apimachinery/webhook.go:276 + STEP: Registering a validating webhook on ValidatingWebhookConfiguration and MutatingWebhookConfiguration objects, via the AdmissionRegistration API 03/07/23 03:34:03.421 + STEP: Registering a mutating webhook on ValidatingWebhookConfiguration and MutatingWebhookConfiguration objects, via the AdmissionRegistration API 03/07/23 03:34:03.432 + STEP: Creating a dummy validating-webhook-configuration object 03/07/23 03:34:03.442 + STEP: Deleting the validating-webhook-configuration, 
which should be possible to remove 03/07/23 03:34:03.447 + STEP: Creating a dummy mutating-webhook-configuration object 03/07/23 03:34:03.451 + STEP: Deleting the mutating-webhook-configuration, which should be possible to remove 03/07/23 03:34:03.457 + [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/framework/framework.go:187 + Mar 7 03:34:03.469: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + STEP: Destroying namespace "webhook-8438" for this suite. 03/07/23 03:34:03.472 + STEP: Destroying namespace "webhook-8438-markers" for this suite. 03/07/23 03:34:03.476 + [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/apimachinery/webhook.go:104 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSS +------------------------------ +[sig-node] RuntimeClass + should support RuntimeClasses API operations [Conformance] + test/e2e/common/node/runtimeclass.go:189 +[BeforeEach] [sig-node] RuntimeClass + test/e2e/framework/framework.go:186 +STEP: Creating a kubernetes client 03/07/23 03:34:03.531 +Mar 7 03:34:03.531: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 +STEP: Building a namespace api object, basename runtimeclass 03/07/23 03:34:03.532 +STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 03:34:03.594 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 03:34:03.597 +[It] should support RuntimeClasses API operations [Conformance] + test/e2e/common/node/runtimeclass.go:189 +STEP: getting /apis 03/07/23 03:34:03.6 +STEP: getting /apis/node.k8s.io 03/07/23 03:34:03.602 +STEP: getting /apis/node.k8s.io/v1 03/07/23 03:34:03.602 +STEP: creating 03/07/23 03:34:03.603 +STEP: watching 03/07/23 03:34:03.637 +Mar 7 03:34:03.638: INFO: starting watch +STEP: getting 03/07/23 03:34:03.643 +STEP: listing 03/07/23 03:34:03.646 +STEP: patching 03/07/23 03:34:03.648 +STEP: updating 03/07/23 03:34:03.653 +Mar 7 03:34:03.660: INFO: waiting for watch events with expected annotations +STEP: deleting 03/07/23 03:34:03.66 +STEP: deleting a collection 03/07/23 03:34:03.669 +[AfterEach] [sig-node] RuntimeClass + test/e2e/framework/framework.go:187 +Mar 7 03:34:03.680: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "runtimeclass-8957" for this suite. 
03/07/23 03:34:03.684 +{"msg":"PASSED [sig-node] RuntimeClass should support RuntimeClasses API operations [Conformance]","completed":174,"skipped":2934,"failed":0} +------------------------------ +• [0.159 seconds] +[sig-node] RuntimeClass +test/e2e/common/node/framework.go:23 + should support RuntimeClasses API operations [Conformance] + test/e2e/common/node/runtimeclass.go:189 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-node] RuntimeClass + test/e2e/framework/framework.go:186 + STEP: Creating a kubernetes client 03/07/23 03:34:03.531 + Mar 7 03:34:03.531: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 + STEP: Building a namespace api object, basename runtimeclass 03/07/23 03:34:03.532 + STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 03:34:03.594 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 03:34:03.597 + [It] should support RuntimeClasses API operations [Conformance] + test/e2e/common/node/runtimeclass.go:189 + STEP: getting /apis 03/07/23 03:34:03.6 + STEP: getting /apis/node.k8s.io 03/07/23 03:34:03.602 + STEP: getting /apis/node.k8s.io/v1 03/07/23 03:34:03.602 + STEP: creating 03/07/23 03:34:03.603 + STEP: watching 03/07/23 03:34:03.637 + Mar 7 03:34:03.638: INFO: starting watch + STEP: getting 03/07/23 03:34:03.643 + STEP: listing 03/07/23 03:34:03.646 + STEP: patching 03/07/23 03:34:03.648 + STEP: updating 03/07/23 03:34:03.653 + Mar 7 03:34:03.660: INFO: waiting for watch events with expected annotations + STEP: deleting 03/07/23 03:34:03.66 + STEP: deleting a collection 03/07/23 03:34:03.669 + [AfterEach] [sig-node] RuntimeClass + test/e2e/framework/framework.go:187 + Mar 7 03:34:03.680: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + STEP: Destroying namespace "runtimeclass-8957" for this suite. 03/07/23 03:34:03.684 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-node] RuntimeClass + should reject a Pod requesting a deleted RuntimeClass [NodeConformance] [Conformance] + test/e2e/common/node/runtimeclass.go:156 +[BeforeEach] [sig-node] RuntimeClass + test/e2e/framework/framework.go:186 +STEP: Creating a kubernetes client 03/07/23 03:34:03.693 +Mar 7 03:34:03.693: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 +STEP: Building a namespace api object, basename runtimeclass 03/07/23 03:34:03.694 +STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 03:34:03.715 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 03:34:03.717 +[It] should reject a Pod requesting a deleted RuntimeClass [NodeConformance] [Conformance] + test/e2e/common/node/runtimeclass.go:156 +STEP: Deleting RuntimeClass runtimeclass-5356-delete-me 03/07/23 03:34:03.726 +STEP: Waiting for the RuntimeClass to disappear 03/07/23 03:34:03.731 +[AfterEach] [sig-node] RuntimeClass + test/e2e/framework/framework.go:187 +Mar 7 03:34:03.747: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "runtimeclass-5356" for this suite. 
03/07/23 03:34:03.75 +{"msg":"PASSED [sig-node] RuntimeClass should reject a Pod requesting a deleted RuntimeClass [NodeConformance] [Conformance]","completed":175,"skipped":2979,"failed":0} +------------------------------ +• [0.073 seconds] +[sig-node] RuntimeClass +test/e2e/common/node/framework.go:23 + should reject a Pod requesting a deleted RuntimeClass [NodeConformance] [Conformance] + test/e2e/common/node/runtimeclass.go:156 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-node] RuntimeClass + test/e2e/framework/framework.go:186 + STEP: Creating a kubernetes client 03/07/23 03:34:03.693 + Mar 7 03:34:03.693: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 + STEP: Building a namespace api object, basename runtimeclass 03/07/23 03:34:03.694 + STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 03:34:03.715 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 03:34:03.717 + [It] should reject a Pod requesting a deleted RuntimeClass [NodeConformance] [Conformance] + test/e2e/common/node/runtimeclass.go:156 + STEP: Deleting RuntimeClass runtimeclass-5356-delete-me 03/07/23 03:34:03.726 + STEP: Waiting for the RuntimeClass to disappear 03/07/23 03:34:03.731 + [AfterEach] [sig-node] RuntimeClass + test/e2e/framework/framework.go:187 + Mar 7 03:34:03.747: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + STEP: Destroying namespace "runtimeclass-5356" for this suite. 03/07/23 03:34:03.75 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSS +------------------------------ +[sig-node] Secrets + should patch a secret [Conformance] + test/e2e/common/node/secrets.go:153 +[BeforeEach] [sig-node] Secrets + test/e2e/framework/framework.go:186 +STEP: Creating a kubernetes client 03/07/23 03:34:03.766 +Mar 7 03:34:03.766: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 +STEP: Building a namespace api object, basename secrets 03/07/23 03:34:03.767 +STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 03:34:03.794 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 03:34:03.797 +[It] should patch a secret [Conformance] + test/e2e/common/node/secrets.go:153 +STEP: creating a secret 03/07/23 03:34:03.799 +STEP: listing secrets in all namespaces to ensure that there are more than zero 03/07/23 03:34:03.809 +STEP: patching the secret 03/07/23 03:34:03.839 +STEP: deleting the secret using a LabelSelector 03/07/23 03:34:03.851 +STEP: listing secrets in all namespaces, searching for label name and value in patch 03/07/23 03:34:03.862 +[AfterEach] [sig-node] Secrets + test/e2e/framework/framework.go:187 +Mar 7 03:34:03.865: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "secrets-4449" for this suite. 
03/07/23 03:34:03.87 +{"msg":"PASSED [sig-node] Secrets should patch a secret [Conformance]","completed":176,"skipped":2988,"failed":0} +------------------------------ +• [0.110 seconds] +[sig-node] Secrets +test/e2e/common/node/framework.go:23 + should patch a secret [Conformance] + test/e2e/common/node/secrets.go:153 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-node] Secrets + test/e2e/framework/framework.go:186 + STEP: Creating a kubernetes client 03/07/23 03:34:03.766 + Mar 7 03:34:03.766: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 + STEP: Building a namespace api object, basename secrets 03/07/23 03:34:03.767 + STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 03:34:03.794 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 03:34:03.797 + [It] should patch a secret [Conformance] + test/e2e/common/node/secrets.go:153 + STEP: creating a secret 03/07/23 03:34:03.799 + STEP: listing secrets in all namespaces to ensure that there are more than zero 03/07/23 03:34:03.809 + STEP: patching the secret 03/07/23 03:34:03.839 + STEP: deleting the secret using a LabelSelector 03/07/23 03:34:03.851 + STEP: listing secrets in all namespaces, searching for label name and value in patch 03/07/23 03:34:03.862 + [AfterEach] [sig-node] Secrets + test/e2e/framework/framework.go:187 + Mar 7 03:34:03.865: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + STEP: Destroying namespace "secrets-4449" for this suite. 03/07/23 03:34:03.87 + << End Captured GinkgoWriter Output +------------------------------ +[sig-storage] Projected downwardAPI + should provide container's cpu request [NodeConformance] [Conformance] + test/e2e/common/storage/projected_downwardapi.go:220 +[BeforeEach] [sig-storage] Projected downwardAPI + test/e2e/framework/framework.go:186 +STEP: Creating a kubernetes client 03/07/23 03:34:03.876 +Mar 7 03:34:03.877: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 +STEP: Building a namespace api object, basename projected 03/07/23 03:34:03.878 +STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 03:34:03.921 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 03:34:03.925 +[BeforeEach] [sig-storage] Projected downwardAPI + test/e2e/common/storage/projected_downwardapi.go:43 +[It] should provide container's cpu request [NodeConformance] [Conformance] + test/e2e/common/storage/projected_downwardapi.go:220 +STEP: Creating a pod to test downward API volume plugin 03/07/23 03:34:03.927 +Mar 7 03:34:03.952: INFO: Waiting up to 5m0s for pod "downwardapi-volume-51122a6d-b006-43ca-85b6-7070804b67e7" in namespace "projected-7379" to be "Succeeded or Failed" +Mar 7 03:34:03.955: INFO: Pod "downwardapi-volume-51122a6d-b006-43ca-85b6-7070804b67e7": Phase="Pending", Reason="", readiness=false. Elapsed: 3.400749ms +Mar 7 03:34:05.959: INFO: Pod "downwardapi-volume-51122a6d-b006-43ca-85b6-7070804b67e7": Phase="Running", Reason="", readiness=false. Elapsed: 2.007458406s +Mar 7 03:34:07.959: INFO: Pod "downwardapi-volume-51122a6d-b006-43ca-85b6-7070804b67e7": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.007605251s +STEP: Saw pod success 03/07/23 03:34:07.959 +Mar 7 03:34:07.959: INFO: Pod "downwardapi-volume-51122a6d-b006-43ca-85b6-7070804b67e7" satisfied condition "Succeeded or Failed" +Mar 7 03:34:07.962: INFO: Trying to get logs from node node-2 pod downwardapi-volume-51122a6d-b006-43ca-85b6-7070804b67e7 container client-container: +STEP: delete the pod 03/07/23 03:34:07.974 +Mar 7 03:34:07.982: INFO: Waiting for pod downwardapi-volume-51122a6d-b006-43ca-85b6-7070804b67e7 to disappear +Mar 7 03:34:07.991: INFO: Pod downwardapi-volume-51122a6d-b006-43ca-85b6-7070804b67e7 no longer exists +[AfterEach] [sig-storage] Projected downwardAPI + test/e2e/framework/framework.go:187 +Mar 7 03:34:07.991: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "projected-7379" for this suite. 03/07/23 03:34:07.995 +{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu request [NodeConformance] [Conformance]","completed":177,"skipped":2988,"failed":0} +------------------------------ +• [4.123 seconds] +[sig-storage] Projected downwardAPI +test/e2e/common/storage/framework.go:23 + should provide container's cpu request [NodeConformance] [Conformance] + test/e2e/common/storage/projected_downwardapi.go:220 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-storage] Projected downwardAPI + test/e2e/framework/framework.go:186 + STEP: Creating a kubernetes client 03/07/23 03:34:03.876 + Mar 7 03:34:03.877: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 + STEP: Building a namespace api object, basename projected 03/07/23 03:34:03.878 + STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 03:34:03.921 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 03:34:03.925 + [BeforeEach] [sig-storage] Projected downwardAPI + test/e2e/common/storage/projected_downwardapi.go:43 + [It] should provide container's cpu request [NodeConformance] [Conformance] + test/e2e/common/storage/projected_downwardapi.go:220 + STEP: Creating a pod to test downward API volume plugin 03/07/23 03:34:03.927 + Mar 7 03:34:03.952: INFO: Waiting up to 5m0s for pod "downwardapi-volume-51122a6d-b006-43ca-85b6-7070804b67e7" in namespace "projected-7379" to be "Succeeded or Failed" + Mar 7 03:34:03.955: INFO: Pod "downwardapi-volume-51122a6d-b006-43ca-85b6-7070804b67e7": Phase="Pending", Reason="", readiness=false. Elapsed: 3.400749ms + Mar 7 03:34:05.959: INFO: Pod "downwardapi-volume-51122a6d-b006-43ca-85b6-7070804b67e7": Phase="Running", Reason="", readiness=false. Elapsed: 2.007458406s + Mar 7 03:34:07.959: INFO: Pod "downwardapi-volume-51122a6d-b006-43ca-85b6-7070804b67e7": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.007605251s + STEP: Saw pod success 03/07/23 03:34:07.959 + Mar 7 03:34:07.959: INFO: Pod "downwardapi-volume-51122a6d-b006-43ca-85b6-7070804b67e7" satisfied condition "Succeeded or Failed" + Mar 7 03:34:07.962: INFO: Trying to get logs from node node-2 pod downwardapi-volume-51122a6d-b006-43ca-85b6-7070804b67e7 container client-container: + STEP: delete the pod 03/07/23 03:34:07.974 + Mar 7 03:34:07.982: INFO: Waiting for pod downwardapi-volume-51122a6d-b006-43ca-85b6-7070804b67e7 to disappear + Mar 7 03:34:07.991: INFO: Pod downwardapi-volume-51122a6d-b006-43ca-85b6-7070804b67e7 no longer exists + [AfterEach] [sig-storage] Projected downwardAPI + test/e2e/framework/framework.go:187 + Mar 7 03:34:07.991: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + STEP: Destroying namespace "projected-7379" for this suite. 03/07/23 03:34:07.995 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition + listing custom resource definition objects works [Conformance] + test/e2e/apimachinery/custom_resource_definition.go:85 +[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] + test/e2e/framework/framework.go:186 +STEP: Creating a kubernetes client 03/07/23 03:34:08.001 +Mar 7 03:34:08.001: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 +STEP: Building a namespace api object, basename custom-resource-definition 03/07/23 03:34:08.002 +STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 03:34:08.019 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 03:34:08.021 +[It] listing custom resource definition objects works [Conformance] + test/e2e/apimachinery/custom_resource_definition.go:85 +Mar 7 03:34:08.022: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 +[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] + test/e2e/framework/framework.go:187 +Mar 7 03:34:14.564: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "custom-resource-definition-3242" for this suite. 
03/07/23 03:34:14.568 +{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition listing custom resource definition objects works [Conformance]","completed":178,"skipped":3018,"failed":0} +------------------------------ +• [SLOW TEST] [6.583 seconds] +[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] +test/e2e/apimachinery/framework.go:23 + Simple CustomResourceDefinition + test/e2e/apimachinery/custom_resource_definition.go:50 + listing custom resource definition objects works [Conformance] + test/e2e/apimachinery/custom_resource_definition.go:85 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] + test/e2e/framework/framework.go:186 + STEP: Creating a kubernetes client 03/07/23 03:34:08.001 + Mar 7 03:34:08.001: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 + STEP: Building a namespace api object, basename custom-resource-definition 03/07/23 03:34:08.002 + STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 03:34:08.019 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 03:34:08.021 + [It] listing custom resource definition objects works [Conformance] + test/e2e/apimachinery/custom_resource_definition.go:85 + Mar 7 03:34:08.022: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 + [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] + test/e2e/framework/framework.go:187 + Mar 7 03:34:14.564: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + STEP: Destroying namespace "custom-resource-definition-3242" for this suite. 03/07/23 03:34:14.568 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-node] Pods + should run through the lifecycle of Pods and PodStatus [Conformance] + test/e2e/common/node/pods.go:895 +[BeforeEach] [sig-node] Pods + test/e2e/framework/framework.go:186 +STEP: Creating a kubernetes client 03/07/23 03:34:14.585 +Mar 7 03:34:14.585: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 +STEP: Building a namespace api object, basename pods 03/07/23 03:34:14.586 +STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 03:34:14.646 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 03:34:14.648 +[BeforeEach] [sig-node] Pods + test/e2e/common/node/pods.go:193 +[It] should run through the lifecycle of Pods and PodStatus [Conformance] + test/e2e/common/node/pods.go:895 +STEP: creating a Pod with a static label 03/07/23 03:34:14.659 +STEP: watching for Pod to be ready 03/07/23 03:34:14.665 +Mar 7 03:34:14.667: INFO: observed Pod pod-test in namespace pods-5502 in phase Pending with labels: map[test-pod-static:true] & conditions [] +Mar 7 03:34:14.671: INFO: observed Pod pod-test in namespace pods-5502 in phase Pending with labels: map[test-pod-static:true] & conditions [{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-03-07 03:34:14 +0000 UTC }] +Mar 7 03:34:14.682: INFO: observed Pod pod-test in namespace pods-5502 in phase Pending with labels: map[test-pod-static:true] & conditions [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-03-07 03:34:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-03-07 03:34:14 +0000 UTC ContainersNotReady containers with unready status: [pod-test]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 
2023-03-07 03:34:14 +0000 UTC ContainersNotReady containers with unready status: [pod-test]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-03-07 03:34:14 +0000 UTC }] +Mar 7 03:34:15.151: INFO: observed Pod pod-test in namespace pods-5502 in phase Pending with labels: map[test-pod-static:true] & conditions [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-03-07 03:34:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-03-07 03:34:14 +0000 UTC ContainersNotReady containers with unready status: [pod-test]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-03-07 03:34:14 +0000 UTC ContainersNotReady containers with unready status: [pod-test]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-03-07 03:34:14 +0000 UTC }] +Mar 7 03:34:15.805: INFO: Found Pod pod-test in namespace pods-5502 in phase Running with labels: map[test-pod-static:true] & conditions [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-03-07 03:34:14 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2023-03-07 03:34:15 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-03-07 03:34:15 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-03-07 03:34:14 +0000 UTC }] +STEP: patching the Pod with a new Label and updated data 03/07/23 03:34:15.809 +STEP: getting the Pod and ensuring that it's patched 03/07/23 03:34:15.819 +STEP: replacing the Pod's status Ready condition to False 03/07/23 03:34:15.823 +STEP: check the Pod again to ensure its Ready conditions are False 03/07/23 03:34:15.833 +STEP: deleting the Pod via a Collection with a LabelSelector 03/07/23 03:34:15.833 +STEP: watching for the Pod to be deleted 03/07/23 03:34:15.844 +Mar 7 03:34:15.846: INFO: observed event type MODIFIED +Mar 7 03:34:17.807: INFO: observed event type MODIFIED +Mar 7 03:34:17.964: INFO: observed event type MODIFIED +Mar 7 03:34:18.827: INFO: observed event type MODIFIED +Mar 7 03:34:18.839: INFO: observed event type MODIFIED +[AfterEach] [sig-node] Pods + test/e2e/framework/framework.go:187 +Mar 7 03:34:18.844: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "pods-5502" for this suite. 
03/07/23 03:34:18.848 +{"msg":"PASSED [sig-node] Pods should run through the lifecycle of Pods and PodStatus [Conformance]","completed":179,"skipped":3044,"failed":0} +------------------------------ +• [4.268 seconds] +[sig-node] Pods +test/e2e/common/node/framework.go:23 + should run through the lifecycle of Pods and PodStatus [Conformance] + test/e2e/common/node/pods.go:895 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-node] Pods + test/e2e/framework/framework.go:186 + STEP: Creating a kubernetes client 03/07/23 03:34:14.585 + Mar 7 03:34:14.585: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 + STEP: Building a namespace api object, basename pods 03/07/23 03:34:14.586 + STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 03:34:14.646 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 03:34:14.648 + [BeforeEach] [sig-node] Pods + test/e2e/common/node/pods.go:193 + [It] should run through the lifecycle of Pods and PodStatus [Conformance] + test/e2e/common/node/pods.go:895 + STEP: creating a Pod with a static label 03/07/23 03:34:14.659 + STEP: watching for Pod to be ready 03/07/23 03:34:14.665 + Mar 7 03:34:14.667: INFO: observed Pod pod-test in namespace pods-5502 in phase Pending with labels: map[test-pod-static:true] & conditions [] + Mar 7 03:34:14.671: INFO: observed Pod pod-test in namespace pods-5502 in phase Pending with labels: map[test-pod-static:true] & conditions [{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-03-07 03:34:14 +0000 UTC }] + Mar 7 03:34:14.682: INFO: observed Pod pod-test in namespace pods-5502 in phase Pending with labels: map[test-pod-static:true] & conditions [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-03-07 03:34:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-03-07 03:34:14 +0000 UTC ContainersNotReady containers with unready status: [pod-test]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-03-07 03:34:14 +0000 UTC ContainersNotReady containers with unready status: [pod-test]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-03-07 03:34:14 +0000 UTC }] + Mar 7 03:34:15.151: INFO: observed Pod pod-test in namespace pods-5502 in phase Pending with labels: map[test-pod-static:true] & conditions [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-03-07 03:34:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-03-07 03:34:14 +0000 UTC ContainersNotReady containers with unready status: [pod-test]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-03-07 03:34:14 +0000 UTC ContainersNotReady containers with unready status: [pod-test]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-03-07 03:34:14 +0000 UTC }] + Mar 7 03:34:15.805: INFO: Found Pod pod-test in namespace pods-5502 in phase Running with labels: map[test-pod-static:true] & conditions [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-03-07 03:34:14 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2023-03-07 03:34:15 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-03-07 03:34:15 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-03-07 03:34:14 +0000 UTC }] + STEP: patching the Pod with a new Label and updated data 03/07/23 03:34:15.809 + STEP: getting the Pod and ensuring that it's patched 03/07/23 03:34:15.819 + STEP: replacing the Pod's status Ready condition to False 03/07/23 03:34:15.823 + STEP: check the Pod again to ensure its Ready conditions are False 03/07/23 03:34:15.833 + STEP: deleting the Pod via a 
Collection with a LabelSelector 03/07/23 03:34:15.833 + STEP: watching for the Pod to be deleted 03/07/23 03:34:15.844 + Mar 7 03:34:15.846: INFO: observed event type MODIFIED + Mar 7 03:34:17.807: INFO: observed event type MODIFIED + Mar 7 03:34:17.964: INFO: observed event type MODIFIED + Mar 7 03:34:18.827: INFO: observed event type MODIFIED + Mar 7 03:34:18.839: INFO: observed event type MODIFIED + [AfterEach] [sig-node] Pods + test/e2e/framework/framework.go:187 + Mar 7 03:34:18.844: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + STEP: Destroying namespace "pods-5502" for this suite. 03/07/23 03:34:18.848 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-api-machinery] ResourceQuota + should verify ResourceQuota with terminating scopes. [Conformance] + test/e2e/apimachinery/resource_quota.go:680 +[BeforeEach] [sig-api-machinery] ResourceQuota + test/e2e/framework/framework.go:186 +STEP: Creating a kubernetes client 03/07/23 03:34:18.857 +Mar 7 03:34:18.857: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 +STEP: Building a namespace api object, basename resourcequota 03/07/23 03:34:18.859 +STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 03:34:18.871 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 03:34:18.873 +[It] should verify ResourceQuota with terminating scopes. [Conformance] + test/e2e/apimachinery/resource_quota.go:680 +STEP: Creating a ResourceQuota with terminating scope 03/07/23 03:34:18.875 +STEP: Ensuring ResourceQuota status is calculated 03/07/23 03:34:18.878 +STEP: Creating a ResourceQuota with not terminating scope 03/07/23 03:34:20.882 +STEP: Ensuring ResourceQuota status is calculated 03/07/23 03:34:20.885 +STEP: Creating a long running pod 03/07/23 03:34:22.888 +STEP: Ensuring resource quota with not terminating scope captures the pod usage 03/07/23 03:34:22.938 +STEP: Ensuring resource quota with terminating scope ignored the pod usage 03/07/23 03:34:24.941 +STEP: Deleting the pod 03/07/23 03:34:26.944 +STEP: Ensuring resource quota status released the pod usage 03/07/23 03:34:26.955 +STEP: Creating a terminating pod 03/07/23 03:34:28.959 +STEP: Ensuring resource quota with terminating scope captures the pod usage 03/07/23 03:34:28.967 +STEP: Ensuring resource quota with not terminating scope ignored the pod usage 03/07/23 03:34:30.971 +STEP: Deleting the pod 03/07/23 03:34:32.974 +STEP: Ensuring resource quota status released the pod usage 03/07/23 03:34:33.01 +[AfterEach] [sig-api-machinery] ResourceQuota + test/e2e/framework/framework.go:187 +Mar 7 03:34:35.013: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "resourcequota-7303" for this suite. 03/07/23 03:34:35.017 +{"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with terminating scopes. [Conformance]","completed":180,"skipped":3127,"failed":0} +------------------------------ +• [SLOW TEST] [16.199 seconds] +[sig-api-machinery] ResourceQuota +test/e2e/apimachinery/framework.go:23 + should verify ResourceQuota with terminating scopes. 
[Conformance] + test/e2e/apimachinery/resource_quota.go:680 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-api-machinery] ResourceQuota + test/e2e/framework/framework.go:186 + STEP: Creating a kubernetes client 03/07/23 03:34:18.857 + Mar 7 03:34:18.857: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 + STEP: Building a namespace api object, basename resourcequota 03/07/23 03:34:18.859 + STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 03:34:18.871 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 03:34:18.873 + [It] should verify ResourceQuota with terminating scopes. [Conformance] + test/e2e/apimachinery/resource_quota.go:680 + STEP: Creating a ResourceQuota with terminating scope 03/07/23 03:34:18.875 + STEP: Ensuring ResourceQuota status is calculated 03/07/23 03:34:18.878 + STEP: Creating a ResourceQuota with not terminating scope 03/07/23 03:34:20.882 + STEP: Ensuring ResourceQuota status is calculated 03/07/23 03:34:20.885 + STEP: Creating a long running pod 03/07/23 03:34:22.888 + STEP: Ensuring resource quota with not terminating scope captures the pod usage 03/07/23 03:34:22.938 + STEP: Ensuring resource quota with terminating scope ignored the pod usage 03/07/23 03:34:24.941 + STEP: Deleting the pod 03/07/23 03:34:26.944 + STEP: Ensuring resource quota status released the pod usage 03/07/23 03:34:26.955 + STEP: Creating a terminating pod 03/07/23 03:34:28.959 + STEP: Ensuring resource quota with terminating scope captures the pod usage 03/07/23 03:34:28.967 + STEP: Ensuring resource quota with not terminating scope ignored the pod usage 03/07/23 03:34:30.971 + STEP: Deleting the pod 03/07/23 03:34:32.974 + STEP: Ensuring resource quota status released the pod usage 03/07/23 03:34:33.01 + [AfterEach] [sig-api-machinery] ResourceQuota + test/e2e/framework/framework.go:187 + Mar 7 03:34:35.013: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + STEP: Destroying namespace "resourcequota-7303" for this suite. 03/07/23 03:34:35.017 + << End Captured GinkgoWriter Output +------------------------------ +SS +------------------------------ +[sig-node] Security Context + should support container.SecurityContext.RunAsUser And container.SecurityContext.RunAsGroup [LinuxOnly] [Conformance] + test/e2e/node/security_context.go:132 +[BeforeEach] [sig-node] Security Context + test/e2e/framework/framework.go:186 +STEP: Creating a kubernetes client 03/07/23 03:34:35.057 +Mar 7 03:34:35.057: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 +STEP: Building a namespace api object, basename security-context 03/07/23 03:34:35.058 +STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 03:34:35.072 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 03:34:35.074 +[It] should support container.SecurityContext.RunAsUser And container.SecurityContext.RunAsGroup [LinuxOnly] [Conformance] + test/e2e/node/security_context.go:132 +STEP: Creating a pod to test pod.Spec.SecurityContext.RunAsUser 03/07/23 03:34:35.076 +Mar 7 03:34:35.081: INFO: Waiting up to 5m0s for pod "security-context-c59a4efd-a37a-4113-836d-8cdbc289db7c" in namespace "security-context-5682" to be "Succeeded or Failed" +Mar 7 03:34:35.086: INFO: Pod "security-context-c59a4efd-a37a-4113-836d-8cdbc289db7c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.722576ms +Mar 7 03:34:37.089: INFO: Pod "security-context-c59a4efd-a37a-4113-836d-8cdbc289db7c": Phase="Running", Reason="", readiness=false. 
Elapsed: 2.007615872s +Mar 7 03:34:39.090: INFO: Pod "security-context-c59a4efd-a37a-4113-836d-8cdbc289db7c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.008244551s +STEP: Saw pod success 03/07/23 03:34:39.09 +Mar 7 03:34:39.090: INFO: Pod "security-context-c59a4efd-a37a-4113-836d-8cdbc289db7c" satisfied condition "Succeeded or Failed" +Mar 7 03:34:39.093: INFO: Trying to get logs from node node-2 pod security-context-c59a4efd-a37a-4113-836d-8cdbc289db7c container test-container: +STEP: delete the pod 03/07/23 03:34:39.098 +Mar 7 03:34:39.108: INFO: Waiting for pod security-context-c59a4efd-a37a-4113-836d-8cdbc289db7c to disappear +Mar 7 03:34:39.111: INFO: Pod security-context-c59a4efd-a37a-4113-836d-8cdbc289db7c no longer exists +[AfterEach] [sig-node] Security Context + test/e2e/framework/framework.go:187 +Mar 7 03:34:39.111: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "security-context-5682" for this suite. 03/07/23 03:34:39.114 +{"msg":"PASSED [sig-node] Security Context should support container.SecurityContext.RunAsUser And container.SecurityContext.RunAsGroup [LinuxOnly] [Conformance]","completed":181,"skipped":3129,"failed":0} +------------------------------ +• [4.063 seconds] +[sig-node] Security Context +test/e2e/node/framework.go:23 + should support container.SecurityContext.RunAsUser And container.SecurityContext.RunAsGroup [LinuxOnly] [Conformance] + test/e2e/node/security_context.go:132 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-node] Security Context + test/e2e/framework/framework.go:186 + STEP: Creating a kubernetes client 03/07/23 03:34:35.057 + Mar 7 03:34:35.057: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 + STEP: Building a namespace api object, basename security-context 03/07/23 03:34:35.058 + STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 03:34:35.072 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 03:34:35.074 + [It] should support container.SecurityContext.RunAsUser And container.SecurityContext.RunAsGroup [LinuxOnly] [Conformance] + test/e2e/node/security_context.go:132 + STEP: Creating a pod to test pod.Spec.SecurityContext.RunAsUser 03/07/23 03:34:35.076 + Mar 7 03:34:35.081: INFO: Waiting up to 5m0s for pod "security-context-c59a4efd-a37a-4113-836d-8cdbc289db7c" in namespace "security-context-5682" to be "Succeeded or Failed" + Mar 7 03:34:35.086: INFO: Pod "security-context-c59a4efd-a37a-4113-836d-8cdbc289db7c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.722576ms + Mar 7 03:34:37.089: INFO: Pod "security-context-c59a4efd-a37a-4113-836d-8cdbc289db7c": Phase="Running", Reason="", readiness=false. Elapsed: 2.007615872s + Mar 7 03:34:39.090: INFO: Pod "security-context-c59a4efd-a37a-4113-836d-8cdbc289db7c": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.008244551s + STEP: Saw pod success 03/07/23 03:34:39.09 + Mar 7 03:34:39.090: INFO: Pod "security-context-c59a4efd-a37a-4113-836d-8cdbc289db7c" satisfied condition "Succeeded or Failed" + Mar 7 03:34:39.093: INFO: Trying to get logs from node node-2 pod security-context-c59a4efd-a37a-4113-836d-8cdbc289db7c container test-container: + STEP: delete the pod 03/07/23 03:34:39.098 + Mar 7 03:34:39.108: INFO: Waiting for pod security-context-c59a4efd-a37a-4113-836d-8cdbc289db7c to disappear + Mar 7 03:34:39.111: INFO: Pod security-context-c59a4efd-a37a-4113-836d-8cdbc289db7c no longer exists + [AfterEach] [sig-node] Security Context + test/e2e/framework/framework.go:187 + Mar 7 03:34:39.111: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + STEP: Destroying namespace "security-context-5682" for this suite. 03/07/23 03:34:39.114 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSSSSSSSS +------------------------------ +[sig-node] InitContainer [NodeConformance] + should invoke init containers on a RestartNever pod [Conformance] + test/e2e/common/node/init_container.go:176 +[BeforeEach] [sig-node] InitContainer [NodeConformance] + test/e2e/framework/framework.go:186 +STEP: Creating a kubernetes client 03/07/23 03:34:39.12 +Mar 7 03:34:39.120: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 +STEP: Building a namespace api object, basename init-container 03/07/23 03:34:39.121 +STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 03:34:39.135 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 03:34:39.138 +[BeforeEach] [sig-node] InitContainer [NodeConformance] + test/e2e/common/node/init_container.go:164 +[It] should invoke init containers on a RestartNever pod [Conformance] + test/e2e/common/node/init_container.go:176 +STEP: creating the pod 03/07/23 03:34:39.139 +Mar 7 03:34:39.140: INFO: PodSpec: initContainers in spec.initContainers +[AfterEach] [sig-node] InitContainer [NodeConformance] + test/e2e/framework/framework.go:187 +Mar 7 03:34:44.894: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "init-container-7953" for this suite. 
03/07/23 03:34:44.9 +{"msg":"PASSED [sig-node] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance]","completed":182,"skipped":3146,"failed":0} +------------------------------ +• [SLOW TEST] [5.785 seconds] +[sig-node] InitContainer [NodeConformance] +test/e2e/common/node/framework.go:23 + should invoke init containers on a RestartNever pod [Conformance] + test/e2e/common/node/init_container.go:176 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-node] InitContainer [NodeConformance] + test/e2e/framework/framework.go:186 + STEP: Creating a kubernetes client 03/07/23 03:34:39.12 + Mar 7 03:34:39.120: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 + STEP: Building a namespace api object, basename init-container 03/07/23 03:34:39.121 + STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 03:34:39.135 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 03:34:39.138 + [BeforeEach] [sig-node] InitContainer [NodeConformance] + test/e2e/common/node/init_container.go:164 + [It] should invoke init containers on a RestartNever pod [Conformance] + test/e2e/common/node/init_container.go:176 + STEP: creating the pod 03/07/23 03:34:39.139 + Mar 7 03:34:39.140: INFO: PodSpec: initContainers in spec.initContainers + [AfterEach] [sig-node] InitContainer [NodeConformance] + test/e2e/framework/framework.go:187 + Mar 7 03:34:44.894: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + STEP: Destroying namespace "init-container-7953" for this suite. 03/07/23 03:34:44.9 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-node] Security Context When creating a pod with privileged + should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/common/node/security_context.go:527 +[BeforeEach] [sig-node] Security Context + test/e2e/framework/framework.go:186 +STEP: Creating a kubernetes client 03/07/23 03:34:44.906 +Mar 7 03:34:44.906: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 +STEP: Building a namespace api object, basename security-context-test 03/07/23 03:34:44.907 +STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 03:34:44.919 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 03:34:44.922 +[BeforeEach] [sig-node] Security Context + test/e2e/common/node/security_context.go:49 +[It] should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/common/node/security_context.go:527 +Mar 7 03:34:44.929: INFO: Waiting up to 5m0s for pod "busybox-privileged-false-d227c645-afc8-4d43-b3af-0a61c1b6cb9e" in namespace "security-context-test-8210" to be "Succeeded or Failed" +Mar 7 03:34:44.931: INFO: Pod "busybox-privileged-false-d227c645-afc8-4d43-b3af-0a61c1b6cb9e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.191957ms +Mar 7 03:34:46.939: INFO: Pod "busybox-privileged-false-d227c645-afc8-4d43-b3af-0a61c1b6cb9e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009533901s +Mar 7 03:34:48.935: INFO: Pod "busybox-privileged-false-d227c645-afc8-4d43-b3af-0a61c1b6cb9e": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.00600514s +Mar 7 03:34:48.935: INFO: Pod "busybox-privileged-false-d227c645-afc8-4d43-b3af-0a61c1b6cb9e" satisfied condition "Succeeded or Failed" +Mar 7 03:34:48.941: INFO: Got logs for pod "busybox-privileged-false-d227c645-afc8-4d43-b3af-0a61c1b6cb9e": "ip: RTNETLINK answers: Operation not permitted\n" +[AfterEach] [sig-node] Security Context + test/e2e/framework/framework.go:187 +Mar 7 03:34:48.941: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "security-context-test-8210" for this suite. 03/07/23 03:34:48.945 +{"msg":"PASSED [sig-node] Security Context When creating a pod with privileged should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]","completed":183,"skipped":3165,"failed":0} +------------------------------ +• [4.045 seconds] +[sig-node] Security Context +test/e2e/common/node/framework.go:23 + When creating a pod with privileged + test/e2e/common/node/security_context.go:490 + should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/common/node/security_context.go:527 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-node] Security Context + test/e2e/framework/framework.go:186 + STEP: Creating a kubernetes client 03/07/23 03:34:44.906 + Mar 7 03:34:44.906: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 + STEP: Building a namespace api object, basename security-context-test 03/07/23 03:34:44.907 + STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 03:34:44.919 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 03:34:44.922 + [BeforeEach] [sig-node] Security Context + test/e2e/common/node/security_context.go:49 + [It] should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/common/node/security_context.go:527 + Mar 7 03:34:44.929: INFO: Waiting up to 5m0s for pod "busybox-privileged-false-d227c645-afc8-4d43-b3af-0a61c1b6cb9e" in namespace "security-context-test-8210" to be "Succeeded or Failed" + Mar 7 03:34:44.931: INFO: Pod "busybox-privileged-false-d227c645-afc8-4d43-b3af-0a61c1b6cb9e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.191957ms + Mar 7 03:34:46.939: INFO: Pod "busybox-privileged-false-d227c645-afc8-4d43-b3af-0a61c1b6cb9e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009533901s + Mar 7 03:34:48.935: INFO: Pod "busybox-privileged-false-d227c645-afc8-4d43-b3af-0a61c1b6cb9e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.00600514s + Mar 7 03:34:48.935: INFO: Pod "busybox-privileged-false-d227c645-afc8-4d43-b3af-0a61c1b6cb9e" satisfied condition "Succeeded or Failed" + Mar 7 03:34:48.941: INFO: Got logs for pod "busybox-privileged-false-d227c645-afc8-4d43-b3af-0a61c1b6cb9e": "ip: RTNETLINK answers: Operation not permitted\n" + [AfterEach] [sig-node] Security Context + test/e2e/framework/framework.go:187 + Mar 7 03:34:48.941: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + STEP: Destroying namespace "security-context-test-8210" for this suite. 
03/07/23 03:34:48.945 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSS +------------------------------ +[sig-cli] Kubectl client Update Demo + should scale a replication controller [Conformance] + test/e2e/kubectl/kubectl.go:350 +[BeforeEach] [sig-cli] Kubectl client + test/e2e/framework/framework.go:186 +STEP: Creating a kubernetes client 03/07/23 03:34:48.952 +Mar 7 03:34:48.952: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 +STEP: Building a namespace api object, basename kubectl 03/07/23 03:34:48.953 +STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 03:34:48.965 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 03:34:48.967 +[BeforeEach] [sig-cli] Kubectl client + test/e2e/kubectl/kubectl.go:272 +[BeforeEach] Update Demo + test/e2e/kubectl/kubectl.go:324 +[It] should scale a replication controller [Conformance] + test/e2e/kubectl/kubectl.go:350 +STEP: creating a replication controller 03/07/23 03:34:48.969 +Mar 7 03:34:48.969: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1093879902 --namespace=kubectl-7064 create -f -' +Mar 7 03:34:50.110: INFO: stderr: "" +Mar 7 03:34:50.110: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" +STEP: waiting for all containers in name=update-demo pods to come up. 03/07/23 03:34:50.11 +Mar 7 03:34:50.110: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1093879902 --namespace=kubectl-7064 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' +Mar 7 03:34:50.299: INFO: stderr: "" +Mar 7 03:34:50.299: INFO: stdout: "update-demo-nautilus-nz56x update-demo-nautilus-zlwvf " +Mar 7 03:34:50.299: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1093879902 --namespace=kubectl-7064 get pods update-demo-nautilus-nz56x -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' +Mar 7 03:34:50.466: INFO: stderr: "" +Mar 7 03:34:50.466: INFO: stdout: "" +Mar 7 03:34:50.466: INFO: update-demo-nautilus-nz56x is created but not running +Mar 7 03:34:55.466: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1093879902 --namespace=kubectl-7064 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' +Mar 7 03:34:55.624: INFO: stderr: "" +Mar 7 03:34:55.624: INFO: stdout: "update-demo-nautilus-nz56x update-demo-nautilus-zlwvf " +Mar 7 03:34:55.624: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1093879902 --namespace=kubectl-7064 get pods update-demo-nautilus-nz56x -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' +Mar 7 03:34:55.783: INFO: stderr: "" +Mar 7 03:34:55.783: INFO: stdout: "true" +Mar 7 03:34:55.783: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1093879902 --namespace=kubectl-7064 get pods update-demo-nautilus-nz56x -o template --template={{if (exists . 
"spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}}' +Mar 7 03:34:55.944: INFO: stderr: "" +Mar 7 03:34:55.945: INFO: stdout: "registry.k8s.io/e2e-test-images/nautilus:1.5" +Mar 7 03:34:55.945: INFO: validating pod update-demo-nautilus-nz56x +Mar 7 03:34:55.949: INFO: got data: { + "image": "nautilus.jpg" +} + +Mar 7 03:34:55.949: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . +Mar 7 03:34:55.949: INFO: update-demo-nautilus-nz56x is verified up and running +Mar 7 03:34:55.949: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1093879902 --namespace=kubectl-7064 get pods update-demo-nautilus-zlwvf -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' +Mar 7 03:34:56.131: INFO: stderr: "" +Mar 7 03:34:56.131: INFO: stdout: "true" +Mar 7 03:34:56.131: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1093879902 --namespace=kubectl-7064 get pods update-demo-nautilus-zlwvf -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}}' +Mar 7 03:34:56.294: INFO: stderr: "" +Mar 7 03:34:56.294: INFO: stdout: "registry.k8s.io/e2e-test-images/nautilus:1.5" +Mar 7 03:34:56.294: INFO: validating pod update-demo-nautilus-zlwvf +Mar 7 03:34:56.298: INFO: got data: { + "image": "nautilus.jpg" +} + +Mar 7 03:34:56.298: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . +Mar 7 03:34:56.298: INFO: update-demo-nautilus-zlwvf is verified up and running +STEP: scaling down the replication controller 03/07/23 03:34:56.298 +Mar 7 03:34:56.300: INFO: scanned /root for discovery docs: +Mar 7 03:34:56.300: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1093879902 --namespace=kubectl-7064 scale rc update-demo-nautilus --replicas=1 --timeout=5m' +Mar 7 03:34:56.529: INFO: stderr: "" +Mar 7 03:34:56.529: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n" +STEP: waiting for all containers in name=update-demo pods to come up. 03/07/23 03:34:56.529 +Mar 7 03:34:56.529: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1093879902 --namespace=kubectl-7064 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' +Mar 7 03:34:56.709: INFO: stderr: "" +Mar 7 03:34:56.709: INFO: stdout: "update-demo-nautilus-nz56x update-demo-nautilus-zlwvf " +STEP: Replicas for name=update-demo: expected=1 actual=2 03/07/23 03:34:56.709 +Mar 7 03:35:01.709: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1093879902 --namespace=kubectl-7064 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' +Mar 7 03:35:01.886: INFO: stderr: "" +Mar 7 03:35:01.886: INFO: stdout: "update-demo-nautilus-nz56x " +Mar 7 03:35:01.887: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1093879902 --namespace=kubectl-7064 get pods update-demo-nautilus-nz56x -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}}' +Mar 7 03:35:02.064: INFO: stderr: "" +Mar 7 03:35:02.064: INFO: stdout: "true" +Mar 7 03:35:02.064: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1093879902 --namespace=kubectl-7064 get pods update-demo-nautilus-nz56x -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}}' +Mar 7 03:35:02.228: INFO: stderr: "" +Mar 7 03:35:02.228: INFO: stdout: "registry.k8s.io/e2e-test-images/nautilus:1.5" +Mar 7 03:35:02.228: INFO: validating pod update-demo-nautilus-nz56x +Mar 7 03:35:02.231: INFO: got data: { + "image": "nautilus.jpg" +} + +Mar 7 03:35:02.231: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . +Mar 7 03:35:02.231: INFO: update-demo-nautilus-nz56x is verified up and running +STEP: scaling up the replication controller 03/07/23 03:35:02.231 +Mar 7 03:35:02.232: INFO: scanned /root for discovery docs: +Mar 7 03:35:02.233: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1093879902 --namespace=kubectl-7064 scale rc update-demo-nautilus --replicas=2 --timeout=5m' +Mar 7 03:35:02.456: INFO: stderr: "" +Mar 7 03:35:02.456: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n" +STEP: waiting for all containers in name=update-demo pods to come up. 03/07/23 03:35:02.456 +Mar 7 03:35:02.456: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1093879902 --namespace=kubectl-7064 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' +Mar 7 03:35:02.639: INFO: stderr: "" +Mar 7 03:35:02.639: INFO: stdout: "update-demo-nautilus-87ct9 update-demo-nautilus-nz56x " +Mar 7 03:35:02.639: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1093879902 --namespace=kubectl-7064 get pods update-demo-nautilus-87ct9 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' +Mar 7 03:35:02.798: INFO: stderr: "" +Mar 7 03:35:02.798: INFO: stdout: "" +Mar 7 03:35:02.798: INFO: update-demo-nautilus-87ct9 is created but not running +Mar 7 03:35:07.800: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1093879902 --namespace=kubectl-7064 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' +Mar 7 03:35:07.972: INFO: stderr: "" +Mar 7 03:35:07.972: INFO: stdout: "update-demo-nautilus-87ct9 update-demo-nautilus-nz56x " +Mar 7 03:35:07.973: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1093879902 --namespace=kubectl-7064 get pods update-demo-nautilus-87ct9 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' +Mar 7 03:35:08.133: INFO: stderr: "" +Mar 7 03:35:08.133: INFO: stdout: "true" +Mar 7 03:35:08.133: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1093879902 --namespace=kubectl-7064 get pods update-demo-nautilus-87ct9 -o template --template={{if (exists . 
"spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}}' +Mar 7 03:35:08.299: INFO: stderr: "" +Mar 7 03:35:08.299: INFO: stdout: "registry.k8s.io/e2e-test-images/nautilus:1.5" +Mar 7 03:35:08.299: INFO: validating pod update-demo-nautilus-87ct9 +Mar 7 03:35:08.302: INFO: got data: { + "image": "nautilus.jpg" +} + +Mar 7 03:35:08.302: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . +Mar 7 03:35:08.302: INFO: update-demo-nautilus-87ct9 is verified up and running +Mar 7 03:35:08.302: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1093879902 --namespace=kubectl-7064 get pods update-demo-nautilus-nz56x -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' +Mar 7 03:35:08.468: INFO: stderr: "" +Mar 7 03:35:08.468: INFO: stdout: "true" +Mar 7 03:35:08.468: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1093879902 --namespace=kubectl-7064 get pods update-demo-nautilus-nz56x -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}}' +Mar 7 03:35:08.631: INFO: stderr: "" +Mar 7 03:35:08.631: INFO: stdout: "registry.k8s.io/e2e-test-images/nautilus:1.5" +Mar 7 03:35:08.631: INFO: validating pod update-demo-nautilus-nz56x +Mar 7 03:35:08.634: INFO: got data: { + "image": "nautilus.jpg" +} + +Mar 7 03:35:08.635: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . +Mar 7 03:35:08.635: INFO: update-demo-nautilus-nz56x is verified up and running +STEP: using delete to clean up resources 03/07/23 03:35:08.635 +Mar 7 03:35:08.635: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1093879902 --namespace=kubectl-7064 delete --grace-period=0 --force -f -' +Mar 7 03:35:08.726: INFO: stderr: "Warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" +Mar 7 03:35:08.726: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n" +Mar 7 03:35:08.726: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1093879902 --namespace=kubectl-7064 get rc,svc -l name=update-demo --no-headers' +Mar 7 03:35:09.030: INFO: stderr: "No resources found in kubectl-7064 namespace.\n" +Mar 7 03:35:09.030: INFO: stdout: "" +Mar 7 03:35:09.030: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1093879902 --namespace=kubectl-7064 get pods -l name=update-demo -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' +Mar 7 03:35:09.238: INFO: stderr: "" +Mar 7 03:35:09.238: INFO: stdout: "" +[AfterEach] [sig-cli] Kubectl client + test/e2e/framework/framework.go:187 +Mar 7 03:35:09.238: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "kubectl-7064" for this suite. 
03/07/23 03:35:09.244 +{"msg":"PASSED [sig-cli] Kubectl client Update Demo should scale a replication controller [Conformance]","completed":184,"skipped":3171,"failed":0} +------------------------------ +• [SLOW TEST] [20.299 seconds] +[sig-cli] Kubectl client +test/e2e/kubectl/framework.go:23 + Update Demo + test/e2e/kubectl/kubectl.go:322 + should scale a replication controller [Conformance] + test/e2e/kubectl/kubectl.go:350 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-cli] Kubectl client + test/e2e/framework/framework.go:186 + STEP: Creating a kubernetes client 03/07/23 03:34:48.952 + Mar 7 03:34:48.952: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 + STEP: Building a namespace api object, basename kubectl 03/07/23 03:34:48.953 + STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 03:34:48.965 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 03:34:48.967 + [BeforeEach] [sig-cli] Kubectl client + test/e2e/kubectl/kubectl.go:272 + [BeforeEach] Update Demo + test/e2e/kubectl/kubectl.go:324 + [It] should scale a replication controller [Conformance] + test/e2e/kubectl/kubectl.go:350 + STEP: creating a replication controller 03/07/23 03:34:48.969 + Mar 7 03:34:48.969: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1093879902 --namespace=kubectl-7064 create -f -' + Mar 7 03:34:50.110: INFO: stderr: "" + Mar 7 03:34:50.110: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" + STEP: waiting for all containers in name=update-demo pods to come up. 03/07/23 03:34:50.11 + Mar 7 03:34:50.110: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1093879902 --namespace=kubectl-7064 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' + Mar 7 03:34:50.299: INFO: stderr: "" + Mar 7 03:34:50.299: INFO: stdout: "update-demo-nautilus-nz56x update-demo-nautilus-zlwvf " + Mar 7 03:34:50.299: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1093879902 --namespace=kubectl-7064 get pods update-demo-nautilus-nz56x -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' + Mar 7 03:34:50.466: INFO: stderr: "" + Mar 7 03:34:50.466: INFO: stdout: "" + Mar 7 03:34:50.466: INFO: update-demo-nautilus-nz56x is created but not running + Mar 7 03:34:55.466: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1093879902 --namespace=kubectl-7064 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' + Mar 7 03:34:55.624: INFO: stderr: "" + Mar 7 03:34:55.624: INFO: stdout: "update-demo-nautilus-nz56x update-demo-nautilus-zlwvf " + Mar 7 03:34:55.624: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1093879902 --namespace=kubectl-7064 get pods update-demo-nautilus-nz56x -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' + Mar 7 03:34:55.783: INFO: stderr: "" + Mar 7 03:34:55.783: INFO: stdout: "true" + Mar 7 03:34:55.783: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1093879902 --namespace=kubectl-7064 get pods update-demo-nautilus-nz56x -o template --template={{if (exists . 
"spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}}' + Mar 7 03:34:55.944: INFO: stderr: "" + Mar 7 03:34:55.945: INFO: stdout: "registry.k8s.io/e2e-test-images/nautilus:1.5" + Mar 7 03:34:55.945: INFO: validating pod update-demo-nautilus-nz56x + Mar 7 03:34:55.949: INFO: got data: { + "image": "nautilus.jpg" + } + + Mar 7 03:34:55.949: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . + Mar 7 03:34:55.949: INFO: update-demo-nautilus-nz56x is verified up and running + Mar 7 03:34:55.949: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1093879902 --namespace=kubectl-7064 get pods update-demo-nautilus-zlwvf -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' + Mar 7 03:34:56.131: INFO: stderr: "" + Mar 7 03:34:56.131: INFO: stdout: "true" + Mar 7 03:34:56.131: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1093879902 --namespace=kubectl-7064 get pods update-demo-nautilus-zlwvf -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}}' + Mar 7 03:34:56.294: INFO: stderr: "" + Mar 7 03:34:56.294: INFO: stdout: "registry.k8s.io/e2e-test-images/nautilus:1.5" + Mar 7 03:34:56.294: INFO: validating pod update-demo-nautilus-zlwvf + Mar 7 03:34:56.298: INFO: got data: { + "image": "nautilus.jpg" + } + + Mar 7 03:34:56.298: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . + Mar 7 03:34:56.298: INFO: update-demo-nautilus-zlwvf is verified up and running + STEP: scaling down the replication controller 03/07/23 03:34:56.298 + Mar 7 03:34:56.300: INFO: scanned /root for discovery docs: + Mar 7 03:34:56.300: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1093879902 --namespace=kubectl-7064 scale rc update-demo-nautilus --replicas=1 --timeout=5m' + Mar 7 03:34:56.529: INFO: stderr: "" + Mar 7 03:34:56.529: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n" + STEP: waiting for all containers in name=update-demo pods to come up. 03/07/23 03:34:56.529 + Mar 7 03:34:56.529: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1093879902 --namespace=kubectl-7064 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' + Mar 7 03:34:56.709: INFO: stderr: "" + Mar 7 03:34:56.709: INFO: stdout: "update-demo-nautilus-nz56x update-demo-nautilus-zlwvf " + STEP: Replicas for name=update-demo: expected=1 actual=2 03/07/23 03:34:56.709 + Mar 7 03:35:01.709: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1093879902 --namespace=kubectl-7064 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' + Mar 7 03:35:01.886: INFO: stderr: "" + Mar 7 03:35:01.886: INFO: stdout: "update-demo-nautilus-nz56x " + Mar 7 03:35:01.887: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1093879902 --namespace=kubectl-7064 get pods update-demo-nautilus-nz56x -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}}' + Mar 7 03:35:02.064: INFO: stderr: "" + Mar 7 03:35:02.064: INFO: stdout: "true" + Mar 7 03:35:02.064: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1093879902 --namespace=kubectl-7064 get pods update-demo-nautilus-nz56x -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}}' + Mar 7 03:35:02.228: INFO: stderr: "" + Mar 7 03:35:02.228: INFO: stdout: "registry.k8s.io/e2e-test-images/nautilus:1.5" + Mar 7 03:35:02.228: INFO: validating pod update-demo-nautilus-nz56x + Mar 7 03:35:02.231: INFO: got data: { + "image": "nautilus.jpg" + } + + Mar 7 03:35:02.231: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . + Mar 7 03:35:02.231: INFO: update-demo-nautilus-nz56x is verified up and running + STEP: scaling up the replication controller 03/07/23 03:35:02.231 + Mar 7 03:35:02.232: INFO: scanned /root for discovery docs: + Mar 7 03:35:02.233: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1093879902 --namespace=kubectl-7064 scale rc update-demo-nautilus --replicas=2 --timeout=5m' + Mar 7 03:35:02.456: INFO: stderr: "" + Mar 7 03:35:02.456: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n" + STEP: waiting for all containers in name=update-demo pods to come up. 03/07/23 03:35:02.456 + Mar 7 03:35:02.456: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1093879902 --namespace=kubectl-7064 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' + Mar 7 03:35:02.639: INFO: stderr: "" + Mar 7 03:35:02.639: INFO: stdout: "update-demo-nautilus-87ct9 update-demo-nautilus-nz56x " + Mar 7 03:35:02.639: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1093879902 --namespace=kubectl-7064 get pods update-demo-nautilus-87ct9 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' + Mar 7 03:35:02.798: INFO: stderr: "" + Mar 7 03:35:02.798: INFO: stdout: "" + Mar 7 03:35:02.798: INFO: update-demo-nautilus-87ct9 is created but not running + Mar 7 03:35:07.800: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1093879902 --namespace=kubectl-7064 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' + Mar 7 03:35:07.972: INFO: stderr: "" + Mar 7 03:35:07.972: INFO: stdout: "update-demo-nautilus-87ct9 update-demo-nautilus-nz56x " + Mar 7 03:35:07.973: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1093879902 --namespace=kubectl-7064 get pods update-demo-nautilus-87ct9 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' + Mar 7 03:35:08.133: INFO: stderr: "" + Mar 7 03:35:08.133: INFO: stdout: "true" + Mar 7 03:35:08.133: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1093879902 --namespace=kubectl-7064 get pods update-demo-nautilus-87ct9 -o template --template={{if (exists . 
"spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}}' + Mar 7 03:35:08.299: INFO: stderr: "" + Mar 7 03:35:08.299: INFO: stdout: "registry.k8s.io/e2e-test-images/nautilus:1.5" + Mar 7 03:35:08.299: INFO: validating pod update-demo-nautilus-87ct9 + Mar 7 03:35:08.302: INFO: got data: { + "image": "nautilus.jpg" + } + + Mar 7 03:35:08.302: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . + Mar 7 03:35:08.302: INFO: update-demo-nautilus-87ct9 is verified up and running + Mar 7 03:35:08.302: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1093879902 --namespace=kubectl-7064 get pods update-demo-nautilus-nz56x -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' + Mar 7 03:35:08.468: INFO: stderr: "" + Mar 7 03:35:08.468: INFO: stdout: "true" + Mar 7 03:35:08.468: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1093879902 --namespace=kubectl-7064 get pods update-demo-nautilus-nz56x -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}}' + Mar 7 03:35:08.631: INFO: stderr: "" + Mar 7 03:35:08.631: INFO: stdout: "registry.k8s.io/e2e-test-images/nautilus:1.5" + Mar 7 03:35:08.631: INFO: validating pod update-demo-nautilus-nz56x + Mar 7 03:35:08.634: INFO: got data: { + "image": "nautilus.jpg" + } + + Mar 7 03:35:08.635: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . + Mar 7 03:35:08.635: INFO: update-demo-nautilus-nz56x is verified up and running + STEP: using delete to clean up resources 03/07/23 03:35:08.635 + Mar 7 03:35:08.635: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1093879902 --namespace=kubectl-7064 delete --grace-period=0 --force -f -' + Mar 7 03:35:08.726: INFO: stderr: "Warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" + Mar 7 03:35:08.726: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n" + Mar 7 03:35:08.726: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1093879902 --namespace=kubectl-7064 get rc,svc -l name=update-demo --no-headers' + Mar 7 03:35:09.030: INFO: stderr: "No resources found in kubectl-7064 namespace.\n" + Mar 7 03:35:09.030: INFO: stdout: "" + Mar 7 03:35:09.030: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1093879902 --namespace=kubectl-7064 get pods -l name=update-demo -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' + Mar 7 03:35:09.238: INFO: stderr: "" + Mar 7 03:35:09.238: INFO: stdout: "" + [AfterEach] [sig-cli] Kubectl client + test/e2e/framework/framework.go:187 + Mar 7 03:35:09.238: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + STEP: Destroying namespace "kubectl-7064" for this suite. 
03/07/23 03:35:09.244 + << End Captured GinkgoWriter Output +------------------------------ +SSSS +------------------------------ +[sig-apps] Job + should adopt matching orphans and release non-matching pods [Conformance] + test/e2e/apps/job.go:335 +[BeforeEach] [sig-apps] Job + test/e2e/framework/framework.go:186 +STEP: Creating a kubernetes client 03/07/23 03:35:09.251 +Mar 7 03:35:09.251: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 +STEP: Building a namespace api object, basename job 03/07/23 03:35:09.252 +STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 03:35:09.272 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 03:35:09.274 +[It] should adopt matching orphans and release non-matching pods [Conformance] + test/e2e/apps/job.go:335 +STEP: Creating a job 03/07/23 03:35:09.278 +STEP: Ensuring active pods == parallelism 03/07/23 03:35:09.282 +STEP: Orphaning one of the Job's Pods 03/07/23 03:35:11.285 +Mar 7 03:35:11.798: INFO: Successfully updated pod "adopt-release-ctddx" +STEP: Checking that the Job readopts the Pod 03/07/23 03:35:11.798 +Mar 7 03:35:11.798: INFO: Waiting up to 15m0s for pod "adopt-release-ctddx" in namespace "job-6522" to be "adopted" +Mar 7 03:35:11.802: INFO: Pod "adopt-release-ctddx": Phase="Running", Reason="", readiness=true. Elapsed: 3.946736ms +Mar 7 03:35:13.806: INFO: Pod "adopt-release-ctddx": Phase="Running", Reason="", readiness=true. Elapsed: 2.007330847s +Mar 7 03:35:13.806: INFO: Pod "adopt-release-ctddx" satisfied condition "adopted" +STEP: Removing the labels from the Job's Pod 03/07/23 03:35:13.806 +Mar 7 03:35:14.315: INFO: Successfully updated pod "adopt-release-ctddx" +STEP: Checking that the Job releases the Pod 03/07/23 03:35:14.315 +Mar 7 03:35:14.315: INFO: Waiting up to 15m0s for pod "adopt-release-ctddx" in namespace "job-6522" to be "released" +Mar 7 03:35:14.319: INFO: Pod "adopt-release-ctddx": Phase="Running", Reason="", readiness=true. Elapsed: 3.957564ms +Mar 7 03:35:16.323: INFO: Pod "adopt-release-ctddx": Phase="Running", Reason="", readiness=true. Elapsed: 2.00737088s +Mar 7 03:35:16.323: INFO: Pod "adopt-release-ctddx" satisfied condition "released" +[AfterEach] [sig-apps] Job + test/e2e/framework/framework.go:187 +Mar 7 03:35:16.323: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "job-6522" for this suite. 
03/07/23 03:35:16.326 +{"msg":"PASSED [sig-apps] Job should adopt matching orphans and release non-matching pods [Conformance]","completed":185,"skipped":3175,"failed":0} +------------------------------ +• [SLOW TEST] [7.079 seconds] +[sig-apps] Job +test/e2e/apps/framework.go:23 + should adopt matching orphans and release non-matching pods [Conformance] + test/e2e/apps/job.go:335 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-apps] Job + test/e2e/framework/framework.go:186 + STEP: Creating a kubernetes client 03/07/23 03:35:09.251 + Mar 7 03:35:09.251: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 + STEP: Building a namespace api object, basename job 03/07/23 03:35:09.252 + STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 03:35:09.272 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 03:35:09.274 + [It] should adopt matching orphans and release non-matching pods [Conformance] + test/e2e/apps/job.go:335 + STEP: Creating a job 03/07/23 03:35:09.278 + STEP: Ensuring active pods == parallelism 03/07/23 03:35:09.282 + STEP: Orphaning one of the Job's Pods 03/07/23 03:35:11.285 + Mar 7 03:35:11.798: INFO: Successfully updated pod "adopt-release-ctddx" + STEP: Checking that the Job readopts the Pod 03/07/23 03:35:11.798 + Mar 7 03:35:11.798: INFO: Waiting up to 15m0s for pod "adopt-release-ctddx" in namespace "job-6522" to be "adopted" + Mar 7 03:35:11.802: INFO: Pod "adopt-release-ctddx": Phase="Running", Reason="", readiness=true. Elapsed: 3.946736ms + Mar 7 03:35:13.806: INFO: Pod "adopt-release-ctddx": Phase="Running", Reason="", readiness=true. Elapsed: 2.007330847s + Mar 7 03:35:13.806: INFO: Pod "adopt-release-ctddx" satisfied condition "adopted" + STEP: Removing the labels from the Job's Pod 03/07/23 03:35:13.806 + Mar 7 03:35:14.315: INFO: Successfully updated pod "adopt-release-ctddx" + STEP: Checking that the Job releases the Pod 03/07/23 03:35:14.315 + Mar 7 03:35:14.315: INFO: Waiting up to 15m0s for pod "adopt-release-ctddx" in namespace "job-6522" to be "released" + Mar 7 03:35:14.319: INFO: Pod "adopt-release-ctddx": Phase="Running", Reason="", readiness=true. Elapsed: 3.957564ms + Mar 7 03:35:16.323: INFO: Pod "adopt-release-ctddx": Phase="Running", Reason="", readiness=true. Elapsed: 2.00737088s + Mar 7 03:35:16.323: INFO: Pod "adopt-release-ctddx" satisfied condition "released" + [AfterEach] [sig-apps] Job + test/e2e/framework/framework.go:187 + Mar 7 03:35:16.323: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + STEP: Destroying namespace "job-6522" for this suite. 
03/07/23 03:35:16.326 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-storage] Secrets + should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/common/storage/secrets_volume.go:67 +[BeforeEach] [sig-storage] Secrets + test/e2e/framework/framework.go:186 +STEP: Creating a kubernetes client 03/07/23 03:35:16.333 +Mar 7 03:35:16.333: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 +STEP: Building a namespace api object, basename secrets 03/07/23 03:35:16.336 +STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 03:35:16.349 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 03:35:16.351 +[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/common/storage/secrets_volume.go:67 +STEP: Creating secret with name secret-test-ce9b2d46-7aa2-4f41-92a9-93156c96abb4 03/07/23 03:35:16.353 +STEP: Creating a pod to test consume secrets 03/07/23 03:35:16.356 +Mar 7 03:35:16.362: INFO: Waiting up to 5m0s for pod "pod-secrets-fbd47ec7-0506-4cbf-b10c-694650dffd30" in namespace "secrets-5802" to be "Succeeded or Failed" +Mar 7 03:35:16.365: INFO: Pod "pod-secrets-fbd47ec7-0506-4cbf-b10c-694650dffd30": Phase="Pending", Reason="", readiness=false. Elapsed: 3.047415ms +Mar 7 03:35:18.369: INFO: Pod "pod-secrets-fbd47ec7-0506-4cbf-b10c-694650dffd30": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006990119s +Mar 7 03:35:20.369: INFO: Pod "pod-secrets-fbd47ec7-0506-4cbf-b10c-694650dffd30": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.007218171s +STEP: Saw pod success 03/07/23 03:35:20.369 +Mar 7 03:35:20.369: INFO: Pod "pod-secrets-fbd47ec7-0506-4cbf-b10c-694650dffd30" satisfied condition "Succeeded or Failed" +Mar 7 03:35:20.371: INFO: Trying to get logs from node node-2 pod pod-secrets-fbd47ec7-0506-4cbf-b10c-694650dffd30 container secret-volume-test: +STEP: delete the pod 03/07/23 03:35:20.376 +Mar 7 03:35:20.389: INFO: Waiting for pod pod-secrets-fbd47ec7-0506-4cbf-b10c-694650dffd30 to disappear +Mar 7 03:35:20.391: INFO: Pod pod-secrets-fbd47ec7-0506-4cbf-b10c-694650dffd30 no longer exists +[AfterEach] [sig-storage] Secrets + test/e2e/framework/framework.go:187 +Mar 7 03:35:20.392: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "secrets-5802" for this suite. 
03/07/23 03:35:20.395 +{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","completed":186,"skipped":3221,"failed":0} +------------------------------ +• [4.067 seconds] +[sig-storage] Secrets +test/e2e/common/storage/framework.go:23 + should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/common/storage/secrets_volume.go:67 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-storage] Secrets + test/e2e/framework/framework.go:186 + STEP: Creating a kubernetes client 03/07/23 03:35:16.333 + Mar 7 03:35:16.333: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 + STEP: Building a namespace api object, basename secrets 03/07/23 03:35:16.336 + STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 03:35:16.349 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 03:35:16.351 + [It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/common/storage/secrets_volume.go:67 + STEP: Creating secret with name secret-test-ce9b2d46-7aa2-4f41-92a9-93156c96abb4 03/07/23 03:35:16.353 + STEP: Creating a pod to test consume secrets 03/07/23 03:35:16.356 + Mar 7 03:35:16.362: INFO: Waiting up to 5m0s for pod "pod-secrets-fbd47ec7-0506-4cbf-b10c-694650dffd30" in namespace "secrets-5802" to be "Succeeded or Failed" + Mar 7 03:35:16.365: INFO: Pod "pod-secrets-fbd47ec7-0506-4cbf-b10c-694650dffd30": Phase="Pending", Reason="", readiness=false. Elapsed: 3.047415ms + Mar 7 03:35:18.369: INFO: Pod "pod-secrets-fbd47ec7-0506-4cbf-b10c-694650dffd30": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006990119s + Mar 7 03:35:20.369: INFO: Pod "pod-secrets-fbd47ec7-0506-4cbf-b10c-694650dffd30": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.007218171s + STEP: Saw pod success 03/07/23 03:35:20.369 + Mar 7 03:35:20.369: INFO: Pod "pod-secrets-fbd47ec7-0506-4cbf-b10c-694650dffd30" satisfied condition "Succeeded or Failed" + Mar 7 03:35:20.371: INFO: Trying to get logs from node node-2 pod pod-secrets-fbd47ec7-0506-4cbf-b10c-694650dffd30 container secret-volume-test: + STEP: delete the pod 03/07/23 03:35:20.376 + Mar 7 03:35:20.389: INFO: Waiting for pod pod-secrets-fbd47ec7-0506-4cbf-b10c-694650dffd30 to disappear + Mar 7 03:35:20.391: INFO: Pod pod-secrets-fbd47ec7-0506-4cbf-b10c-694650dffd30 no longer exists + [AfterEach] [sig-storage] Secrets + test/e2e/framework/framework.go:187 + Mar 7 03:35:20.392: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + STEP: Destroying namespace "secrets-5802" for this suite. 
03/07/23 03:35:20.395 + << End Captured GinkgoWriter Output +------------------------------ +S +------------------------------ +[sig-network] Services + should serve a basic endpoint from pods [Conformance] + test/e2e/network/service.go:791 +[BeforeEach] [sig-network] Services + test/e2e/framework/framework.go:186 +STEP: Creating a kubernetes client 03/07/23 03:35:20.4 +Mar 7 03:35:20.400: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 +STEP: Building a namespace api object, basename services 03/07/23 03:35:20.401 +STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 03:35:20.416 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 03:35:20.419 +[BeforeEach] [sig-network] Services + test/e2e/network/service.go:758 +[It] should serve a basic endpoint from pods [Conformance] + test/e2e/network/service.go:791 +STEP: creating service endpoint-test2 in namespace services-6351 03/07/23 03:35:20.42 +STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-6351 to expose endpoints map[] 03/07/23 03:35:20.435 +Mar 7 03:35:20.439: INFO: Failed go get Endpoints object: endpoints "endpoint-test2" not found +Mar 7 03:35:21.443: INFO: successfully validated that service endpoint-test2 in namespace services-6351 exposes endpoints map[] +STEP: Creating pod pod1 in namespace services-6351 03/07/23 03:35:21.443 +Mar 7 03:35:21.450: INFO: Waiting up to 5m0s for pod "pod1" in namespace "services-6351" to be "running and ready" +Mar 7 03:35:21.452: INFO: Pod "pod1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.393447ms +Mar 7 03:35:21.452: INFO: The phase of Pod pod1 is Pending, waiting for it to be Running (with Ready = true) +Mar 7 03:35:23.455: INFO: Pod "pod1": Phase="Running", Reason="", readiness=true. Elapsed: 2.005557791s +Mar 7 03:35:23.455: INFO: The phase of Pod pod1 is Running (Ready = true) +Mar 7 03:35:23.455: INFO: Pod "pod1" satisfied condition "running and ready" +STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-6351 to expose endpoints map[pod1:[80]] 03/07/23 03:35:23.458 +Mar 7 03:35:23.465: INFO: successfully validated that service endpoint-test2 in namespace services-6351 exposes endpoints map[pod1:[80]] +STEP: Checking if the Service forwards traffic to pod1 03/07/23 03:35:23.465 +Mar 7 03:35:23.465: INFO: Creating new exec pod +Mar 7 03:35:23.470: INFO: Waiting up to 5m0s for pod "execpodsbb28" in namespace "services-6351" to be "running" +Mar 7 03:35:23.472: INFO: Pod "execpodsbb28": Phase="Pending", Reason="", readiness=false. Elapsed: 2.234263ms +Mar 7 03:35:25.475: INFO: Pod "execpodsbb28": Phase="Running", Reason="", readiness=true. 
Elapsed: 2.005137788s +Mar 7 03:35:25.475: INFO: Pod "execpodsbb28" satisfied condition "running" +Mar 7 03:35:26.476: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1093879902 --namespace=services-6351 exec execpodsbb28 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 endpoint-test2 80' +Mar 7 03:35:26.661: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 endpoint-test2 80\nConnection to endpoint-test2 80 port [tcp/http] succeeded!\n" +Mar 7 03:35:26.661: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request" +Mar 7 03:35:26.661: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1093879902 --namespace=services-6351 exec execpodsbb28 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.96.14.155 80' +Mar 7 03:35:26.852: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 10.96.14.155 80\nConnection to 10.96.14.155 80 port [tcp/http] succeeded!\n" +Mar 7 03:35:26.852: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request" +STEP: Creating pod pod2 in namespace services-6351 03/07/23 03:35:26.852 +Mar 7 03:35:26.856: INFO: Waiting up to 5m0s for pod "pod2" in namespace "services-6351" to be "running and ready" +Mar 7 03:35:26.860: INFO: Pod "pod2": Phase="Pending", Reason="", readiness=false. Elapsed: 4.009851ms +Mar 7 03:35:26.860: INFO: The phase of Pod pod2 is Pending, waiting for it to be Running (with Ready = true) +Mar 7 03:35:28.864: INFO: Pod "pod2": Phase="Running", Reason="", readiness=true. Elapsed: 2.007943815s +Mar 7 03:35:28.864: INFO: The phase of Pod pod2 is Running (Ready = true) +Mar 7 03:35:28.864: INFO: Pod "pod2" satisfied condition "running and ready" +STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-6351 to expose endpoints map[pod1:[80] pod2:[80]] 03/07/23 03:35:28.867 +Mar 7 03:35:28.877: INFO: successfully validated that service endpoint-test2 in namespace services-6351 exposes endpoints map[pod1:[80] pod2:[80]] +STEP: Checking if the Service forwards traffic to pod1 and pod2 03/07/23 03:35:28.877 +Mar 7 03:35:29.878: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1093879902 --namespace=services-6351 exec execpodsbb28 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 endpoint-test2 80' +Mar 7 03:35:30.060: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 endpoint-test2 80\nConnection to endpoint-test2 80 port [tcp/http] succeeded!\n" +Mar 7 03:35:30.060: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request" +Mar 7 03:35:30.060: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1093879902 --namespace=services-6351 exec execpodsbb28 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.96.14.155 80' +Mar 7 03:35:30.234: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 10.96.14.155 80\nConnection to 10.96.14.155 80 port [tcp/http] succeeded!\n" +Mar 7 03:35:30.234: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request" +STEP: Deleting pod pod1 in namespace services-6351 03/07/23 03:35:30.234 +STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-6351 to expose endpoints map[pod2:[80]] 03/07/23 03:35:30.254 +Mar 7 03:35:30.277: INFO: successfully validated that service endpoint-test2 in namespace services-6351 exposes endpoints map[pod2:[80]] +STEP: Checking if the Service forwards 
traffic to pod2 03/07/23 03:35:30.277 +Mar 7 03:35:31.279: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1093879902 --namespace=services-6351 exec execpodsbb28 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 endpoint-test2 80' +Mar 7 03:35:31.470: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 endpoint-test2 80\nConnection to endpoint-test2 80 port [tcp/http] succeeded!\n" +Mar 7 03:35:31.470: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request" +Mar 7 03:35:31.470: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1093879902 --namespace=services-6351 exec execpodsbb28 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.96.14.155 80' +Mar 7 03:35:31.649: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 10.96.14.155 80\nConnection to 10.96.14.155 80 port [tcp/http] succeeded!\n" +Mar 7 03:35:31.649: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request" +STEP: Deleting pod pod2 in namespace services-6351 03/07/23 03:35:31.649 +STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-6351 to expose endpoints map[] 03/07/23 03:35:31.671 +Mar 7 03:35:31.683: INFO: successfully validated that service endpoint-test2 in namespace services-6351 exposes endpoints map[] +[AfterEach] [sig-network] Services + test/e2e/framework/framework.go:187 +Mar 7 03:35:31.703: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "services-6351" for this suite. 03/07/23 03:35:31.709 +[AfterEach] [sig-network] Services + test/e2e/network/service.go:762 +{"msg":"PASSED [sig-network] Services should serve a basic endpoint from pods [Conformance]","completed":187,"skipped":3222,"failed":0} +------------------------------ +• [SLOW TEST] [11.313 seconds] +[sig-network] Services +test/e2e/network/common/framework.go:23 + should serve a basic endpoint from pods [Conformance] + test/e2e/network/service.go:791 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-network] Services + test/e2e/framework/framework.go:186 + STEP: Creating a kubernetes client 03/07/23 03:35:20.4 + Mar 7 03:35:20.400: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 + STEP: Building a namespace api object, basename services 03/07/23 03:35:20.401 + STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 03:35:20.416 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 03:35:20.419 + [BeforeEach] [sig-network] Services + test/e2e/network/service.go:758 + [It] should serve a basic endpoint from pods [Conformance] + test/e2e/network/service.go:791 + STEP: creating service endpoint-test2 in namespace services-6351 03/07/23 03:35:20.42 + STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-6351 to expose endpoints map[] 03/07/23 03:35:20.435 + Mar 7 03:35:20.439: INFO: Failed go get Endpoints object: endpoints "endpoint-test2" not found + Mar 7 03:35:21.443: INFO: successfully validated that service endpoint-test2 in namespace services-6351 exposes endpoints map[] + STEP: Creating pod pod1 in namespace services-6351 03/07/23 03:35:21.443 + Mar 7 03:35:21.450: INFO: Waiting up to 5m0s for pod "pod1" in namespace "services-6351" to be "running and ready" + Mar 7 03:35:21.452: INFO: Pod "pod1": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.393447ms + Mar 7 03:35:21.452: INFO: The phase of Pod pod1 is Pending, waiting for it to be Running (with Ready = true) + Mar 7 03:35:23.455: INFO: Pod "pod1": Phase="Running", Reason="", readiness=true. Elapsed: 2.005557791s + Mar 7 03:35:23.455: INFO: The phase of Pod pod1 is Running (Ready = true) + Mar 7 03:35:23.455: INFO: Pod "pod1" satisfied condition "running and ready" + STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-6351 to expose endpoints map[pod1:[80]] 03/07/23 03:35:23.458 + Mar 7 03:35:23.465: INFO: successfully validated that service endpoint-test2 in namespace services-6351 exposes endpoints map[pod1:[80]] + STEP: Checking if the Service forwards traffic to pod1 03/07/23 03:35:23.465 + Mar 7 03:35:23.465: INFO: Creating new exec pod + Mar 7 03:35:23.470: INFO: Waiting up to 5m0s for pod "execpodsbb28" in namespace "services-6351" to be "running" + Mar 7 03:35:23.472: INFO: Pod "execpodsbb28": Phase="Pending", Reason="", readiness=false. Elapsed: 2.234263ms + Mar 7 03:35:25.475: INFO: Pod "execpodsbb28": Phase="Running", Reason="", readiness=true. Elapsed: 2.005137788s + Mar 7 03:35:25.475: INFO: Pod "execpodsbb28" satisfied condition "running" + Mar 7 03:35:26.476: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1093879902 --namespace=services-6351 exec execpodsbb28 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 endpoint-test2 80' + Mar 7 03:35:26.661: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 endpoint-test2 80\nConnection to endpoint-test2 80 port [tcp/http] succeeded!\n" + Mar 7 03:35:26.661: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request" + Mar 7 03:35:26.661: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1093879902 --namespace=services-6351 exec execpodsbb28 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.96.14.155 80' + Mar 7 03:35:26.852: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 10.96.14.155 80\nConnection to 10.96.14.155 80 port [tcp/http] succeeded!\n" + Mar 7 03:35:26.852: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request" + STEP: Creating pod pod2 in namespace services-6351 03/07/23 03:35:26.852 + Mar 7 03:35:26.856: INFO: Waiting up to 5m0s for pod "pod2" in namespace "services-6351" to be "running and ready" + Mar 7 03:35:26.860: INFO: Pod "pod2": Phase="Pending", Reason="", readiness=false. Elapsed: 4.009851ms + Mar 7 03:35:26.860: INFO: The phase of Pod pod2 is Pending, waiting for it to be Running (with Ready = true) + Mar 7 03:35:28.864: INFO: Pod "pod2": Phase="Running", Reason="", readiness=true. 
Elapsed: 2.007943815s + Mar 7 03:35:28.864: INFO: The phase of Pod pod2 is Running (Ready = true) + Mar 7 03:35:28.864: INFO: Pod "pod2" satisfied condition "running and ready" + STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-6351 to expose endpoints map[pod1:[80] pod2:[80]] 03/07/23 03:35:28.867 + Mar 7 03:35:28.877: INFO: successfully validated that service endpoint-test2 in namespace services-6351 exposes endpoints map[pod1:[80] pod2:[80]] + STEP: Checking if the Service forwards traffic to pod1 and pod2 03/07/23 03:35:28.877 + Mar 7 03:35:29.878: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1093879902 --namespace=services-6351 exec execpodsbb28 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 endpoint-test2 80' + Mar 7 03:35:30.060: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 endpoint-test2 80\nConnection to endpoint-test2 80 port [tcp/http] succeeded!\n" + Mar 7 03:35:30.060: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request" + Mar 7 03:35:30.060: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1093879902 --namespace=services-6351 exec execpodsbb28 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.96.14.155 80' + Mar 7 03:35:30.234: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 10.96.14.155 80\nConnection to 10.96.14.155 80 port [tcp/http] succeeded!\n" + Mar 7 03:35:30.234: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request" + STEP: Deleting pod pod1 in namespace services-6351 03/07/23 03:35:30.234 + STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-6351 to expose endpoints map[pod2:[80]] 03/07/23 03:35:30.254 + Mar 7 03:35:30.277: INFO: successfully validated that service endpoint-test2 in namespace services-6351 exposes endpoints map[pod2:[80]] + STEP: Checking if the Service forwards traffic to pod2 03/07/23 03:35:30.277 + Mar 7 03:35:31.279: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1093879902 --namespace=services-6351 exec execpodsbb28 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 endpoint-test2 80' + Mar 7 03:35:31.470: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 endpoint-test2 80\nConnection to endpoint-test2 80 port [tcp/http] succeeded!\n" + Mar 7 03:35:31.470: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request" + Mar 7 03:35:31.470: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1093879902 --namespace=services-6351 exec execpodsbb28 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.96.14.155 80' + Mar 7 03:35:31.649: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 10.96.14.155 80\nConnection to 10.96.14.155 80 port [tcp/http] succeeded!\n" + Mar 7 03:35:31.649: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request" + STEP: Deleting pod pod2 in namespace services-6351 03/07/23 03:35:31.649 + STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-6351 to expose endpoints map[] 03/07/23 03:35:31.671 + Mar 7 03:35:31.683: INFO: successfully validated that service endpoint-test2 in namespace services-6351 exposes endpoints map[] + [AfterEach] [sig-network] Services + test/e2e/framework/framework.go:187 + Mar 7 03:35:31.703: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + STEP: Destroying namespace 
"services-6351" for this suite. 03/07/23 03:35:31.709 + [AfterEach] [sig-network] Services + test/e2e/network/service.go:762 + << End Captured GinkgoWriter Output +------------------------------ +SS +------------------------------ +[sig-cli] Kubectl client Kubectl version + should check is all data is printed [Conformance] + test/e2e/kubectl/kubectl.go:1683 +[BeforeEach] [sig-cli] Kubectl client + test/e2e/framework/framework.go:186 +STEP: Creating a kubernetes client 03/07/23 03:35:31.713 +Mar 7 03:35:31.713: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 +STEP: Building a namespace api object, basename kubectl 03/07/23 03:35:31.714 +STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 03:35:31.738 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 03:35:31.74 +[BeforeEach] [sig-cli] Kubectl client + test/e2e/kubectl/kubectl.go:272 +[It] should check is all data is printed [Conformance] + test/e2e/kubectl/kubectl.go:1683 +Mar 7 03:35:31.742: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1093879902 --namespace=kubectl-3035 version' +Mar 7 03:35:31.789: INFO: stderr: "WARNING: This version information is deprecated and will be replaced with the output from kubectl version --short. Use --output=yaml|json to get the full version.\n" +Mar 7 03:35:31.789: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"25\", GitVersion:\"v1.25.5\", GitCommit:\"804d6167111f6858541cef440ccc53887fbbc96a\", GitTreeState:\"clean\", BuildDate:\"2022-12-08T10:15:02Z\", GoVersion:\"go1.19.4\", Compiler:\"gc\", Platform:\"linux/amd64\"}\nKustomize Version: v4.5.7\nServer Version: version.Info{Major:\"1\", Minor:\"25\", GitVersion:\"v1.25.5\", GitCommit:\"804d6167111f6858541cef440ccc53887fbbc96a\", GitTreeState:\"clean\", BuildDate:\"2022-12-08T10:08:09Z\", GoVersion:\"go1.19.4\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n" +[AfterEach] [sig-cli] Kubectl client + test/e2e/framework/framework.go:187 +Mar 7 03:35:31.789: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "kubectl-3035" for this suite. 03/07/23 03:35:31.793 +{"msg":"PASSED [sig-cli] Kubectl client Kubectl version should check is all data is printed [Conformance]","completed":188,"skipped":3224,"failed":0} +------------------------------ +• [0.085 seconds] +[sig-cli] Kubectl client +test/e2e/kubectl/framework.go:23 + Kubectl version + test/e2e/kubectl/kubectl.go:1677 + should check is all data is printed [Conformance] + test/e2e/kubectl/kubectl.go:1683 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-cli] Kubectl client + test/e2e/framework/framework.go:186 + STEP: Creating a kubernetes client 03/07/23 03:35:31.713 + Mar 7 03:35:31.713: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 + STEP: Building a namespace api object, basename kubectl 03/07/23 03:35:31.714 + STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 03:35:31.738 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 03:35:31.74 + [BeforeEach] [sig-cli] Kubectl client + test/e2e/kubectl/kubectl.go:272 + [It] should check is all data is printed [Conformance] + test/e2e/kubectl/kubectl.go:1683 + Mar 7 03:35:31.742: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1093879902 --namespace=kubectl-3035 version' + Mar 7 03:35:31.789: INFO: stderr: "WARNING: This version information is deprecated and will be replaced with the output from kubectl version --short. 
Use --output=yaml|json to get the full version.\n" + Mar 7 03:35:31.789: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"25\", GitVersion:\"v1.25.5\", GitCommit:\"804d6167111f6858541cef440ccc53887fbbc96a\", GitTreeState:\"clean\", BuildDate:\"2022-12-08T10:15:02Z\", GoVersion:\"go1.19.4\", Compiler:\"gc\", Platform:\"linux/amd64\"}\nKustomize Version: v4.5.7\nServer Version: version.Info{Major:\"1\", Minor:\"25\", GitVersion:\"v1.25.5\", GitCommit:\"804d6167111f6858541cef440ccc53887fbbc96a\", GitTreeState:\"clean\", BuildDate:\"2022-12-08T10:08:09Z\", GoVersion:\"go1.19.4\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n" + [AfterEach] [sig-cli] Kubectl client + test/e2e/framework/framework.go:187 + Mar 7 03:35:31.789: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + STEP: Destroying namespace "kubectl-3035" for this suite. 03/07/23 03:35:31.793 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSS +------------------------------ +[sig-storage] Downward API volume + should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] + test/e2e/common/storage/downwardapi_volume.go:260 +[BeforeEach] [sig-storage] Downward API volume + test/e2e/framework/framework.go:186 +STEP: Creating a kubernetes client 03/07/23 03:35:31.798 +Mar 7 03:35:31.798: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 +STEP: Building a namespace api object, basename downward-api 03/07/23 03:35:31.799 +STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 03:35:31.811 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 03:35:31.813 +[BeforeEach] [sig-storage] Downward API volume + test/e2e/common/storage/downwardapi_volume.go:43 +[It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] + test/e2e/common/storage/downwardapi_volume.go:260 +STEP: Creating a pod to test downward API volume plugin 03/07/23 03:35:31.815 +Mar 7 03:35:31.824: INFO: Waiting up to 5m0s for pod "downwardapi-volume-ddaea553-bccf-460d-844d-080880050310" in namespace "downward-api-2615" to be "Succeeded or Failed" +Mar 7 03:35:31.828: INFO: Pod "downwardapi-volume-ddaea553-bccf-460d-844d-080880050310": Phase="Pending", Reason="", readiness=false. Elapsed: 3.937102ms +Mar 7 03:35:33.831: INFO: Pod "downwardapi-volume-ddaea553-bccf-460d-844d-080880050310": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007718791s +Mar 7 03:35:35.831: INFO: Pod "downwardapi-volume-ddaea553-bccf-460d-844d-080880050310": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.006963804s +STEP: Saw pod success 03/07/23 03:35:35.831 +Mar 7 03:35:35.831: INFO: Pod "downwardapi-volume-ddaea553-bccf-460d-844d-080880050310" satisfied condition "Succeeded or Failed" +Mar 7 03:35:35.832: INFO: Trying to get logs from node node-2 pod downwardapi-volume-ddaea553-bccf-460d-844d-080880050310 container client-container: +STEP: delete the pod 03/07/23 03:35:35.837 +Mar 7 03:35:35.852: INFO: Waiting for pod downwardapi-volume-ddaea553-bccf-460d-844d-080880050310 to disappear +Mar 7 03:35:35.854: INFO: Pod downwardapi-volume-ddaea553-bccf-460d-844d-080880050310 no longer exists +[AfterEach] [sig-storage] Downward API volume + test/e2e/framework/framework.go:187 +Mar 7 03:35:35.854: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "downward-api-2615" for this suite. 
03/07/23 03:35:35.858 +{"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","completed":189,"skipped":3234,"failed":0} +------------------------------ +• [4.063 seconds] +[sig-storage] Downward API volume +test/e2e/common/storage/framework.go:23 + should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] + test/e2e/common/storage/downwardapi_volume.go:260 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-storage] Downward API volume + test/e2e/framework/framework.go:186 + STEP: Creating a kubernetes client 03/07/23 03:35:31.798 + Mar 7 03:35:31.798: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 + STEP: Building a namespace api object, basename downward-api 03/07/23 03:35:31.799 + STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 03:35:31.811 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 03:35:31.813 + [BeforeEach] [sig-storage] Downward API volume + test/e2e/common/storage/downwardapi_volume.go:43 + [It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] + test/e2e/common/storage/downwardapi_volume.go:260 + STEP: Creating a pod to test downward API volume plugin 03/07/23 03:35:31.815 + Mar 7 03:35:31.824: INFO: Waiting up to 5m0s for pod "downwardapi-volume-ddaea553-bccf-460d-844d-080880050310" in namespace "downward-api-2615" to be "Succeeded or Failed" + Mar 7 03:35:31.828: INFO: Pod "downwardapi-volume-ddaea553-bccf-460d-844d-080880050310": Phase="Pending", Reason="", readiness=false. Elapsed: 3.937102ms + Mar 7 03:35:33.831: INFO: Pod "downwardapi-volume-ddaea553-bccf-460d-844d-080880050310": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007718791s + Mar 7 03:35:35.831: INFO: Pod "downwardapi-volume-ddaea553-bccf-460d-844d-080880050310": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.006963804s + STEP: Saw pod success 03/07/23 03:35:35.831 + Mar 7 03:35:35.831: INFO: Pod "downwardapi-volume-ddaea553-bccf-460d-844d-080880050310" satisfied condition "Succeeded or Failed" + Mar 7 03:35:35.832: INFO: Trying to get logs from node node-2 pod downwardapi-volume-ddaea553-bccf-460d-844d-080880050310 container client-container: + STEP: delete the pod 03/07/23 03:35:35.837 + Mar 7 03:35:35.852: INFO: Waiting for pod downwardapi-volume-ddaea553-bccf-460d-844d-080880050310 to disappear + Mar 7 03:35:35.854: INFO: Pod downwardapi-volume-ddaea553-bccf-460d-844d-080880050310 no longer exists + [AfterEach] [sig-storage] Downward API volume + test/e2e/framework/framework.go:187 + Mar 7 03:35:35.854: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + STEP: Destroying namespace "downward-api-2615" for this suite. 
03/07/23 03:35:35.858 + << End Captured GinkgoWriter Output +------------------------------ +SSS +------------------------------ +[sig-network] Networking Granular Checks: Pods + should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/common/network/networking.go:105 +[BeforeEach] [sig-network] Networking + test/e2e/framework/framework.go:186 +STEP: Creating a kubernetes client 03/07/23 03:35:35.862 +Mar 7 03:35:35.862: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 +STEP: Building a namespace api object, basename pod-network-test 03/07/23 03:35:35.863 +STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 03:35:35.877 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 03:35:35.88 +[It] should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/common/network/networking.go:105 +STEP: Performing setup for networking test in namespace pod-network-test-7099 03/07/23 03:35:35.881 +STEP: creating a selector 03/07/23 03:35:35.881 +STEP: Creating the service pods in kubernetes 03/07/23 03:35:35.882 +Mar 7 03:35:35.882: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable +Mar 7 03:35:35.900: INFO: Waiting up to 5m0s for pod "netserver-0" in namespace "pod-network-test-7099" to be "running and ready" +Mar 7 03:35:35.907: INFO: Pod "netserver-0": Phase="Pending", Reason="", readiness=false. Elapsed: 6.755846ms +Mar 7 03:35:35.907: INFO: The phase of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) +Mar 7 03:35:37.909: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 2.00940024s +Mar 7 03:35:37.909: INFO: The phase of Pod netserver-0 is Running (Ready = false) +Mar 7 03:35:39.911: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 4.010813759s +Mar 7 03:35:39.911: INFO: The phase of Pod netserver-0 is Running (Ready = false) +Mar 7 03:35:41.911: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 6.010884541s +Mar 7 03:35:41.911: INFO: The phase of Pod netserver-0 is Running (Ready = false) +Mar 7 03:35:43.910: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 8.009884855s +Mar 7 03:35:43.910: INFO: The phase of Pod netserver-0 is Running (Ready = false) +Mar 7 03:35:45.911: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 10.010901643s +Mar 7 03:35:45.911: INFO: The phase of Pod netserver-0 is Running (Ready = false) +Mar 7 03:35:47.916: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=true. Elapsed: 12.016174629s +Mar 7 03:35:47.916: INFO: The phase of Pod netserver-0 is Running (Ready = true) +Mar 7 03:35:47.916: INFO: Pod "netserver-0" satisfied condition "running and ready" +Mar 7 03:35:47.918: INFO: Waiting up to 5m0s for pod "netserver-1" in namespace "pod-network-test-7099" to be "running and ready" +Mar 7 03:35:47.920: INFO: Pod "netserver-1": Phase="Running", Reason="", readiness=true. Elapsed: 1.899616ms +Mar 7 03:35:47.920: INFO: The phase of Pod netserver-1 is Running (Ready = true) +Mar 7 03:35:47.920: INFO: Pod "netserver-1" satisfied condition "running and ready" +Mar 7 03:35:47.922: INFO: Waiting up to 5m0s for pod "netserver-2" in namespace "pod-network-test-7099" to be "running and ready" +Mar 7 03:35:47.926: INFO: Pod "netserver-2": Phase="Running", Reason="", readiness=true. 
Elapsed: 3.765776ms +Mar 7 03:35:47.926: INFO: The phase of Pod netserver-2 is Running (Ready = true) +Mar 7 03:35:47.926: INFO: Pod "netserver-2" satisfied condition "running and ready" +STEP: Creating test pods 03/07/23 03:35:47.928 +Mar 7 03:35:47.935: INFO: Waiting up to 5m0s for pod "test-container-pod" in namespace "pod-network-test-7099" to be "running" +Mar 7 03:35:47.938: INFO: Pod "test-container-pod": Phase="Pending", Reason="", readiness=false. Elapsed: 2.485361ms +Mar 7 03:35:49.940: INFO: Pod "test-container-pod": Phase="Running", Reason="", readiness=true. Elapsed: 2.00514549s +Mar 7 03:35:49.940: INFO: Pod "test-container-pod" satisfied condition "running" +Mar 7 03:35:49.942: INFO: Waiting up to 5m0s for pod "host-test-container-pod" in namespace "pod-network-test-7099" to be "running" +Mar 7 03:35:49.944: INFO: Pod "host-test-container-pod": Phase="Running", Reason="", readiness=true. Elapsed: 1.784073ms +Mar 7 03:35:49.944: INFO: Pod "host-test-container-pod" satisfied condition "running" +Mar 7 03:35:49.946: INFO: Setting MaxTries for pod polling to 39 for networking test based on endpoint count 3 +Mar 7 03:35:49.946: INFO: Going to poll 10.233.132.123 on port 8083 at least 0 times, with a maximum of 39 tries before failing +Mar 7 03:35:49.948: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.233.132.123:8083/hostName | grep -v '^\s*$'] Namespace:pod-network-test-7099 PodName:host-test-container-pod ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} +Mar 7 03:35:49.948: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 +Mar 7 03:35:49.948: INFO: ExecWithOptions: Clientset creation +Mar 7 03:35:49.948: INFO: ExecWithOptions: execute(POST https://10.96.0.1:443/api/v1/namespaces/pod-network-test-7099/pods/host-test-container-pod/exec?command=%2Fbin%2Fsh&command=-c&command=curl+-g+-q+-s+--max-time+15+--connect-timeout+1+http%3A%2F%2F10.233.132.123%3A8083%2FhostName+%7C+grep+-v+%27%5E%5Cs%2A%24%27&container=agnhost-container&container=agnhost-container&stderr=true&stdout=true) +Mar 7 03:35:50.011: INFO: Found all 1 expected endpoints: [netserver-0] +Mar 7 03:35:50.011: INFO: Going to poll 10.233.84.160 on port 8083 at least 0 times, with a maximum of 39 tries before failing +Mar 7 03:35:50.014: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.233.84.160:8083/hostName | grep -v '^\s*$'] Namespace:pod-network-test-7099 PodName:host-test-container-pod ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} +Mar 7 03:35:50.014: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 +Mar 7 03:35:50.015: INFO: ExecWithOptions: Clientset creation +Mar 7 03:35:50.015: INFO: ExecWithOptions: execute(POST https://10.96.0.1:443/api/v1/namespaces/pod-network-test-7099/pods/host-test-container-pod/exec?command=%2Fbin%2Fsh&command=-c&command=curl+-g+-q+-s+--max-time+15+--connect-timeout+1+http%3A%2F%2F10.233.84.160%3A8083%2FhostName+%7C+grep+-v+%27%5E%5Cs%2A%24%27&container=agnhost-container&container=agnhost-container&stderr=true&stdout=true) +Mar 7 03:35:50.076: INFO: Found all 1 expected endpoints: [netserver-1] +Mar 7 03:35:50.077: INFO: Going to poll 10.233.247.37 on port 8083 at least 0 times, with a maximum of 39 tries before failing +Mar 7 03:35:50.079: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 
http://10.233.247.37:8083/hostName | grep -v '^\s*$'] Namespace:pod-network-test-7099 PodName:host-test-container-pod ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} +Mar 7 03:35:50.079: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 +Mar 7 03:35:50.080: INFO: ExecWithOptions: Clientset creation +Mar 7 03:35:50.080: INFO: ExecWithOptions: execute(POST https://10.96.0.1:443/api/v1/namespaces/pod-network-test-7099/pods/host-test-container-pod/exec?command=%2Fbin%2Fsh&command=-c&command=curl+-g+-q+-s+--max-time+15+--connect-timeout+1+http%3A%2F%2F10.233.247.37%3A8083%2FhostName+%7C+grep+-v+%27%5E%5Cs%2A%24%27&container=agnhost-container&container=agnhost-container&stderr=true&stdout=true) +Mar 7 03:35:50.139: INFO: Found all 1 expected endpoints: [netserver-2] +[AfterEach] [sig-network] Networking + test/e2e/framework/framework.go:187 +Mar 7 03:35:50.139: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "pod-network-test-7099" for this suite. 03/07/23 03:35:50.143 +{"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","completed":190,"skipped":3237,"failed":0} +------------------------------ +• [SLOW TEST] [14.285 seconds] +[sig-network] Networking +test/e2e/common/network/framework.go:23 + Granular Checks: Pods + test/e2e/common/network/networking.go:32 + should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/common/network/networking.go:105 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-network] Networking + test/e2e/framework/framework.go:186 + STEP: Creating a kubernetes client 03/07/23 03:35:35.862 + Mar 7 03:35:35.862: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 + STEP: Building a namespace api object, basename pod-network-test 03/07/23 03:35:35.863 + STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 03:35:35.877 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 03:35:35.88 + [It] should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/common/network/networking.go:105 + STEP: Performing setup for networking test in namespace pod-network-test-7099 03/07/23 03:35:35.881 + STEP: creating a selector 03/07/23 03:35:35.881 + STEP: Creating the service pods in kubernetes 03/07/23 03:35:35.882 + Mar 7 03:35:35.882: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable + Mar 7 03:35:35.900: INFO: Waiting up to 5m0s for pod "netserver-0" in namespace "pod-network-test-7099" to be "running and ready" + Mar 7 03:35:35.907: INFO: Pod "netserver-0": Phase="Pending", Reason="", readiness=false. Elapsed: 6.755846ms + Mar 7 03:35:35.907: INFO: The phase of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) + Mar 7 03:35:37.909: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 2.00940024s + Mar 7 03:35:37.909: INFO: The phase of Pod netserver-0 is Running (Ready = false) + Mar 7 03:35:39.911: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 4.010813759s + Mar 7 03:35:39.911: INFO: The phase of Pod netserver-0 is Running (Ready = false) + Mar 7 03:35:41.911: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. 
Elapsed: 6.010884541s + Mar 7 03:35:41.911: INFO: The phase of Pod netserver-0 is Running (Ready = false) + Mar 7 03:35:43.910: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 8.009884855s + Mar 7 03:35:43.910: INFO: The phase of Pod netserver-0 is Running (Ready = false) + Mar 7 03:35:45.911: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 10.010901643s + Mar 7 03:35:45.911: INFO: The phase of Pod netserver-0 is Running (Ready = false) + Mar 7 03:35:47.916: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=true. Elapsed: 12.016174629s + Mar 7 03:35:47.916: INFO: The phase of Pod netserver-0 is Running (Ready = true) + Mar 7 03:35:47.916: INFO: Pod "netserver-0" satisfied condition "running and ready" + Mar 7 03:35:47.918: INFO: Waiting up to 5m0s for pod "netserver-1" in namespace "pod-network-test-7099" to be "running and ready" + Mar 7 03:35:47.920: INFO: Pod "netserver-1": Phase="Running", Reason="", readiness=true. Elapsed: 1.899616ms + Mar 7 03:35:47.920: INFO: The phase of Pod netserver-1 is Running (Ready = true) + Mar 7 03:35:47.920: INFO: Pod "netserver-1" satisfied condition "running and ready" + Mar 7 03:35:47.922: INFO: Waiting up to 5m0s for pod "netserver-2" in namespace "pod-network-test-7099" to be "running and ready" + Mar 7 03:35:47.926: INFO: Pod "netserver-2": Phase="Running", Reason="", readiness=true. Elapsed: 3.765776ms + Mar 7 03:35:47.926: INFO: The phase of Pod netserver-2 is Running (Ready = true) + Mar 7 03:35:47.926: INFO: Pod "netserver-2" satisfied condition "running and ready" + STEP: Creating test pods 03/07/23 03:35:47.928 + Mar 7 03:35:47.935: INFO: Waiting up to 5m0s for pod "test-container-pod" in namespace "pod-network-test-7099" to be "running" + Mar 7 03:35:47.938: INFO: Pod "test-container-pod": Phase="Pending", Reason="", readiness=false. Elapsed: 2.485361ms + Mar 7 03:35:49.940: INFO: Pod "test-container-pod": Phase="Running", Reason="", readiness=true. Elapsed: 2.00514549s + Mar 7 03:35:49.940: INFO: Pod "test-container-pod" satisfied condition "running" + Mar 7 03:35:49.942: INFO: Waiting up to 5m0s for pod "host-test-container-pod" in namespace "pod-network-test-7099" to be "running" + Mar 7 03:35:49.944: INFO: Pod "host-test-container-pod": Phase="Running", Reason="", readiness=true. 
Elapsed: 1.784073ms + Mar 7 03:35:49.944: INFO: Pod "host-test-container-pod" satisfied condition "running" + Mar 7 03:35:49.946: INFO: Setting MaxTries for pod polling to 39 for networking test based on endpoint count 3 + Mar 7 03:35:49.946: INFO: Going to poll 10.233.132.123 on port 8083 at least 0 times, with a maximum of 39 tries before failing + Mar 7 03:35:49.948: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.233.132.123:8083/hostName | grep -v '^\s*$'] Namespace:pod-network-test-7099 PodName:host-test-container-pod ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} + Mar 7 03:35:49.948: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 + Mar 7 03:35:49.948: INFO: ExecWithOptions: Clientset creation + Mar 7 03:35:49.948: INFO: ExecWithOptions: execute(POST https://10.96.0.1:443/api/v1/namespaces/pod-network-test-7099/pods/host-test-container-pod/exec?command=%2Fbin%2Fsh&command=-c&command=curl+-g+-q+-s+--max-time+15+--connect-timeout+1+http%3A%2F%2F10.233.132.123%3A8083%2FhostName+%7C+grep+-v+%27%5E%5Cs%2A%24%27&container=agnhost-container&container=agnhost-container&stderr=true&stdout=true) + Mar 7 03:35:50.011: INFO: Found all 1 expected endpoints: [netserver-0] + Mar 7 03:35:50.011: INFO: Going to poll 10.233.84.160 on port 8083 at least 0 times, with a maximum of 39 tries before failing + Mar 7 03:35:50.014: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.233.84.160:8083/hostName | grep -v '^\s*$'] Namespace:pod-network-test-7099 PodName:host-test-container-pod ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} + Mar 7 03:35:50.014: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 + Mar 7 03:35:50.015: INFO: ExecWithOptions: Clientset creation + Mar 7 03:35:50.015: INFO: ExecWithOptions: execute(POST https://10.96.0.1:443/api/v1/namespaces/pod-network-test-7099/pods/host-test-container-pod/exec?command=%2Fbin%2Fsh&command=-c&command=curl+-g+-q+-s+--max-time+15+--connect-timeout+1+http%3A%2F%2F10.233.84.160%3A8083%2FhostName+%7C+grep+-v+%27%5E%5Cs%2A%24%27&container=agnhost-container&container=agnhost-container&stderr=true&stdout=true) + Mar 7 03:35:50.076: INFO: Found all 1 expected endpoints: [netserver-1] + Mar 7 03:35:50.077: INFO: Going to poll 10.233.247.37 on port 8083 at least 0 times, with a maximum of 39 tries before failing + Mar 7 03:35:50.079: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.233.247.37:8083/hostName | grep -v '^\s*$'] Namespace:pod-network-test-7099 PodName:host-test-container-pod ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} + Mar 7 03:35:50.079: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 + Mar 7 03:35:50.080: INFO: ExecWithOptions: Clientset creation + Mar 7 03:35:50.080: INFO: ExecWithOptions: execute(POST https://10.96.0.1:443/api/v1/namespaces/pod-network-test-7099/pods/host-test-container-pod/exec?command=%2Fbin%2Fsh&command=-c&command=curl+-g+-q+-s+--max-time+15+--connect-timeout+1+http%3A%2F%2F10.233.247.37%3A8083%2FhostName+%7C+grep+-v+%27%5E%5Cs%2A%24%27&container=agnhost-container&container=agnhost-container&stderr=true&stdout=true) + Mar 7 03:35:50.139: INFO: Found all 1 expected endpoints: [netserver-2] + [AfterEach] [sig-network] Networking + test/e2e/framework/framework.go:187 + 
Mar 7 03:35:50.139: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + STEP: Destroying namespace "pod-network-test-7099" for this suite. 03/07/23 03:35:50.143 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-node] Security Context When creating a pod with readOnlyRootFilesystem + should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance] + test/e2e/common/node/security_context.go:485 +[BeforeEach] [sig-node] Security Context + test/e2e/framework/framework.go:186 +STEP: Creating a kubernetes client 03/07/23 03:35:50.148 +Mar 7 03:35:50.148: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 +STEP: Building a namespace api object, basename security-context-test 03/07/23 03:35:50.151 +STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 03:35:50.165 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 03:35:50.167 +[BeforeEach] [sig-node] Security Context + test/e2e/common/node/security_context.go:49 +[It] should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance] + test/e2e/common/node/security_context.go:485 +Mar 7 03:35:50.174: INFO: Waiting up to 5m0s for pod "busybox-readonly-false-ce0faef7-8a7b-4b22-a40c-8e4550972a6d" in namespace "security-context-test-5133" to be "Succeeded or Failed" +Mar 7 03:35:50.177: INFO: Pod "busybox-readonly-false-ce0faef7-8a7b-4b22-a40c-8e4550972a6d": Phase="Pending", Reason="", readiness=false. Elapsed: 3.02017ms +Mar 7 03:35:52.182: INFO: Pod "busybox-readonly-false-ce0faef7-8a7b-4b22-a40c-8e4550972a6d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007539127s +Mar 7 03:35:54.181: INFO: Pod "busybox-readonly-false-ce0faef7-8a7b-4b22-a40c-8e4550972a6d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.006745239s +Mar 7 03:35:54.181: INFO: Pod "busybox-readonly-false-ce0faef7-8a7b-4b22-a40c-8e4550972a6d" satisfied condition "Succeeded or Failed" +[AfterEach] [sig-node] Security Context + test/e2e/framework/framework.go:187 +Mar 7 03:35:54.181: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "security-context-test-5133" for this suite. 
03/07/23 03:35:54.184 +{"msg":"PASSED [sig-node] Security Context When creating a pod with readOnlyRootFilesystem should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]","completed":191,"skipped":3259,"failed":0} +------------------------------ +• [4.059 seconds] +[sig-node] Security Context +test/e2e/common/node/framework.go:23 + When creating a pod with readOnlyRootFilesystem + test/e2e/common/node/security_context.go:429 + should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance] + test/e2e/common/node/security_context.go:485 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-node] Security Context + test/e2e/framework/framework.go:186 + STEP: Creating a kubernetes client 03/07/23 03:35:50.148 + Mar 7 03:35:50.148: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 + STEP: Building a namespace api object, basename security-context-test 03/07/23 03:35:50.151 + STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 03:35:50.165 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 03:35:50.167 + [BeforeEach] [sig-node] Security Context + test/e2e/common/node/security_context.go:49 + [It] should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance] + test/e2e/common/node/security_context.go:485 + Mar 7 03:35:50.174: INFO: Waiting up to 5m0s for pod "busybox-readonly-false-ce0faef7-8a7b-4b22-a40c-8e4550972a6d" in namespace "security-context-test-5133" to be "Succeeded or Failed" + Mar 7 03:35:50.177: INFO: Pod "busybox-readonly-false-ce0faef7-8a7b-4b22-a40c-8e4550972a6d": Phase="Pending", Reason="", readiness=false. Elapsed: 3.02017ms + Mar 7 03:35:52.182: INFO: Pod "busybox-readonly-false-ce0faef7-8a7b-4b22-a40c-8e4550972a6d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007539127s + Mar 7 03:35:54.181: INFO: Pod "busybox-readonly-false-ce0faef7-8a7b-4b22-a40c-8e4550972a6d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.006745239s + Mar 7 03:35:54.181: INFO: Pod "busybox-readonly-false-ce0faef7-8a7b-4b22-a40c-8e4550972a6d" satisfied condition "Succeeded or Failed" + [AfterEach] [sig-node] Security Context + test/e2e/framework/framework.go:187 + Mar 7 03:35:54.181: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + STEP: Destroying namespace "security-context-test-5133" for this suite. 
03/07/23 03:35:54.184 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSSSSSS +------------------------------ +[sig-instrumentation] Events + should manage the lifecycle of an event [Conformance] + test/e2e/instrumentation/core_events.go:57 +[BeforeEach] [sig-instrumentation] Events + test/e2e/framework/framework.go:186 +STEP: Creating a kubernetes client 03/07/23 03:35:54.207 +Mar 7 03:35:54.208: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 +STEP: Building a namespace api object, basename events 03/07/23 03:35:54.208 +STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 03:35:54.224 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 03:35:54.226 +[It] should manage the lifecycle of an event [Conformance] + test/e2e/instrumentation/core_events.go:57 +STEP: creating a test event 03/07/23 03:35:54.23 +STEP: listing all events in all namespaces 03/07/23 03:35:54.234 +STEP: patching the test event 03/07/23 03:35:54.238 +STEP: fetching the test event 03/07/23 03:35:54.243 +STEP: updating the test event 03/07/23 03:35:54.245 +STEP: getting the test event 03/07/23 03:35:54.253 +STEP: deleting the test event 03/07/23 03:35:54.256 +STEP: listing all events in all namespaces 03/07/23 03:35:54.26 +[AfterEach] [sig-instrumentation] Events + test/e2e/framework/framework.go:187 +Mar 7 03:35:54.267: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "events-4205" for this suite. 03/07/23 03:35:54.27 +{"msg":"PASSED [sig-instrumentation] Events should manage the lifecycle of an event [Conformance]","completed":192,"skipped":3274,"failed":0} +------------------------------ +• [0.066 seconds] +[sig-instrumentation] Events +test/e2e/instrumentation/common/framework.go:23 + should manage the lifecycle of an event [Conformance] + test/e2e/instrumentation/core_events.go:57 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-instrumentation] Events + test/e2e/framework/framework.go:186 + STEP: Creating a kubernetes client 03/07/23 03:35:54.207 + Mar 7 03:35:54.208: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 + STEP: Building a namespace api object, basename events 03/07/23 03:35:54.208 + STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 03:35:54.224 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 03:35:54.226 + [It] should manage the lifecycle of an event [Conformance] + test/e2e/instrumentation/core_events.go:57 + STEP: creating a test event 03/07/23 03:35:54.23 + STEP: listing all events in all namespaces 03/07/23 03:35:54.234 + STEP: patching the test event 03/07/23 03:35:54.238 + STEP: fetching the test event 03/07/23 03:35:54.243 + STEP: updating the test event 03/07/23 03:35:54.245 + STEP: getting the test event 03/07/23 03:35:54.253 + STEP: deleting the test event 03/07/23 03:35:54.256 + STEP: listing all events in all namespaces 03/07/23 03:35:54.26 + [AfterEach] [sig-instrumentation] Events + test/e2e/framework/framework.go:187 + Mar 7 03:35:54.267: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + STEP: Destroying namespace "events-4205" for this suite. 
03/07/23 03:35:54.27 + << End Captured GinkgoWriter Output +------------------------------ +SS +------------------------------ +[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] + should be able to convert from CR v1 to CR v2 [Conformance] + test/e2e/apimachinery/crd_conversion_webhook.go:149 +[BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] + test/e2e/framework/framework.go:186 +STEP: Creating a kubernetes client 03/07/23 03:35:54.275 +Mar 7 03:35:54.275: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 +STEP: Building a namespace api object, basename crd-webhook 03/07/23 03:35:54.276 +STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 03:35:54.289 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 03:35:54.292 +[BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] + test/e2e/apimachinery/crd_conversion_webhook.go:128 +STEP: Setting up server cert 03/07/23 03:35:54.294 +STEP: Create role binding to let cr conversion webhook read extension-apiserver-authentication 03/07/23 03:35:55.455 +STEP: Deploying the custom resource conversion webhook pod 03/07/23 03:35:55.461 +STEP: Wait for the deployment to be ready 03/07/23 03:35:55.482 +Mar 7 03:35:55.494: INFO: new replicaset for deployment "sample-crd-conversion-webhook-deployment" is yet to be created +Mar 7 03:35:57.500: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.March, 7, 3, 35, 55, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 7, 3, 35, 55, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.March, 7, 3, 35, 55, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 7, 3, 35, 55, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-59dfc5db8d\" is progressing."}}, CollisionCount:(*int32)(nil)} +STEP: Deploying the webhook service 03/07/23 03:35:59.504 +STEP: Verifying the service has paired with the endpoint 03/07/23 03:35:59.517 +Mar 7 03:36:00.518: INFO: Waiting for amount of service:e2e-test-crd-conversion-webhook endpoints to be 1 +[It] should be able to convert from CR v1 to CR v2 [Conformance] + test/e2e/apimachinery/crd_conversion_webhook.go:149 +Mar 7 03:36:00.521: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 +STEP: Creating a v1 custom resource 03/07/23 03:36:02.636 +STEP: v2 custom resource should be converted 03/07/23 03:36:02.64 +[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] + test/e2e/framework/framework.go:187 +Mar 7 03:36:03.160: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "crd-webhook-75" for this suite. 
03/07/23 03:36:03.165 +[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] + test/e2e/apimachinery/crd_conversion_webhook.go:139 +{"msg":"PASSED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]","completed":193,"skipped":3276,"failed":0} +------------------------------ +• [SLOW TEST] [8.968 seconds] +[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] +test/e2e/apimachinery/framework.go:23 + should be able to convert from CR v1 to CR v2 [Conformance] + test/e2e/apimachinery/crd_conversion_webhook.go:149 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] + test/e2e/framework/framework.go:186 + STEP: Creating a kubernetes client 03/07/23 03:35:54.275 + Mar 7 03:35:54.275: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 + STEP: Building a namespace api object, basename crd-webhook 03/07/23 03:35:54.276 + STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 03:35:54.289 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 03:35:54.292 + [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] + test/e2e/apimachinery/crd_conversion_webhook.go:128 + STEP: Setting up server cert 03/07/23 03:35:54.294 + STEP: Create role binding to let cr conversion webhook read extension-apiserver-authentication 03/07/23 03:35:55.455 + STEP: Deploying the custom resource conversion webhook pod 03/07/23 03:35:55.461 + STEP: Wait for the deployment to be ready 03/07/23 03:35:55.482 + Mar 7 03:35:55.494: INFO: new replicaset for deployment "sample-crd-conversion-webhook-deployment" is yet to be created + Mar 7 03:35:57.500: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.March, 7, 3, 35, 55, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 7, 3, 35, 55, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.March, 7, 3, 35, 55, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 7, 3, 35, 55, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-59dfc5db8d\" is progressing."}}, CollisionCount:(*int32)(nil)} + STEP: Deploying the webhook service 03/07/23 03:35:59.504 + STEP: Verifying the service has paired with the endpoint 03/07/23 03:35:59.517 + Mar 7 03:36:00.518: INFO: Waiting for amount of service:e2e-test-crd-conversion-webhook endpoints to be 1 + [It] should be able to convert from CR v1 to CR v2 [Conformance] + test/e2e/apimachinery/crd_conversion_webhook.go:149 + Mar 7 03:36:00.521: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 + STEP: Creating a v1 custom resource 03/07/23 03:36:02.636 + STEP: v2 custom resource should be converted 03/07/23 03:36:02.64 + [AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] + test/e2e/framework/framework.go:187 + Mar 7 03:36:03.160: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + STEP: Destroying namespace "crd-webhook-75" for this suite. 
03/07/23 03:36:03.165 + [AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] + test/e2e/apimachinery/crd_conversion_webhook.go:139 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSS +------------------------------ +[sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] CustomResourceDefinition Watch + watch on custom resource definition objects [Conformance] + test/e2e/apimachinery/crd_watch.go:51 +[BeforeEach] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] + test/e2e/framework/framework.go:186 +STEP: Creating a kubernetes client 03/07/23 03:36:03.243 +Mar 7 03:36:03.243: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 +STEP: Building a namespace api object, basename crd-watch 03/07/23 03:36:03.244 +STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 03:36:03.28 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 03:36:03.286 +[It] watch on custom resource definition objects [Conformance] + test/e2e/apimachinery/crd_watch.go:51 +Mar 7 03:36:03.288: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 +STEP: Creating first CR 03/07/23 03:36:05.839 +Mar 7 03:36:05.842: INFO: Got : ADDED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2023-03-07T03:36:05Z generation:1 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2023-03-07T03:36:05Z]] name:name1 resourceVersion:62093 uid:c5094afd-89f0-4c35-8718-e7f0a2cd718c] num:map[num1:9223372036854775807 num2:1000000]]} +STEP: Creating second CR 03/07/23 03:36:15.844 +Mar 7 03:36:15.848: INFO: Got : ADDED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2023-03-07T03:36:15Z generation:1 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2023-03-07T03:36:15Z]] name:name2 resourceVersion:62141 uid:f35f161a-d37f-48ae-8976-13d97d2a94ae] num:map[num1:9223372036854775807 num2:1000000]]} +STEP: Modifying first CR 03/07/23 03:36:25.848 +Mar 7 03:36:25.854: INFO: Got : MODIFIED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2023-03-07T03:36:05Z generation:2 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:dummy:map[] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2023-03-07T03:36:25Z]] name:name1 resourceVersion:62178 uid:c5094afd-89f0-4c35-8718-e7f0a2cd718c] num:map[num1:9223372036854775807 num2:1000000]]} +STEP: Modifying second CR 03/07/23 03:36:35.856 +Mar 7 03:36:35.887: INFO: Got : MODIFIED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2023-03-07T03:36:15Z generation:2 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:dummy:map[] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2023-03-07T03:36:35Z]] name:name2 
resourceVersion:62214 uid:f35f161a-d37f-48ae-8976-13d97d2a94ae] num:map[num1:9223372036854775807 num2:1000000]]} +STEP: Deleting first CR 03/07/23 03:36:45.887 +Mar 7 03:36:45.893: INFO: Got : DELETED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2023-03-07T03:36:05Z generation:2 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:dummy:map[] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2023-03-07T03:36:25Z]] name:name1 resourceVersion:62252 uid:c5094afd-89f0-4c35-8718-e7f0a2cd718c] num:map[num1:9223372036854775807 num2:1000000]]} +STEP: Deleting second CR 03/07/23 03:36:55.894 +Mar 7 03:36:55.928: INFO: Got : DELETED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2023-03-07T03:36:15Z generation:2 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:dummy:map[] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2023-03-07T03:36:35Z]] name:name2 resourceVersion:62290 uid:f35f161a-d37f-48ae-8976-13d97d2a94ae] num:map[num1:9223372036854775807 num2:1000000]]} +[AfterEach] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] + test/e2e/framework/framework.go:187 +Mar 7 03:37:06.457: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "crd-watch-897" for this suite. 03/07/23 03:37:06.461 +{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] CustomResourceDefinition Watch watch on custom resource definition objects [Conformance]","completed":194,"skipped":3286,"failed":0} +------------------------------ +• [SLOW TEST] [63.240 seconds] +[sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] +test/e2e/apimachinery/framework.go:23 + CustomResourceDefinition Watch + test/e2e/apimachinery/crd_watch.go:44 + watch on custom resource definition objects [Conformance] + test/e2e/apimachinery/crd_watch.go:51 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] + test/e2e/framework/framework.go:186 + STEP: Creating a kubernetes client 03/07/23 03:36:03.243 + Mar 7 03:36:03.243: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 + STEP: Building a namespace api object, basename crd-watch 03/07/23 03:36:03.244 + STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 03:36:03.28 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 03:36:03.286 + [It] watch on custom resource definition objects [Conformance] + test/e2e/apimachinery/crd_watch.go:51 + Mar 7 03:36:03.288: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 + STEP: Creating first CR 03/07/23 03:36:05.839 + Mar 7 03:36:05.842: INFO: Got : ADDED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2023-03-07T03:36:05Z generation:1 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2023-03-07T03:36:05Z]] name:name1 resourceVersion:62093 uid:c5094afd-89f0-4c35-8718-e7f0a2cd718c] 
num:map[num1:9223372036854775807 num2:1000000]]} + STEP: Creating second CR 03/07/23 03:36:15.844 + Mar 7 03:36:15.848: INFO: Got : ADDED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2023-03-07T03:36:15Z generation:1 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2023-03-07T03:36:15Z]] name:name2 resourceVersion:62141 uid:f35f161a-d37f-48ae-8976-13d97d2a94ae] num:map[num1:9223372036854775807 num2:1000000]]} + STEP: Modifying first CR 03/07/23 03:36:25.848 + Mar 7 03:36:25.854: INFO: Got : MODIFIED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2023-03-07T03:36:05Z generation:2 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:dummy:map[] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2023-03-07T03:36:25Z]] name:name1 resourceVersion:62178 uid:c5094afd-89f0-4c35-8718-e7f0a2cd718c] num:map[num1:9223372036854775807 num2:1000000]]} + STEP: Modifying second CR 03/07/23 03:36:35.856 + Mar 7 03:36:35.887: INFO: Got : MODIFIED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2023-03-07T03:36:15Z generation:2 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:dummy:map[] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2023-03-07T03:36:35Z]] name:name2 resourceVersion:62214 uid:f35f161a-d37f-48ae-8976-13d97d2a94ae] num:map[num1:9223372036854775807 num2:1000000]]} + STEP: Deleting first CR 03/07/23 03:36:45.887 + Mar 7 03:36:45.893: INFO: Got : DELETED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2023-03-07T03:36:05Z generation:2 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:dummy:map[] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2023-03-07T03:36:25Z]] name:name1 resourceVersion:62252 uid:c5094afd-89f0-4c35-8718-e7f0a2cd718c] num:map[num1:9223372036854775807 num2:1000000]]} + STEP: Deleting second CR 03/07/23 03:36:55.894 + Mar 7 03:36:55.928: INFO: Got : DELETED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2023-03-07T03:36:15Z generation:2 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:dummy:map[] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2023-03-07T03:36:35Z]] name:name2 resourceVersion:62290 uid:f35f161a-d37f-48ae-8976-13d97d2a94ae] num:map[num1:9223372036854775807 num2:1000000]]} + [AfterEach] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] + test/e2e/framework/framework.go:187 + Mar 7 03:37:06.457: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + STEP: Destroying namespace "crd-watch-897" for this suite. 
03/07/23 03:37:06.461 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSS +------------------------------ +[sig-cli] Kubectl client Kubectl patch + should add annotations for pods in rc [Conformance] + test/e2e/kubectl/kubectl.go:1650 +[BeforeEach] [sig-cli] Kubectl client + test/e2e/framework/framework.go:186 +STEP: Creating a kubernetes client 03/07/23 03:37:06.485 +Mar 7 03:37:06.485: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 +STEP: Building a namespace api object, basename kubectl 03/07/23 03:37:06.486 +STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 03:37:06.503 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 03:37:06.505 +[BeforeEach] [sig-cli] Kubectl client + test/e2e/kubectl/kubectl.go:272 +[It] should add annotations for pods in rc [Conformance] + test/e2e/kubectl/kubectl.go:1650 +STEP: creating Agnhost RC 03/07/23 03:37:06.506 +Mar 7 03:37:06.506: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1093879902 --namespace=kubectl-1964 create -f -' +Mar 7 03:37:07.723: INFO: stderr: "" +Mar 7 03:37:07.723: INFO: stdout: "replicationcontroller/agnhost-primary created\n" +STEP: Waiting for Agnhost primary to start. 03/07/23 03:37:07.723 +Mar 7 03:37:08.726: INFO: Selector matched 1 pods for map[app:agnhost] +Mar 7 03:37:08.726: INFO: Found 0 / 1 +Mar 7 03:37:09.727: INFO: Selector matched 1 pods for map[app:agnhost] +Mar 7 03:37:09.727: INFO: Found 1 / 1 +Mar 7 03:37:09.727: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 +STEP: patching all pods 03/07/23 03:37:09.727 +Mar 7 03:37:09.729: INFO: Selector matched 1 pods for map[app:agnhost] +Mar 7 03:37:09.729: INFO: ForEach: Found 1 pods from the filter. Now looping through them. +Mar 7 03:37:09.729: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1093879902 --namespace=kubectl-1964 patch pod agnhost-primary-s2sjc -p {"metadata":{"annotations":{"x":"y"}}}' +Mar 7 03:37:09.922: INFO: stderr: "" +Mar 7 03:37:09.922: INFO: stdout: "pod/agnhost-primary-s2sjc patched\n" +STEP: checking annotations 03/07/23 03:37:09.922 +Mar 7 03:37:09.926: INFO: Selector matched 1 pods for map[app:agnhost] +Mar 7 03:37:09.926: INFO: ForEach: Found 1 pods from the filter. Now looping through them. +[AfterEach] [sig-cli] Kubectl client + test/e2e/framework/framework.go:187 +Mar 7 03:37:09.926: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "kubectl-1964" for this suite. 
03/07/23 03:37:09.929 +{"msg":"PASSED [sig-cli] Kubectl client Kubectl patch should add annotations for pods in rc [Conformance]","completed":195,"skipped":3296,"failed":0} +------------------------------ +• [3.450 seconds] +[sig-cli] Kubectl client +test/e2e/kubectl/framework.go:23 + Kubectl patch + test/e2e/kubectl/kubectl.go:1644 + should add annotations for pods in rc [Conformance] + test/e2e/kubectl/kubectl.go:1650 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-cli] Kubectl client + test/e2e/framework/framework.go:186 + STEP: Creating a kubernetes client 03/07/23 03:37:06.485 + Mar 7 03:37:06.485: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 + STEP: Building a namespace api object, basename kubectl 03/07/23 03:37:06.486 + STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 03:37:06.503 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 03:37:06.505 + [BeforeEach] [sig-cli] Kubectl client + test/e2e/kubectl/kubectl.go:272 + [It] should add annotations for pods in rc [Conformance] + test/e2e/kubectl/kubectl.go:1650 + STEP: creating Agnhost RC 03/07/23 03:37:06.506 + Mar 7 03:37:06.506: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1093879902 --namespace=kubectl-1964 create -f -' + Mar 7 03:37:07.723: INFO: stderr: "" + Mar 7 03:37:07.723: INFO: stdout: "replicationcontroller/agnhost-primary created\n" + STEP: Waiting for Agnhost primary to start. 03/07/23 03:37:07.723 + Mar 7 03:37:08.726: INFO: Selector matched 1 pods for map[app:agnhost] + Mar 7 03:37:08.726: INFO: Found 0 / 1 + Mar 7 03:37:09.727: INFO: Selector matched 1 pods for map[app:agnhost] + Mar 7 03:37:09.727: INFO: Found 1 / 1 + Mar 7 03:37:09.727: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 + STEP: patching all pods 03/07/23 03:37:09.727 + Mar 7 03:37:09.729: INFO: Selector matched 1 pods for map[app:agnhost] + Mar 7 03:37:09.729: INFO: ForEach: Found 1 pods from the filter. Now looping through them. + Mar 7 03:37:09.729: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1093879902 --namespace=kubectl-1964 patch pod agnhost-primary-s2sjc -p {"metadata":{"annotations":{"x":"y"}}}' + Mar 7 03:37:09.922: INFO: stderr: "" + Mar 7 03:37:09.922: INFO: stdout: "pod/agnhost-primary-s2sjc patched\n" + STEP: checking annotations 03/07/23 03:37:09.922 + Mar 7 03:37:09.926: INFO: Selector matched 1 pods for map[app:agnhost] + Mar 7 03:37:09.926: INFO: ForEach: Found 1 pods from the filter. Now looping through them. + [AfterEach] [sig-cli] Kubectl client + test/e2e/framework/framework.go:187 + Mar 7 03:37:09.926: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + STEP: Destroying namespace "kubectl-1964" for this suite. 
03/07/23 03:37:09.929 + << End Captured GinkgoWriter Output +------------------------------ +S +------------------------------ +[sig-storage] Projected configMap + should be consumable from pods in volume as non-root [NodeConformance] [Conformance] + test/e2e/common/storage/projected_configmap.go:73 +[BeforeEach] [sig-storage] Projected configMap + test/e2e/framework/framework.go:186 +STEP: Creating a kubernetes client 03/07/23 03:37:09.934 +Mar 7 03:37:09.934: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 +STEP: Building a namespace api object, basename projected 03/07/23 03:37:09.935 +STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 03:37:09.95 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 03:37:09.952 +[It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance] + test/e2e/common/storage/projected_configmap.go:73 +STEP: Creating configMap with name projected-configmap-test-volume-84419eab-9601-40a4-b7cf-97777b321740 03/07/23 03:37:09.954 +STEP: Creating a pod to test consume configMaps 03/07/23 03:37:09.957 +Mar 7 03:37:09.964: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-ab8770aa-ee57-4240-a4d6-c14c38411b2c" in namespace "projected-6589" to be "Succeeded or Failed" +Mar 7 03:37:09.967: INFO: Pod "pod-projected-configmaps-ab8770aa-ee57-4240-a4d6-c14c38411b2c": Phase="Pending", Reason="", readiness=false. Elapsed: 3.137431ms +Mar 7 03:37:11.971: INFO: Pod "pod-projected-configmaps-ab8770aa-ee57-4240-a4d6-c14c38411b2c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006516002s +Mar 7 03:37:13.971: INFO: Pod "pod-projected-configmaps-ab8770aa-ee57-4240-a4d6-c14c38411b2c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.006866915s +STEP: Saw pod success 03/07/23 03:37:13.971 +Mar 7 03:37:13.971: INFO: Pod "pod-projected-configmaps-ab8770aa-ee57-4240-a4d6-c14c38411b2c" satisfied condition "Succeeded or Failed" +Mar 7 03:37:13.973: INFO: Trying to get logs from node node-2 pod pod-projected-configmaps-ab8770aa-ee57-4240-a4d6-c14c38411b2c container agnhost-container: +STEP: delete the pod 03/07/23 03:37:13.984 +Mar 7 03:37:14.012: INFO: Waiting for pod pod-projected-configmaps-ab8770aa-ee57-4240-a4d6-c14c38411b2c to disappear +Mar 7 03:37:14.014: INFO: Pod pod-projected-configmaps-ab8770aa-ee57-4240-a4d6-c14c38411b2c no longer exists +[AfterEach] [sig-storage] Projected configMap + test/e2e/framework/framework.go:187 +Mar 7 03:37:14.014: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "projected-6589" for this suite. 
03/07/23 03:37:14.017 +{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]","completed":196,"skipped":3297,"failed":0} +------------------------------ +• [4.087 seconds] +[sig-storage] Projected configMap +test/e2e/common/storage/framework.go:23 + should be consumable from pods in volume as non-root [NodeConformance] [Conformance] + test/e2e/common/storage/projected_configmap.go:73 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-storage] Projected configMap + test/e2e/framework/framework.go:186 + STEP: Creating a kubernetes client 03/07/23 03:37:09.934 + Mar 7 03:37:09.934: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 + STEP: Building a namespace api object, basename projected 03/07/23 03:37:09.935 + STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 03:37:09.95 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 03:37:09.952 + [It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance] + test/e2e/common/storage/projected_configmap.go:73 + STEP: Creating configMap with name projected-configmap-test-volume-84419eab-9601-40a4-b7cf-97777b321740 03/07/23 03:37:09.954 + STEP: Creating a pod to test consume configMaps 03/07/23 03:37:09.957 + Mar 7 03:37:09.964: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-ab8770aa-ee57-4240-a4d6-c14c38411b2c" in namespace "projected-6589" to be "Succeeded or Failed" + Mar 7 03:37:09.967: INFO: Pod "pod-projected-configmaps-ab8770aa-ee57-4240-a4d6-c14c38411b2c": Phase="Pending", Reason="", readiness=false. Elapsed: 3.137431ms + Mar 7 03:37:11.971: INFO: Pod "pod-projected-configmaps-ab8770aa-ee57-4240-a4d6-c14c38411b2c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006516002s + Mar 7 03:37:13.971: INFO: Pod "pod-projected-configmaps-ab8770aa-ee57-4240-a4d6-c14c38411b2c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.006866915s + STEP: Saw pod success 03/07/23 03:37:13.971 + Mar 7 03:37:13.971: INFO: Pod "pod-projected-configmaps-ab8770aa-ee57-4240-a4d6-c14c38411b2c" satisfied condition "Succeeded or Failed" + Mar 7 03:37:13.973: INFO: Trying to get logs from node node-2 pod pod-projected-configmaps-ab8770aa-ee57-4240-a4d6-c14c38411b2c container agnhost-container: + STEP: delete the pod 03/07/23 03:37:13.984 + Mar 7 03:37:14.012: INFO: Waiting for pod pod-projected-configmaps-ab8770aa-ee57-4240-a4d6-c14c38411b2c to disappear + Mar 7 03:37:14.014: INFO: Pod pod-projected-configmaps-ab8770aa-ee57-4240-a4d6-c14c38411b2c no longer exists + [AfterEach] [sig-storage] Projected configMap + test/e2e/framework/framework.go:187 + Mar 7 03:37:14.014: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + STEP: Destroying namespace "projected-6589" for this suite. 
03/07/23 03:37:14.017 + << End Captured GinkgoWriter Output +------------------------------ +SSSS +------------------------------ +[sig-apps] Deployment + deployment should delete old replica sets [Conformance] + test/e2e/apps/deployment.go:122 +[BeforeEach] [sig-apps] Deployment + test/e2e/framework/framework.go:186 +STEP: Creating a kubernetes client 03/07/23 03:37:14.022 +Mar 7 03:37:14.022: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 +STEP: Building a namespace api object, basename deployment 03/07/23 03:37:14.023 +STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 03:37:14.038 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 03:37:14.04 +[BeforeEach] [sig-apps] Deployment + test/e2e/apps/deployment.go:91 +[It] deployment should delete old replica sets [Conformance] + test/e2e/apps/deployment.go:122 +Mar 7 03:37:14.049: INFO: Pod name cleanup-pod: Found 0 pods out of 1 +Mar 7 03:37:19.060: INFO: Pod name cleanup-pod: Found 1 pods out of 1 +STEP: ensuring each pod is running 03/07/23 03:37:19.06 +Mar 7 03:37:19.060: INFO: Creating deployment test-cleanup-deployment +STEP: Waiting for deployment test-cleanup-deployment history to be cleaned up 03/07/23 03:37:19.091 +[AfterEach] [sig-apps] Deployment + test/e2e/apps/deployment.go:84 +Mar 7 03:37:19.101: INFO: Deployment "test-cleanup-deployment": +&Deployment{ObjectMeta:{test-cleanup-deployment deployment-1015 d8b625fc-3d49-4c4b-a9ca-8e4868e727a1 62460 1 2023-03-07 03:37:19 +0000 UTC map[name:cleanup-pod] map[] [] [] [{e2e.test Update apps/v1 2023-03-07 03:37:19 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:progressDeadlineSeconds":{},"f:replicas":{},"f:revisionHistoryLimit":{},"f:selector":{},"f:strategy":{"f:rollingUpdate":{".":{},"f:maxSurge":{},"f:maxUnavailable":{}},"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} }]},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:cleanup-pod] map[] [] [] []} {[] [] [{agnhost registry.k8s.io/e2e-test-images/agnhost:2.40 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc003247178 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] nil 
}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%!,(MISSING)MaxSurge:25%!,(MISSING)},},MinReadySeconds:0,RevisionHistoryLimit:*0,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:0,Replicas:0,UpdatedReplicas:0,AvailableReplicas:0,UnavailableReplicas:0,Conditions:[]DeploymentCondition{},ReadyReplicas:0,CollisionCount:nil,},} + +Mar 7 03:37:19.107: INFO: New ReplicaSet "test-cleanup-deployment-69cb9c5497" of Deployment "test-cleanup-deployment": +&ReplicaSet{ObjectMeta:{test-cleanup-deployment-69cb9c5497 deployment-1015 684c7202-3bb0-496f-90b8-8b6c9d2c73b3 62462 1 2023-03-07 03:37:19 +0000 UTC map[name:cleanup-pod pod-template-hash:69cb9c5497] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-cleanup-deployment d8b625fc-3d49-4c4b-a9ca-8e4868e727a1 0xc0036ea1d7 0xc0036ea1d8}] [] [{kube-controller-manager Update apps/v1 2023-03-07 03:37:19 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d8b625fc-3d49-4c4b-a9ca-8e4868e727a1\"}":{}}},"f:spec":{"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} }]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod-template-hash: 69cb9c5497,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:cleanup-pod pod-template-hash:69cb9c5497] map[] [] [] []} {[] [] [{agnhost registry.k8s.io/e2e-test-images/agnhost:2.40 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc0036ea278 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] nil }},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:0,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} +Mar 7 03:37:19.107: INFO: All old ReplicaSets of Deployment "test-cleanup-deployment": +Mar 7 03:37:19.107: INFO: &ReplicaSet{ObjectMeta:{test-cleanup-controller deployment-1015 0cc6a4e6-fff9-4347-b63a-f775078431ab 62461 1 2023-03-07 03:37:14 +0000 UTC map[name:cleanup-pod pod:httpd] map[] [{apps/v1 Deployment test-cleanup-deployment d8b625fc-3d49-4c4b-a9ca-8e4868e727a1 0xc0036ea097 0xc0036ea098}] [] [{e2e.test Update apps/v1 2023-03-07 03:37:14 +0000 UTC FieldsV1 
{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod":{}}},"f:spec":{"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} } {kube-controller-manager Update apps/v1 2023-03-07 03:37:15 +0000 UTC FieldsV1 {"f:status":{"f:availableReplicas":{},"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{}}} status} {kube-controller-manager Update apps/v1 2023-03-07 03:37:19 +0000 UTC FieldsV1 {"f:metadata":{"f:ownerReferences":{".":{},"k:{\"uid\":\"d8b625fc-3d49-4c4b-a9ca-8e4868e727a1\"}":{}}}} }]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:cleanup-pod pod:httpd] map[] [] [] []} {[] [] [{httpd registry.k8s.io/e2e-test-images/httpd:2.4.38-2 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc0036ea158 ClusterFirst map[] false false false PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] nil }},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} +Mar 7 03:37:19.110: INFO: Pod "test-cleanup-controller-trd5j" is available: +&Pod{ObjectMeta:{test-cleanup-controller-trd5j test-cleanup-controller- deployment-1015 011d173f-5762-42fc-bd15-dedfe7fd9b9e 62444 0 2023-03-07 03:37:14 +0000 UTC map[name:cleanup-pod pod:httpd] map[cni.projectcalico.org/containerID:18ae2cf06e4c57c144748dea667ab93b8a225be4c3582f16771b3f92571e1b7e cni.projectcalico.org/podIP:10.233.247.46/32 cni.projectcalico.org/podIPs:10.233.247.46/32] [{apps/v1 ReplicaSet test-cleanup-controller 0cc6a4e6-fff9-4347-b63a-f775078431ab 0xc0036ea727 0xc0036ea728}] [] [{calico Update v1 2023-03-07 03:37:14 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:cni.projectcalico.org/containerID":{},"f:cni.projectcalico.org/podIP":{},"f:cni.projectcalico.org/podIPs":{}}}} status} {kube-controller-manager Update v1 2023-03-07 03:37:14 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"0cc6a4e6-fff9-4347-b63a-f775078431ab\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2023-03-07 03:37:15 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.233.247.46\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-jkcm9,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:registry.k8s.io/e2e-test-images/httpd:2.4.38-2,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-jkcm9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:node-2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:ni
l,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,HostUsers:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-03-07 03:37:14 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-03-07 03:37:15 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-03-07 03:37:15 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-03-07 03:37:14 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:192.168.1.102,PodIP:10.233.247.46,StartTime:2023-03-07 03:37:14 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2023-03-07 03:37:14 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:registry.k8s.io/e2e-test-images/httpd:2.4.38-2,ImageID:registry.k8s.io/e2e-test-images/httpd@sha256:1b9d1b2f36cb2dbee1960e82a9344aeb11bd4c4c03abf5e1853e0559c23855e3,ContainerID:containerd://e70a767b5d046be7711a254cbb790db80a56cfa77f2e67672705459593dc6ed0,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.233.247.46,},},EphemeralContainerStatuses:[]ContainerStatus{},},} +[AfterEach] [sig-apps] Deployment + test/e2e/framework/framework.go:187 +Mar 7 03:37:19.110: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "deployment-1015" for this suite. 
03/07/23 03:37:19.115 +{"msg":"PASSED [sig-apps] Deployment deployment should delete old replica sets [Conformance]","completed":197,"skipped":3301,"failed":0} +------------------------------ +• [SLOW TEST] [5.100 seconds] +[sig-apps] Deployment +test/e2e/apps/framework.go:23 + deployment should delete old replica sets [Conformance] + test/e2e/apps/deployment.go:122 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-apps] Deployment + test/e2e/framework/framework.go:186 + STEP: Creating a kubernetes client 03/07/23 03:37:14.022 + Mar 7 03:37:14.022: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 + STEP: Building a namespace api object, basename deployment 03/07/23 03:37:14.023 + STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 03:37:14.038 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 03:37:14.04 + [BeforeEach] [sig-apps] Deployment + test/e2e/apps/deployment.go:91 + [It] deployment should delete old replica sets [Conformance] + test/e2e/apps/deployment.go:122 + Mar 7 03:37:14.049: INFO: Pod name cleanup-pod: Found 0 pods out of 1 + Mar 7 03:37:19.060: INFO: Pod name cleanup-pod: Found 1 pods out of 1 + STEP: ensuring each pod is running 03/07/23 03:37:19.06 + Mar 7 03:37:19.060: INFO: Creating deployment test-cleanup-deployment + STEP: Waiting for deployment test-cleanup-deployment history to be cleaned up 03/07/23 03:37:19.091 + [AfterEach] [sig-apps] Deployment + test/e2e/apps/deployment.go:84 + Mar 7 03:37:19.101: INFO: Deployment "test-cleanup-deployment": + &Deployment{ObjectMeta:{test-cleanup-deployment deployment-1015 d8b625fc-3d49-4c4b-a9ca-8e4868e727a1 62460 1 2023-03-07 03:37:19 +0000 UTC map[name:cleanup-pod] map[] [] [] [{e2e.test Update apps/v1 2023-03-07 03:37:19 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:progressDeadlineSeconds":{},"f:replicas":{},"f:revisionHistoryLimit":{},"f:selector":{},"f:strategy":{"f:rollingUpdate":{".":{},"f:maxSurge":{},"f:maxUnavailable":{}},"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} }]},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:cleanup-pod] map[] [] [] []} {[] [] [{agnhost registry.k8s.io/e2e-test-images/agnhost:2.40 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc003247178 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] nil 
}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%!,(MISSING)MaxSurge:25%!,(MISSING)},},MinReadySeconds:0,RevisionHistoryLimit:*0,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:0,Replicas:0,UpdatedReplicas:0,AvailableReplicas:0,UnavailableReplicas:0,Conditions:[]DeploymentCondition{},ReadyReplicas:0,CollisionCount:nil,},} + + Mar 7 03:37:19.107: INFO: New ReplicaSet "test-cleanup-deployment-69cb9c5497" of Deployment "test-cleanup-deployment": + &ReplicaSet{ObjectMeta:{test-cleanup-deployment-69cb9c5497 deployment-1015 684c7202-3bb0-496f-90b8-8b6c9d2c73b3 62462 1 2023-03-07 03:37:19 +0000 UTC map[name:cleanup-pod pod-template-hash:69cb9c5497] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-cleanup-deployment d8b625fc-3d49-4c4b-a9ca-8e4868e727a1 0xc0036ea1d7 0xc0036ea1d8}] [] [{kube-controller-manager Update apps/v1 2023-03-07 03:37:19 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d8b625fc-3d49-4c4b-a9ca-8e4868e727a1\"}":{}}},"f:spec":{"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} }]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod-template-hash: 69cb9c5497,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:cleanup-pod pod-template-hash:69cb9c5497] map[] [] [] []} {[] [] [{agnhost registry.k8s.io/e2e-test-images/agnhost:2.40 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc0036ea278 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] nil }},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:0,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} + Mar 7 03:37:19.107: INFO: All old ReplicaSets of Deployment "test-cleanup-deployment": + Mar 7 03:37:19.107: INFO: &ReplicaSet{ObjectMeta:{test-cleanup-controller deployment-1015 0cc6a4e6-fff9-4347-b63a-f775078431ab 62461 1 2023-03-07 03:37:14 +0000 UTC map[name:cleanup-pod pod:httpd] map[] [{apps/v1 Deployment test-cleanup-deployment d8b625fc-3d49-4c4b-a9ca-8e4868e727a1 0xc0036ea097 0xc0036ea098}] [] [{e2e.test Update apps/v1 2023-03-07 03:37:14 +0000 UTC FieldsV1 
{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod":{}}},"f:spec":{"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} } {kube-controller-manager Update apps/v1 2023-03-07 03:37:15 +0000 UTC FieldsV1 {"f:status":{"f:availableReplicas":{},"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{}}} status} {kube-controller-manager Update apps/v1 2023-03-07 03:37:19 +0000 UTC FieldsV1 {"f:metadata":{"f:ownerReferences":{".":{},"k:{\"uid\":\"d8b625fc-3d49-4c4b-a9ca-8e4868e727a1\"}":{}}}} }]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:cleanup-pod pod:httpd] map[] [] [] []} {[] [] [{httpd registry.k8s.io/e2e-test-images/httpd:2.4.38-2 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc0036ea158 ClusterFirst map[] false false false PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] nil }},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} + Mar 7 03:37:19.110: INFO: Pod "test-cleanup-controller-trd5j" is available: + &Pod{ObjectMeta:{test-cleanup-controller-trd5j test-cleanup-controller- deployment-1015 011d173f-5762-42fc-bd15-dedfe7fd9b9e 62444 0 2023-03-07 03:37:14 +0000 UTC map[name:cleanup-pod pod:httpd] map[cni.projectcalico.org/containerID:18ae2cf06e4c57c144748dea667ab93b8a225be4c3582f16771b3f92571e1b7e cni.projectcalico.org/podIP:10.233.247.46/32 cni.projectcalico.org/podIPs:10.233.247.46/32] [{apps/v1 ReplicaSet test-cleanup-controller 0cc6a4e6-fff9-4347-b63a-f775078431ab 0xc0036ea727 0xc0036ea728}] [] [{calico Update v1 2023-03-07 03:37:14 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:cni.projectcalico.org/containerID":{},"f:cni.projectcalico.org/podIP":{},"f:cni.projectcalico.org/podIPs":{}}}} status} {kube-controller-manager Update v1 2023-03-07 03:37:14 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"0cc6a4e6-fff9-4347-b63a-f775078431ab\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2023-03-07 03:37:15 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.233.247.46\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-jkcm9,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:registry.k8s.io/e2e-test-images/httpd:2.4.38-2,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-jkcm9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:node-2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:ni
l,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,HostUsers:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-03-07 03:37:14 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-03-07 03:37:15 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-03-07 03:37:15 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-03-07 03:37:14 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:192.168.1.102,PodIP:10.233.247.46,StartTime:2023-03-07 03:37:14 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2023-03-07 03:37:14 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:registry.k8s.io/e2e-test-images/httpd:2.4.38-2,ImageID:registry.k8s.io/e2e-test-images/httpd@sha256:1b9d1b2f36cb2dbee1960e82a9344aeb11bd4c4c03abf5e1853e0559c23855e3,ContainerID:containerd://e70a767b5d046be7711a254cbb790db80a56cfa77f2e67672705459593dc6ed0,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.233.247.46,},},EphemeralContainerStatuses:[]ContainerStatus{},},} + [AfterEach] [sig-apps] Deployment + test/e2e/framework/framework.go:187 + Mar 7 03:37:19.110: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + STEP: Destroying namespace "deployment-1015" for this suite. 03/07/23 03:37:19.115 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSSSSSSSS +------------------------------ +[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] + custom resource defaulting for requests and from storage works [Conformance] + test/e2e/apimachinery/custom_resource_definition.go:269 +[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] + test/e2e/framework/framework.go:186 +STEP: Creating a kubernetes client 03/07/23 03:37:19.124 +Mar 7 03:37:19.124: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 +STEP: Building a namespace api object, basename custom-resource-definition 03/07/23 03:37:19.125 +STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 03:37:19.15 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 03:37:19.152 +[It] custom resource defaulting for requests and from storage works [Conformance] + test/e2e/apimachinery/custom_resource_definition.go:269 +Mar 7 03:37:19.154: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 +[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] + test/e2e/framework/framework.go:187 +Mar 7 03:37:22.267: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "custom-resource-definition-2852" for this suite. 
03/07/23 03:37:22.271 +{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] custom resource defaulting for requests and from storage works [Conformance]","completed":198,"skipped":3318,"failed":0} +------------------------------ +• [3.152 seconds] +[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] +test/e2e/apimachinery/framework.go:23 + custom resource defaulting for requests and from storage works [Conformance] + test/e2e/apimachinery/custom_resource_definition.go:269 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] + test/e2e/framework/framework.go:186 + STEP: Creating a kubernetes client 03/07/23 03:37:19.124 + Mar 7 03:37:19.124: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 + STEP: Building a namespace api object, basename custom-resource-definition 03/07/23 03:37:19.125 + STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 03:37:19.15 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 03:37:19.152 + [It] custom resource defaulting for requests and from storage works [Conformance] + test/e2e/apimachinery/custom_resource_definition.go:269 + Mar 7 03:37:19.154: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 + [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] + test/e2e/framework/framework.go:187 + Mar 7 03:37:22.267: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + STEP: Destroying namespace "custom-resource-definition-2852" for this suite. 03/07/23 03:37:22.271 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-storage] Projected downwardAPI + should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/common/storage/projected_downwardapi.go:67 +[BeforeEach] [sig-storage] Projected downwardAPI + test/e2e/framework/framework.go:186 +STEP: Creating a kubernetes client 03/07/23 03:37:22.277 +Mar 7 03:37:22.277: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 +STEP: Building a namespace api object, basename projected 03/07/23 03:37:22.278 +STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 03:37:22.29 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 03:37:22.293 +[BeforeEach] [sig-storage] Projected downwardAPI + test/e2e/common/storage/projected_downwardapi.go:43 +[It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/common/storage/projected_downwardapi.go:67 +STEP: Creating a pod to test downward API volume plugin 03/07/23 03:37:22.295 +Mar 7 03:37:22.301: INFO: Waiting up to 5m0s for pod "downwardapi-volume-fbdc8ffc-877e-43c7-b9b5-907a2eddd889" in namespace "projected-6773" to be "Succeeded or Failed" +Mar 7 03:37:22.303: INFO: Pod "downwardapi-volume-fbdc8ffc-877e-43c7-b9b5-907a2eddd889": Phase="Pending", Reason="", readiness=false. Elapsed: 2.15982ms +Mar 7 03:37:24.306: INFO: Pod "downwardapi-volume-fbdc8ffc-877e-43c7-b9b5-907a2eddd889": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005462335s +Mar 7 03:37:26.307: INFO: Pod "downwardapi-volume-fbdc8ffc-877e-43c7-b9b5-907a2eddd889": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.006275855s +STEP: Saw pod success 03/07/23 03:37:26.307 +Mar 7 03:37:26.307: INFO: Pod "downwardapi-volume-fbdc8ffc-877e-43c7-b9b5-907a2eddd889" satisfied condition "Succeeded or Failed" +Mar 7 03:37:26.310: INFO: Trying to get logs from node node-2 pod downwardapi-volume-fbdc8ffc-877e-43c7-b9b5-907a2eddd889 container client-container: +STEP: delete the pod 03/07/23 03:37:26.314 +Mar 7 03:37:26.334: INFO: Waiting for pod downwardapi-volume-fbdc8ffc-877e-43c7-b9b5-907a2eddd889 to disappear +Mar 7 03:37:26.337: INFO: Pod downwardapi-volume-fbdc8ffc-877e-43c7-b9b5-907a2eddd889 no longer exists +[AfterEach] [sig-storage] Projected downwardAPI + test/e2e/framework/framework.go:187 +Mar 7 03:37:26.337: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "projected-6773" for this suite. 03/07/23 03:37:26.34 +{"msg":"PASSED [sig-storage] Projected downwardAPI should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","completed":199,"skipped":3348,"failed":0} +------------------------------ +• [4.067 seconds] +[sig-storage] Projected downwardAPI +test/e2e/common/storage/framework.go:23 + should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/common/storage/projected_downwardapi.go:67 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-storage] Projected downwardAPI + test/e2e/framework/framework.go:186 + STEP: Creating a kubernetes client 03/07/23 03:37:22.277 + Mar 7 03:37:22.277: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 + STEP: Building a namespace api object, basename projected 03/07/23 03:37:22.278 + STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 03:37:22.29 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 03:37:22.293 + [BeforeEach] [sig-storage] Projected downwardAPI + test/e2e/common/storage/projected_downwardapi.go:43 + [It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/common/storage/projected_downwardapi.go:67 + STEP: Creating a pod to test downward API volume plugin 03/07/23 03:37:22.295 + Mar 7 03:37:22.301: INFO: Waiting up to 5m0s for pod "downwardapi-volume-fbdc8ffc-877e-43c7-b9b5-907a2eddd889" in namespace "projected-6773" to be "Succeeded or Failed" + Mar 7 03:37:22.303: INFO: Pod "downwardapi-volume-fbdc8ffc-877e-43c7-b9b5-907a2eddd889": Phase="Pending", Reason="", readiness=false. Elapsed: 2.15982ms + Mar 7 03:37:24.306: INFO: Pod "downwardapi-volume-fbdc8ffc-877e-43c7-b9b5-907a2eddd889": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005462335s + Mar 7 03:37:26.307: INFO: Pod "downwardapi-volume-fbdc8ffc-877e-43c7-b9b5-907a2eddd889": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.006275855s + STEP: Saw pod success 03/07/23 03:37:26.307 + Mar 7 03:37:26.307: INFO: Pod "downwardapi-volume-fbdc8ffc-877e-43c7-b9b5-907a2eddd889" satisfied condition "Succeeded or Failed" + Mar 7 03:37:26.310: INFO: Trying to get logs from node node-2 pod downwardapi-volume-fbdc8ffc-877e-43c7-b9b5-907a2eddd889 container client-container: + STEP: delete the pod 03/07/23 03:37:26.314 + Mar 7 03:37:26.334: INFO: Waiting for pod downwardapi-volume-fbdc8ffc-877e-43c7-b9b5-907a2eddd889 to disappear + Mar 7 03:37:26.337: INFO: Pod downwardapi-volume-fbdc8ffc-877e-43c7-b9b5-907a2eddd889 no longer exists + [AfterEach] [sig-storage] Projected downwardAPI + test/e2e/framework/framework.go:187 + Mar 7 03:37:26.337: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + STEP: Destroying namespace "projected-6773" for this suite. 03/07/23 03:37:26.34 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSSSS +------------------------------ +[sig-api-machinery] Namespaces [Serial] + should ensure that all services are removed when a namespace is deleted [Conformance] + test/e2e/apimachinery/namespace.go:250 +[BeforeEach] [sig-api-machinery] Namespaces [Serial] + test/e2e/framework/framework.go:186 +STEP: Creating a kubernetes client 03/07/23 03:37:26.346 +Mar 7 03:37:26.346: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 +STEP: Building a namespace api object, basename namespaces 03/07/23 03:37:26.347 +STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 03:37:26.361 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 03:37:26.363 +[It] should ensure that all services are removed when a namespace is deleted [Conformance] + test/e2e/apimachinery/namespace.go:250 +STEP: Creating a test namespace 03/07/23 03:37:26.365 +STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 03:37:26.377 +STEP: Creating a service in the namespace 03/07/23 03:37:26.379 +STEP: Deleting the namespace 03/07/23 03:37:26.391 +STEP: Waiting for the namespace to be removed. 03/07/23 03:37:26.401 +STEP: Recreating the namespace 03/07/23 03:37:32.405 +STEP: Verifying there is no service in the namespace 03/07/23 03:37:32.418 +[AfterEach] [sig-api-machinery] Namespaces [Serial] + test/e2e/framework/framework.go:187 +Mar 7 03:37:32.421: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "namespaces-6246" for this suite. 03/07/23 03:37:32.424 +STEP: Destroying namespace "nsdeletetest-4366" for this suite. 03/07/23 03:37:32.43 +Mar 7 03:37:32.432: INFO: Namespace nsdeletetest-4366 was already deleted +STEP: Destroying namespace "nsdeletetest-8897" for this suite. 
03/07/23 03:37:32.432 +{"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance]","completed":200,"skipped":3361,"failed":0} +------------------------------ +• [SLOW TEST] [6.091 seconds] +[sig-api-machinery] Namespaces [Serial] +test/e2e/apimachinery/framework.go:23 + should ensure that all services are removed when a namespace is deleted [Conformance] + test/e2e/apimachinery/namespace.go:250 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-api-machinery] Namespaces [Serial] + test/e2e/framework/framework.go:186 + STEP: Creating a kubernetes client 03/07/23 03:37:26.346 + Mar 7 03:37:26.346: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 + STEP: Building a namespace api object, basename namespaces 03/07/23 03:37:26.347 + STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 03:37:26.361 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 03:37:26.363 + [It] should ensure that all services are removed when a namespace is deleted [Conformance] + test/e2e/apimachinery/namespace.go:250 + STEP: Creating a test namespace 03/07/23 03:37:26.365 + STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 03:37:26.377 + STEP: Creating a service in the namespace 03/07/23 03:37:26.379 + STEP: Deleting the namespace 03/07/23 03:37:26.391 + STEP: Waiting for the namespace to be removed. 03/07/23 03:37:26.401 + STEP: Recreating the namespace 03/07/23 03:37:32.405 + STEP: Verifying there is no service in the namespace 03/07/23 03:37:32.418 + [AfterEach] [sig-api-machinery] Namespaces [Serial] + test/e2e/framework/framework.go:187 + Mar 7 03:37:32.421: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + STEP: Destroying namespace "namespaces-6246" for this suite. 03/07/23 03:37:32.424 + STEP: Destroying namespace "nsdeletetest-4366" for this suite. 03/07/23 03:37:32.43 + Mar 7 03:37:32.432: INFO: Namespace nsdeletetest-4366 was already deleted + STEP: Destroying namespace "nsdeletetest-8897" for this suite. 03/07/23 03:37:32.432 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSS +------------------------------ +[sig-api-machinery] ResourceQuota + should create a ResourceQuota and capture the life of a pod. [Conformance] + test/e2e/apimachinery/resource_quota.go:220 +[BeforeEach] [sig-api-machinery] ResourceQuota + test/e2e/framework/framework.go:186 +STEP: Creating a kubernetes client 03/07/23 03:37:32.438 +Mar 7 03:37:32.438: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 +STEP: Building a namespace api object, basename resourcequota 03/07/23 03:37:32.439 +STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 03:37:32.451 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 03:37:32.454 +[It] should create a ResourceQuota and capture the life of a pod. 
[Conformance] + test/e2e/apimachinery/resource_quota.go:220 +STEP: Counting existing ResourceQuota 03/07/23 03:37:32.456 +STEP: Creating a ResourceQuota 03/07/23 03:37:37.461 +STEP: Ensuring resource quota status is calculated 03/07/23 03:37:37.468 +STEP: Creating a Pod that fits quota 03/07/23 03:37:39.472 +STEP: Ensuring ResourceQuota status captures the pod usage 03/07/23 03:37:39.506 +STEP: Not allowing a pod to be created that exceeds remaining quota 03/07/23 03:37:41.53 +STEP: Not allowing a pod to be created that exceeds remaining quota(validation on extended resources) 03/07/23 03:37:41.532 +STEP: Ensuring a pod cannot update its resource requirements 03/07/23 03:37:41.534 +STEP: Ensuring attempts to update pod resource requirements did not change quota usage 03/07/23 03:37:41.537 +STEP: Deleting the pod 03/07/23 03:37:43.541 +STEP: Ensuring resource quota status released the pod usage 03/07/23 03:37:43.548 +[AfterEach] [sig-api-machinery] ResourceQuota + test/e2e/framework/framework.go:187 +Mar 7 03:37:45.551: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "resourcequota-1699" for this suite. 03/07/23 03:37:45.554 +{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a pod. [Conformance]","completed":201,"skipped":3372,"failed":0} +------------------------------ +• [SLOW TEST] [13.124 seconds] +[sig-api-machinery] ResourceQuota +test/e2e/apimachinery/framework.go:23 + should create a ResourceQuota and capture the life of a pod. [Conformance] + test/e2e/apimachinery/resource_quota.go:220 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-api-machinery] ResourceQuota + test/e2e/framework/framework.go:186 + STEP: Creating a kubernetes client 03/07/23 03:37:32.438 + Mar 7 03:37:32.438: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 + STEP: Building a namespace api object, basename resourcequota 03/07/23 03:37:32.439 + STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 03:37:32.451 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 03:37:32.454 + [It] should create a ResourceQuota and capture the life of a pod. [Conformance] + test/e2e/apimachinery/resource_quota.go:220 + STEP: Counting existing ResourceQuota 03/07/23 03:37:32.456 + STEP: Creating a ResourceQuota 03/07/23 03:37:37.461 + STEP: Ensuring resource quota status is calculated 03/07/23 03:37:37.468 + STEP: Creating a Pod that fits quota 03/07/23 03:37:39.472 + STEP: Ensuring ResourceQuota status captures the pod usage 03/07/23 03:37:39.506 + STEP: Not allowing a pod to be created that exceeds remaining quota 03/07/23 03:37:41.53 + STEP: Not allowing a pod to be created that exceeds remaining quota(validation on extended resources) 03/07/23 03:37:41.532 + STEP: Ensuring a pod cannot update its resource requirements 03/07/23 03:37:41.534 + STEP: Ensuring attempts to update pod resource requirements did not change quota usage 03/07/23 03:37:41.537 + STEP: Deleting the pod 03/07/23 03:37:43.541 + STEP: Ensuring resource quota status released the pod usage 03/07/23 03:37:43.548 + [AfterEach] [sig-api-machinery] ResourceQuota + test/e2e/framework/framework.go:187 + Mar 7 03:37:45.551: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + STEP: Destroying namespace "resourcequota-1699" for this suite. 
03/07/23 03:37:45.554 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-storage] EmptyDir volumes + should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/common/storage/empty_dir.go:176 +[BeforeEach] [sig-storage] EmptyDir volumes + test/e2e/framework/framework.go:186 +STEP: Creating a kubernetes client 03/07/23 03:37:45.564 +Mar 7 03:37:45.564: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 +STEP: Building a namespace api object, basename emptydir 03/07/23 03:37:45.565 +STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 03:37:45.578 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 03:37:45.581 +[It] should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/common/storage/empty_dir.go:176 +STEP: Creating a pod to test emptydir 0666 on node default medium 03/07/23 03:37:45.583 +Mar 7 03:37:45.590: INFO: Waiting up to 5m0s for pod "pod-ff4028fb-d3be-43df-9f15-7b8625fcf858" in namespace "emptydir-8981" to be "Succeeded or Failed" +Mar 7 03:37:45.594: INFO: Pod "pod-ff4028fb-d3be-43df-9f15-7b8625fcf858": Phase="Pending", Reason="", readiness=false. Elapsed: 3.559343ms +Mar 7 03:37:47.601: INFO: Pod "pod-ff4028fb-d3be-43df-9f15-7b8625fcf858": Phase="Running", Reason="", readiness=false. Elapsed: 2.010178164s +Mar 7 03:37:49.598: INFO: Pod "pod-ff4028fb-d3be-43df-9f15-7b8625fcf858": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.007676136s +STEP: Saw pod success 03/07/23 03:37:49.598 +Mar 7 03:37:49.598: INFO: Pod "pod-ff4028fb-d3be-43df-9f15-7b8625fcf858" satisfied condition "Succeeded or Failed" +Mar 7 03:37:49.600: INFO: Trying to get logs from node node-2 pod pod-ff4028fb-d3be-43df-9f15-7b8625fcf858 container test-container: +STEP: delete the pod 03/07/23 03:37:49.605 +Mar 7 03:37:49.614: INFO: Waiting for pod pod-ff4028fb-d3be-43df-9f15-7b8625fcf858 to disappear +Mar 7 03:37:49.616: INFO: Pod pod-ff4028fb-d3be-43df-9f15-7b8625fcf858 no longer exists +[AfterEach] [sig-storage] EmptyDir volumes + test/e2e/framework/framework.go:187 +Mar 7 03:37:49.616: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "emptydir-8981" for this suite. 
03/07/23 03:37:49.619 +{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","completed":202,"skipped":3409,"failed":0} +------------------------------ +• [4.059 seconds] +[sig-storage] EmptyDir volumes +test/e2e/common/storage/framework.go:23 + should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/common/storage/empty_dir.go:176 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-storage] EmptyDir volumes + test/e2e/framework/framework.go:186 + STEP: Creating a kubernetes client 03/07/23 03:37:45.564 + Mar 7 03:37:45.564: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 + STEP: Building a namespace api object, basename emptydir 03/07/23 03:37:45.565 + STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 03:37:45.578 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 03:37:45.581 + [It] should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/common/storage/empty_dir.go:176 + STEP: Creating a pod to test emptydir 0666 on node default medium 03/07/23 03:37:45.583 + Mar 7 03:37:45.590: INFO: Waiting up to 5m0s for pod "pod-ff4028fb-d3be-43df-9f15-7b8625fcf858" in namespace "emptydir-8981" to be "Succeeded or Failed" + Mar 7 03:37:45.594: INFO: Pod "pod-ff4028fb-d3be-43df-9f15-7b8625fcf858": Phase="Pending", Reason="", readiness=false. Elapsed: 3.559343ms + Mar 7 03:37:47.601: INFO: Pod "pod-ff4028fb-d3be-43df-9f15-7b8625fcf858": Phase="Running", Reason="", readiness=false. Elapsed: 2.010178164s + Mar 7 03:37:49.598: INFO: Pod "pod-ff4028fb-d3be-43df-9f15-7b8625fcf858": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.007676136s + STEP: Saw pod success 03/07/23 03:37:49.598 + Mar 7 03:37:49.598: INFO: Pod "pod-ff4028fb-d3be-43df-9f15-7b8625fcf858" satisfied condition "Succeeded or Failed" + Mar 7 03:37:49.600: INFO: Trying to get logs from node node-2 pod pod-ff4028fb-d3be-43df-9f15-7b8625fcf858 container test-container: + STEP: delete the pod 03/07/23 03:37:49.605 + Mar 7 03:37:49.614: INFO: Waiting for pod pod-ff4028fb-d3be-43df-9f15-7b8625fcf858 to disappear + Mar 7 03:37:49.616: INFO: Pod pod-ff4028fb-d3be-43df-9f15-7b8625fcf858 no longer exists + [AfterEach] [sig-storage] EmptyDir volumes + test/e2e/framework/framework.go:187 + Mar 7 03:37:49.616: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + STEP: Destroying namespace "emptydir-8981" for this suite. 
03/07/23 03:37:49.619 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-storage] Projected downwardAPI + should update annotations on modification [NodeConformance] [Conformance] + test/e2e/common/storage/projected_downwardapi.go:161 +[BeforeEach] [sig-storage] Projected downwardAPI + test/e2e/framework/framework.go:186 +STEP: Creating a kubernetes client 03/07/23 03:37:49.624 +Mar 7 03:37:49.624: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 +STEP: Building a namespace api object, basename projected 03/07/23 03:37:49.625 +STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 03:37:49.649 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 03:37:49.651 +[BeforeEach] [sig-storage] Projected downwardAPI + test/e2e/common/storage/projected_downwardapi.go:43 +[It] should update annotations on modification [NodeConformance] [Conformance] + test/e2e/common/storage/projected_downwardapi.go:161 +STEP: Creating the pod 03/07/23 03:37:49.653 +Mar 7 03:37:49.660: INFO: Waiting up to 5m0s for pod "annotationupdate7a1b7544-a295-4c7c-bb4d-7ff082c4f278" in namespace "projected-8895" to be "running and ready" +Mar 7 03:37:49.664: INFO: Pod "annotationupdate7a1b7544-a295-4c7c-bb4d-7ff082c4f278": Phase="Pending", Reason="", readiness=false. Elapsed: 3.845032ms +Mar 7 03:37:49.664: INFO: The phase of Pod annotationupdate7a1b7544-a295-4c7c-bb4d-7ff082c4f278 is Pending, waiting for it to be Running (with Ready = true) +Mar 7 03:37:51.668: INFO: Pod "annotationupdate7a1b7544-a295-4c7c-bb4d-7ff082c4f278": Phase="Running", Reason="", readiness=true. Elapsed: 2.008316118s +Mar 7 03:37:51.669: INFO: The phase of Pod annotationupdate7a1b7544-a295-4c7c-bb4d-7ff082c4f278 is Running (Ready = true) +Mar 7 03:37:51.669: INFO: Pod "annotationupdate7a1b7544-a295-4c7c-bb4d-7ff082c4f278" satisfied condition "running and ready" +Mar 7 03:37:52.186: INFO: Successfully updated pod "annotationupdate7a1b7544-a295-4c7c-bb4d-7ff082c4f278" +[AfterEach] [sig-storage] Projected downwardAPI + test/e2e/framework/framework.go:187 +Mar 7 03:37:56.212: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "projected-8895" for this suite. 
03/07/23 03:37:56.216 +{"msg":"PASSED [sig-storage] Projected downwardAPI should update annotations on modification [NodeConformance] [Conformance]","completed":203,"skipped":3436,"failed":0} +------------------------------ +• [SLOW TEST] [6.611 seconds] +[sig-storage] Projected downwardAPI +test/e2e/common/storage/framework.go:23 + should update annotations on modification [NodeConformance] [Conformance] + test/e2e/common/storage/projected_downwardapi.go:161 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-storage] Projected downwardAPI + test/e2e/framework/framework.go:186 + STEP: Creating a kubernetes client 03/07/23 03:37:49.624 + Mar 7 03:37:49.624: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 + STEP: Building a namespace api object, basename projected 03/07/23 03:37:49.625 + STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 03:37:49.649 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 03:37:49.651 + [BeforeEach] [sig-storage] Projected downwardAPI + test/e2e/common/storage/projected_downwardapi.go:43 + [It] should update annotations on modification [NodeConformance] [Conformance] + test/e2e/common/storage/projected_downwardapi.go:161 + STEP: Creating the pod 03/07/23 03:37:49.653 + Mar 7 03:37:49.660: INFO: Waiting up to 5m0s for pod "annotationupdate7a1b7544-a295-4c7c-bb4d-7ff082c4f278" in namespace "projected-8895" to be "running and ready" + Mar 7 03:37:49.664: INFO: Pod "annotationupdate7a1b7544-a295-4c7c-bb4d-7ff082c4f278": Phase="Pending", Reason="", readiness=false. Elapsed: 3.845032ms + Mar 7 03:37:49.664: INFO: The phase of Pod annotationupdate7a1b7544-a295-4c7c-bb4d-7ff082c4f278 is Pending, waiting for it to be Running (with Ready = true) + Mar 7 03:37:51.668: INFO: Pod "annotationupdate7a1b7544-a295-4c7c-bb4d-7ff082c4f278": Phase="Running", Reason="", readiness=true. Elapsed: 2.008316118s + Mar 7 03:37:51.669: INFO: The phase of Pod annotationupdate7a1b7544-a295-4c7c-bb4d-7ff082c4f278 is Running (Ready = true) + Mar 7 03:37:51.669: INFO: Pod "annotationupdate7a1b7544-a295-4c7c-bb4d-7ff082c4f278" satisfied condition "running and ready" + Mar 7 03:37:52.186: INFO: Successfully updated pod "annotationupdate7a1b7544-a295-4c7c-bb4d-7ff082c4f278" + [AfterEach] [sig-storage] Projected downwardAPI + test/e2e/framework/framework.go:187 + Mar 7 03:37:56.212: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + STEP: Destroying namespace "projected-8895" for this suite. 
03/07/23 03:37:56.216 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-apps] ReplicaSet + Replace and Patch tests [Conformance] + test/e2e/apps/replica_set.go:154 +[BeforeEach] [sig-apps] ReplicaSet + test/e2e/framework/framework.go:186 +STEP: Creating a kubernetes client 03/07/23 03:37:56.236 +Mar 7 03:37:56.236: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 +STEP: Building a namespace api object, basename replicaset 03/07/23 03:37:56.237 +STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 03:37:56.25 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 03:37:56.253 +[It] Replace and Patch tests [Conformance] + test/e2e/apps/replica_set.go:154 +Mar 7 03:37:56.263: INFO: Pod name sample-pod: Found 0 pods out of 1 +Mar 7 03:38:01.269: INFO: Pod name sample-pod: Found 1 pods out of 1 +STEP: ensuring each pod is running 03/07/23 03:38:01.269 +STEP: Scaling up "test-rs" replicaset 03/07/23 03:38:01.269 +Mar 7 03:38:01.304: INFO: Updating replica set "test-rs" +STEP: patching the ReplicaSet 03/07/23 03:38:01.304 +W0307 03:38:01.312629 22 warnings.go:70] unknown field "spec.template.spec.TerminationGracePeriodSeconds" +Mar 7 03:38:01.314: INFO: observed ReplicaSet test-rs in namespace replicaset-2172 with ReadyReplicas 1, AvailableReplicas 1 +Mar 7 03:38:01.323: INFO: observed ReplicaSet test-rs in namespace replicaset-2172 with ReadyReplicas 1, AvailableReplicas 1 +Mar 7 03:38:01.334: INFO: observed ReplicaSet test-rs in namespace replicaset-2172 with ReadyReplicas 1, AvailableReplicas 1 +Mar 7 03:38:01.348: INFO: observed ReplicaSet test-rs in namespace replicaset-2172 with ReadyReplicas 1, AvailableReplicas 1 +Mar 7 03:38:02.567: INFO: observed ReplicaSet test-rs in namespace replicaset-2172 with ReadyReplicas 2, AvailableReplicas 2 +Mar 7 03:38:02.897: INFO: observed Replicaset test-rs in namespace replicaset-2172 with ReadyReplicas 3 found true +[AfterEach] [sig-apps] ReplicaSet + test/e2e/framework/framework.go:187 +Mar 7 03:38:02.897: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "replicaset-2172" for this suite. 
03/07/23 03:38:02.9 +{"msg":"PASSED [sig-apps] ReplicaSet Replace and Patch tests [Conformance]","completed":204,"skipped":3461,"failed":0} +------------------------------ +• [SLOW TEST] [6.670 seconds] +[sig-apps] ReplicaSet +test/e2e/apps/framework.go:23 + Replace and Patch tests [Conformance] + test/e2e/apps/replica_set.go:154 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-apps] ReplicaSet + test/e2e/framework/framework.go:186 + STEP: Creating a kubernetes client 03/07/23 03:37:56.236 + Mar 7 03:37:56.236: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 + STEP: Building a namespace api object, basename replicaset 03/07/23 03:37:56.237 + STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 03:37:56.25 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 03:37:56.253 + [It] Replace and Patch tests [Conformance] + test/e2e/apps/replica_set.go:154 + Mar 7 03:37:56.263: INFO: Pod name sample-pod: Found 0 pods out of 1 + Mar 7 03:38:01.269: INFO: Pod name sample-pod: Found 1 pods out of 1 + STEP: ensuring each pod is running 03/07/23 03:38:01.269 + STEP: Scaling up "test-rs" replicaset 03/07/23 03:38:01.269 + Mar 7 03:38:01.304: INFO: Updating replica set "test-rs" + STEP: patching the ReplicaSet 03/07/23 03:38:01.304 + W0307 03:38:01.312629 22 warnings.go:70] unknown field "spec.template.spec.TerminationGracePeriodSeconds" + Mar 7 03:38:01.314: INFO: observed ReplicaSet test-rs in namespace replicaset-2172 with ReadyReplicas 1, AvailableReplicas 1 + Mar 7 03:38:01.323: INFO: observed ReplicaSet test-rs in namespace replicaset-2172 with ReadyReplicas 1, AvailableReplicas 1 + Mar 7 03:38:01.334: INFO: observed ReplicaSet test-rs in namespace replicaset-2172 with ReadyReplicas 1, AvailableReplicas 1 + Mar 7 03:38:01.348: INFO: observed ReplicaSet test-rs in namespace replicaset-2172 with ReadyReplicas 1, AvailableReplicas 1 + Mar 7 03:38:02.567: INFO: observed ReplicaSet test-rs in namespace replicaset-2172 with ReadyReplicas 2, AvailableReplicas 2 + Mar 7 03:38:02.897: INFO: observed Replicaset test-rs in namespace replicaset-2172 with ReadyReplicas 3 found true + [AfterEach] [sig-apps] ReplicaSet + test/e2e/framework/framework.go:187 + Mar 7 03:38:02.897: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + STEP: Destroying namespace "replicaset-2172" for this suite. 03/07/23 03:38:02.9 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-node] Container Lifecycle Hook when create a pod with lifecycle hook + should execute prestop exec hook properly [NodeConformance] [Conformance] + test/e2e/common/node/lifecycle_hook.go:114 +[BeforeEach] [sig-node] Container Lifecycle Hook + test/e2e/framework/framework.go:186 +STEP: Creating a kubernetes client 03/07/23 03:38:02.907 +Mar 7 03:38:02.907: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 +STEP: Building a namespace api object, basename container-lifecycle-hook 03/07/23 03:38:02.908 +STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 03:38:02.922 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 03:38:02.924 +[BeforeEach] when create a pod with lifecycle hook + test/e2e/common/node/lifecycle_hook.go:55 +STEP: create the container to handle the HTTPGet hook request. 
03/07/23 03:38:02.928 +Mar 7 03:38:02.934: INFO: Waiting up to 5m0s for pod "pod-handle-http-request" in namespace "container-lifecycle-hook-9591" to be "running and ready" +Mar 7 03:38:02.936: INFO: Pod "pod-handle-http-request": Phase="Pending", Reason="", readiness=false. Elapsed: 2.124006ms +Mar 7 03:38:02.936: INFO: The phase of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true) +Mar 7 03:38:04.939: INFO: Pod "pod-handle-http-request": Phase="Running", Reason="", readiness=true. Elapsed: 2.005398026s +Mar 7 03:38:04.939: INFO: The phase of Pod pod-handle-http-request is Running (Ready = true) +Mar 7 03:38:04.939: INFO: Pod "pod-handle-http-request" satisfied condition "running and ready" +[It] should execute prestop exec hook properly [NodeConformance] [Conformance] + test/e2e/common/node/lifecycle_hook.go:114 +STEP: create the pod with lifecycle hook 03/07/23 03:38:04.941 +Mar 7 03:38:04.945: INFO: Waiting up to 5m0s for pod "pod-with-prestop-exec-hook" in namespace "container-lifecycle-hook-9591" to be "running and ready" +Mar 7 03:38:04.947: INFO: Pod "pod-with-prestop-exec-hook": Phase="Pending", Reason="", readiness=false. Elapsed: 2.521853ms +Mar 7 03:38:04.947: INFO: The phase of Pod pod-with-prestop-exec-hook is Pending, waiting for it to be Running (with Ready = true) +Mar 7 03:38:06.952: INFO: Pod "pod-with-prestop-exec-hook": Phase="Running", Reason="", readiness=true. Elapsed: 2.00696285s +Mar 7 03:38:06.952: INFO: The phase of Pod pod-with-prestop-exec-hook is Running (Ready = true) +Mar 7 03:38:06.952: INFO: Pod "pod-with-prestop-exec-hook" satisfied condition "running and ready" +STEP: delete the pod with lifecycle hook 03/07/23 03:38:06.954 +Mar 7 03:38:06.963: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear +Mar 7 03:38:06.966: INFO: Pod pod-with-prestop-exec-hook still exists +Mar 7 03:38:08.967: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear +Mar 7 03:38:08.970: INFO: Pod pod-with-prestop-exec-hook still exists +Mar 7 03:38:10.966: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear +Mar 7 03:38:10.970: INFO: Pod pod-with-prestop-exec-hook no longer exists +STEP: check prestop hook 03/07/23 03:38:10.97 +[AfterEach] [sig-node] Container Lifecycle Hook + test/e2e/framework/framework.go:187 +Mar 7 03:38:10.983: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "container-lifecycle-hook-9591" for this suite. 
03/07/23 03:38:10.986 +{"msg":"PASSED [sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance]","completed":205,"skipped":3503,"failed":0} +------------------------------ +• [SLOW TEST] [8.083 seconds] +[sig-node] Container Lifecycle Hook +test/e2e/common/node/framework.go:23 + when create a pod with lifecycle hook + test/e2e/common/node/lifecycle_hook.go:46 + should execute prestop exec hook properly [NodeConformance] [Conformance] + test/e2e/common/node/lifecycle_hook.go:114 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-node] Container Lifecycle Hook + test/e2e/framework/framework.go:186 + STEP: Creating a kubernetes client 03/07/23 03:38:02.907 + Mar 7 03:38:02.907: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 + STEP: Building a namespace api object, basename container-lifecycle-hook 03/07/23 03:38:02.908 + STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 03:38:02.922 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 03:38:02.924 + [BeforeEach] when create a pod with lifecycle hook + test/e2e/common/node/lifecycle_hook.go:55 + STEP: create the container to handle the HTTPGet hook request. 03/07/23 03:38:02.928 + Mar 7 03:38:02.934: INFO: Waiting up to 5m0s for pod "pod-handle-http-request" in namespace "container-lifecycle-hook-9591" to be "running and ready" + Mar 7 03:38:02.936: INFO: Pod "pod-handle-http-request": Phase="Pending", Reason="", readiness=false. Elapsed: 2.124006ms + Mar 7 03:38:02.936: INFO: The phase of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true) + Mar 7 03:38:04.939: INFO: Pod "pod-handle-http-request": Phase="Running", Reason="", readiness=true. Elapsed: 2.005398026s + Mar 7 03:38:04.939: INFO: The phase of Pod pod-handle-http-request is Running (Ready = true) + Mar 7 03:38:04.939: INFO: Pod "pod-handle-http-request" satisfied condition "running and ready" + [It] should execute prestop exec hook properly [NodeConformance] [Conformance] + test/e2e/common/node/lifecycle_hook.go:114 + STEP: create the pod with lifecycle hook 03/07/23 03:38:04.941 + Mar 7 03:38:04.945: INFO: Waiting up to 5m0s for pod "pod-with-prestop-exec-hook" in namespace "container-lifecycle-hook-9591" to be "running and ready" + Mar 7 03:38:04.947: INFO: Pod "pod-with-prestop-exec-hook": Phase="Pending", Reason="", readiness=false. Elapsed: 2.521853ms + Mar 7 03:38:04.947: INFO: The phase of Pod pod-with-prestop-exec-hook is Pending, waiting for it to be Running (with Ready = true) + Mar 7 03:38:06.952: INFO: Pod "pod-with-prestop-exec-hook": Phase="Running", Reason="", readiness=true. 
Elapsed: 2.00696285s + Mar 7 03:38:06.952: INFO: The phase of Pod pod-with-prestop-exec-hook is Running (Ready = true) + Mar 7 03:38:06.952: INFO: Pod "pod-with-prestop-exec-hook" satisfied condition "running and ready" + STEP: delete the pod with lifecycle hook 03/07/23 03:38:06.954 + Mar 7 03:38:06.963: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear + Mar 7 03:38:06.966: INFO: Pod pod-with-prestop-exec-hook still exists + Mar 7 03:38:08.967: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear + Mar 7 03:38:08.970: INFO: Pod pod-with-prestop-exec-hook still exists + Mar 7 03:38:10.966: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear + Mar 7 03:38:10.970: INFO: Pod pod-with-prestop-exec-hook no longer exists + STEP: check prestop hook 03/07/23 03:38:10.97 + [AfterEach] [sig-node] Container Lifecycle Hook + test/e2e/framework/framework.go:187 + Mar 7 03:38:10.983: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + STEP: Destroying namespace "container-lifecycle-hook-9591" for this suite. 03/07/23 03:38:10.986 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-storage] Projected configMap + should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/common/storage/projected_configmap.go:56 +[BeforeEach] [sig-storage] Projected configMap + test/e2e/framework/framework.go:186 +STEP: Creating a kubernetes client 03/07/23 03:38:10.992 +Mar 7 03:38:10.992: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 +STEP: Building a namespace api object, basename projected 03/07/23 03:38:10.993 +STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 03:38:11.005 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 03:38:11.007 +[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/common/storage/projected_configmap.go:56 +STEP: Creating configMap with name projected-configmap-test-volume-4a0e5ba5-3057-4b00-b382-d94d77bb7a39 03/07/23 03:38:11.009 +STEP: Creating a pod to test consume configMaps 03/07/23 03:38:11.013 +Mar 7 03:38:11.018: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-681b5a92-623a-4439-8ff5-6a156b2b03bf" in namespace "projected-820" to be "Succeeded or Failed" +Mar 7 03:38:11.020: INFO: Pod "pod-projected-configmaps-681b5a92-623a-4439-8ff5-6a156b2b03bf": Phase="Pending", Reason="", readiness=false. Elapsed: 2.225021ms +Mar 7 03:38:13.024: INFO: Pod "pod-projected-configmaps-681b5a92-623a-4439-8ff5-6a156b2b03bf": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006218513s +Mar 7 03:38:15.023: INFO: Pod "pod-projected-configmaps-681b5a92-623a-4439-8ff5-6a156b2b03bf": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.005134859s +STEP: Saw pod success 03/07/23 03:38:15.023 +Mar 7 03:38:15.023: INFO: Pod "pod-projected-configmaps-681b5a92-623a-4439-8ff5-6a156b2b03bf" satisfied condition "Succeeded or Failed" +Mar 7 03:38:15.026: INFO: Trying to get logs from node node-2 pod pod-projected-configmaps-681b5a92-623a-4439-8ff5-6a156b2b03bf container agnhost-container: +STEP: delete the pod 03/07/23 03:38:15.031 +Mar 7 03:38:15.038: INFO: Waiting for pod pod-projected-configmaps-681b5a92-623a-4439-8ff5-6a156b2b03bf to disappear +Mar 7 03:38:15.040: INFO: Pod pod-projected-configmaps-681b5a92-623a-4439-8ff5-6a156b2b03bf no longer exists +[AfterEach] [sig-storage] Projected configMap + test/e2e/framework/framework.go:187 +Mar 7 03:38:15.040: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "projected-820" for this suite. 03/07/23 03:38:15.044 +{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","completed":206,"skipped":3524,"failed":0} +------------------------------ +• [4.056 seconds] +[sig-storage] Projected configMap +test/e2e/common/storage/framework.go:23 + should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/common/storage/projected_configmap.go:56 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-storage] Projected configMap + test/e2e/framework/framework.go:186 + STEP: Creating a kubernetes client 03/07/23 03:38:10.992 + Mar 7 03:38:10.992: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 + STEP: Building a namespace api object, basename projected 03/07/23 03:38:10.993 + STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 03:38:11.005 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 03:38:11.007 + [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/common/storage/projected_configmap.go:56 + STEP: Creating configMap with name projected-configmap-test-volume-4a0e5ba5-3057-4b00-b382-d94d77bb7a39 03/07/23 03:38:11.009 + STEP: Creating a pod to test consume configMaps 03/07/23 03:38:11.013 + Mar 7 03:38:11.018: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-681b5a92-623a-4439-8ff5-6a156b2b03bf" in namespace "projected-820" to be "Succeeded or Failed" + Mar 7 03:38:11.020: INFO: Pod "pod-projected-configmaps-681b5a92-623a-4439-8ff5-6a156b2b03bf": Phase="Pending", Reason="", readiness=false. Elapsed: 2.225021ms + Mar 7 03:38:13.024: INFO: Pod "pod-projected-configmaps-681b5a92-623a-4439-8ff5-6a156b2b03bf": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006218513s + Mar 7 03:38:15.023: INFO: Pod "pod-projected-configmaps-681b5a92-623a-4439-8ff5-6a156b2b03bf": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.005134859s + STEP: Saw pod success 03/07/23 03:38:15.023 + Mar 7 03:38:15.023: INFO: Pod "pod-projected-configmaps-681b5a92-623a-4439-8ff5-6a156b2b03bf" satisfied condition "Succeeded or Failed" + Mar 7 03:38:15.026: INFO: Trying to get logs from node node-2 pod pod-projected-configmaps-681b5a92-623a-4439-8ff5-6a156b2b03bf container agnhost-container: + STEP: delete the pod 03/07/23 03:38:15.031 + Mar 7 03:38:15.038: INFO: Waiting for pod pod-projected-configmaps-681b5a92-623a-4439-8ff5-6a156b2b03bf to disappear + Mar 7 03:38:15.040: INFO: Pod pod-projected-configmaps-681b5a92-623a-4439-8ff5-6a156b2b03bf no longer exists + [AfterEach] [sig-storage] Projected configMap + test/e2e/framework/framework.go:187 + Mar 7 03:38:15.040: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + STEP: Destroying namespace "projected-820" for this suite. 03/07/23 03:38:15.044 + << End Captured GinkgoWriter Output +------------------------------ +SS +------------------------------ +[sig-cli] Kubectl client Kubectl describe + should check if kubectl describe prints relevant information for rc and pods [Conformance] + test/e2e/kubectl/kubectl.go:1274 +[BeforeEach] [sig-cli] Kubectl client + test/e2e/framework/framework.go:186 +STEP: Creating a kubernetes client 03/07/23 03:38:15.049 +Mar 7 03:38:15.049: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 +STEP: Building a namespace api object, basename kubectl 03/07/23 03:38:15.049 +STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 03:38:15.061 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 03:38:15.063 +[BeforeEach] [sig-cli] Kubectl client + test/e2e/kubectl/kubectl.go:272 +[It] should check if kubectl describe prints relevant information for rc and pods [Conformance] + test/e2e/kubectl/kubectl.go:1274 +Mar 7 03:38:15.065: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1093879902 --namespace=kubectl-7834 create -f -' +Mar 7 03:38:16.302: INFO: stderr: "" +Mar 7 03:38:16.302: INFO: stdout: "replicationcontroller/agnhost-primary created\n" +Mar 7 03:38:16.303: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1093879902 --namespace=kubectl-7834 create -f -' +Mar 7 03:38:16.583: INFO: stderr: "" +Mar 7 03:38:16.583: INFO: stdout: "service/agnhost-primary created\n" +STEP: Waiting for Agnhost primary to start. 03/07/23 03:38:16.583 +Mar 7 03:38:17.586: INFO: Selector matched 1 pods for map[app:agnhost] +Mar 7 03:38:17.586: INFO: Found 1 / 1 +Mar 7 03:38:17.586: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 +Mar 7 03:38:17.589: INFO: Selector matched 1 pods for map[app:agnhost] +Mar 7 03:38:17.589: INFO: ForEach: Found 1 pods from the filter. Now looping through them. 
+Mar 7 03:38:17.589: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1093879902 --namespace=kubectl-7834 describe pod agnhost-primary-rmctl' +Mar 7 03:38:17.769: INFO: stderr: "" +Mar 7 03:38:17.769: INFO: stdout: "Name: agnhost-primary-rmctl\nNamespace: kubectl-7834\nPriority: 0\nService Account: default\nNode: node-2/192.168.1.102\nStart Time: Tue, 07 Mar 2023 03:38:16 +0000\nLabels: app=agnhost\n role=primary\nAnnotations: cni.projectcalico.org/containerID: 3789a9637c4deac51abaee299e5d7d6cc645e6981084b43803fc643a6d656abc\n cni.projectcalico.org/podIP: 10.233.247.24/32\n cni.projectcalico.org/podIPs: 10.233.247.24/32\nStatus: Running\nIP: 10.233.247.24\nIPs:\n IP: 10.233.247.24\nControlled By: ReplicationController/agnhost-primary\nContainers:\n agnhost-primary:\n Container ID: containerd://f443bb683704e1a60428bc64626af50a271ecc5422e535d9321c0e09a7868c6e\n Image: registry.k8s.io/e2e-test-images/agnhost:2.40\n Image ID: registry.k8s.io/e2e-test-images/agnhost@sha256:af7e3857d87770ddb40f5ea4f89b5a2709504ab1ee31f9ea4ab5823c045f2146\n Port: 6379/TCP\n Host Port: 0/TCP\n State: Running\n Started: Tue, 07 Mar 2023 03:38:17 +0000\n Ready: True\n Restart Count: 0\n Environment: \n Mounts:\n /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-25vhp (ro)\nConditions:\n Type Status\n Initialized True \n Ready True \n ContainersReady True \n PodScheduled True \nVolumes:\n kube-api-access-25vhp:\n Type: Projected (a volume that contains injected data from multiple sources)\n TokenExpirationSeconds: 3607\n ConfigMapName: kube-root-ca.crt\n ConfigMapOptional: \n DownwardAPI: true\nQoS Class: BestEffort\nNode-Selectors: \nTolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s\n node.kubernetes.io/unreachable:NoExecute op=Exists for 300s\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal Scheduled 1s default-scheduler Successfully assigned kubectl-7834/agnhost-primary-rmctl to node-2\n Normal Pulled 1s kubelet Container image \"registry.k8s.io/e2e-test-images/agnhost:2.40\" already present on machine\n Normal Created 1s kubelet Created container agnhost-primary\n Normal Started 0s kubelet Started container agnhost-primary\n" +Mar 7 03:38:17.769: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1093879902 --namespace=kubectl-7834 describe rc agnhost-primary' +Mar 7 03:38:17.954: INFO: stderr: "" +Mar 7 03:38:17.954: INFO: stdout: "Name: agnhost-primary\nNamespace: kubectl-7834\nSelector: app=agnhost,role=primary\nLabels: app=agnhost\n role=primary\nAnnotations: \nReplicas: 1 current / 1 desired\nPods Status: 1 Running / 0 Waiting / 0 Succeeded / 0 Failed\nPod Template:\n Labels: app=agnhost\n role=primary\n Containers:\n agnhost-primary:\n Image: registry.k8s.io/e2e-test-images/agnhost:2.40\n Port: 6379/TCP\n Host Port: 0/TCP\n Environment: \n Mounts: \n Volumes: \nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal SuccessfulCreate 1s replication-controller Created pod: agnhost-primary-rmctl\n" +Mar 7 03:38:17.954: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1093879902 --namespace=kubectl-7834 describe service agnhost-primary' +Mar 7 03:38:18.123: INFO: stderr: "" +Mar 7 03:38:18.123: INFO: stdout: "Name: agnhost-primary\nNamespace: kubectl-7834\nLabels: app=agnhost\n role=primary\nAnnotations: \nSelector: app=agnhost,role=primary\nType: ClusterIP\nIP Family Policy: SingleStack\nIP Families: IPv4\nIP: 10.96.18.253\nIPs: 10.96.18.253\nPort: 
6379/TCP\nTargetPort: agnhost-server/TCP\nEndpoints: 10.233.247.24:6379\nSession Affinity: None\nEvents: \n" +Mar 7 03:38:18.127: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1093879902 --namespace=kubectl-7834 describe node bootstrap' +Mar 7 03:38:18.333: INFO: stderr: "" +Mar 7 03:38:18.333: INFO: stdout: "Name: bootstrap\nRoles: bootstrap,etcd,infra,master,node\nLabels: beta.kubernetes.io/arch=amd64\n beta.kubernetes.io/os=linux\n kubernetes.io/arch=amd64\n kubernetes.io/hostname=bootstrap\n kubernetes.io/os=linux\n metalk8s.scality.com/version=125.0.0-dev\n node-role.kubernetes.io/bootstrap=\n node-role.kubernetes.io/etcd=\n node-role.kubernetes.io/infra=\n node-role.kubernetes.io/master=\n node-role.kubernetes.io/node=\n topology.kubernetes.io/region=default\n topology.kubernetes.io/zone=default\nAnnotations: kubeadm.alpha.kubernetes.io/cri-socket: unix:///run/containerd/containerd.sock\n node.alpha.kubernetes.io/ttl: 0\n projectcalico.org/IPv4Address: 192.168.2.100/24\n projectcalico.org/IPv4IPIPTunnelAddr: 10.233.132.64\n volumes.kubernetes.io/controller-managed-attach-detach: true\nCreationTimestamp: Tue, 07 Mar 2023 00:43:31 +0000\nTaints: \nUnschedulable: false\nLease:\n HolderIdentity: bootstrap\n AcquireTime: \n RenewTime: Tue, 07 Mar 2023 03:38:13 +0000\nConditions:\n Type Status LastHeartbeatTime LastTransitionTime Reason Message\n ---- ------ ----------------- ------------------ ------ -------\n NetworkUnavailable False Tue, 07 Mar 2023 02:23:56 +0000 Tue, 07 Mar 2023 02:23:56 +0000 CalicoIsUp Calico is running on this node\n MemoryPressure False Tue, 07 Mar 2023 03:34:28 +0000 Tue, 07 Mar 2023 01:57:40 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available\n DiskPressure False Tue, 07 Mar 2023 03:34:28 +0000 Tue, 07 Mar 2023 01:57:40 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure\n PIDPressure False Tue, 07 Mar 2023 03:34:28 +0000 Tue, 07 Mar 2023 01:57:40 +0000 KubeletHasSufficientPID kubelet has sufficient PID available\n Ready True Tue, 07 Mar 2023 03:34:28 +0000 Tue, 07 Mar 2023 01:57:40 +0000 KubeletReady kubelet is posting ready status\nAddresses:\n InternalIP: 192.168.1.100\n Hostname: bootstrap\nCapacity:\n cpu: 4\n ephemeral-storage: 104846316Ki\n hugepages-2Mi: 0\n memory: 14810400Ki\n pods: 110\n scheduling.k8s.io/foo: 5\nAllocatable:\n cpu: 4\n ephemeral-storage: 96626364666\n hugepages-2Mi: 0\n memory: 14708000Ki\n pods: 110\n scheduling.k8s.io/foo: 5\nSystem Info:\n Machine ID: 1a6e796012d546ea930557182eb37568\n System UUID: 1a6e7960-12d5-46ea-9305-57182eb37568\n Boot ID: 393f420b-4802-4ad3-9fde-fb4cf446175e\n Kernel Version: 4.18.0-372.32.1.el8_6.x86_64\n OS Image: Rocky Linux 8.6 (Green Obsidian)\n Operating System: linux\n Architecture: amd64\n Container Runtime Version: containerd://1.6.19\n Kubelet Version: v1.25.5\n Kube-Proxy Version: v1.25.5\nPodCIDR: 10.233.0.0/24\nPodCIDRs: 10.233.0.0/24\nNon-terminated Pods: (30 in total)\n Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits Age\n --------- ---- ------------ ---------- --------------- ------------- ---\n kube-system apiserver-proxy-bootstrap 25m (0%) 0 (0%) 32M (0%) 0 (0%) 173m\n kube-system backup-747d8c577b-wdcvl 0 (0%) 0 (0%) 0 (0%) 0 (0%) 174m\n kube-system calico-kube-controllers-59685599d8-pvn74 0 (0%) 0 (0%) 0 (0%) 0 (0%) 174m\n kube-system calico-node-mlncm 250m (6%) 0 (0%) 0 (0%) 0 (0%) 74m\n kube-system coredns-5d7b997fcf-2j4jw 100m (2%) 0 (0%) 70Mi (0%) 170Mi (1%) 40m\n kube-system etcd-bootstrap 100m (2%) 0 (0%) 100Mi 
(0%) 0 (0%) 173m\n kube-system kube-apiserver-bootstrap 250m (6%) 0 (0%) 0 (0%) 0 (0%) 100m\n kube-system kube-controller-manager-bootstrap 200m (5%) 0 (0%) 0 (0%) 0 (0%) 173m\n kube-system kube-proxy-nlf5t 0 (0%) 0 (0%) 0 (0%) 0 (0%) 74m\n kube-system kube-scheduler-bootstrap 100m (2%) 0 (0%) 0 (0%) 0 (0%) 172m\n kube-system metalk8s-operator-controller-manager-7d4764b947-crj2f 10m (0%) 500m (12%) 64Mi (0%) 128Mi (0%) 174m\n kube-system repositories-bootstrap 0 (0%) 0 (0%) 0 (0%) 0 (0%) 91m\n kube-system salt-master-bootstrap 0 (0%) 0 (0%) 0 (0%) 0 (0%) 174m\n kube-system storage-operator-78f5dcc84f-jwnzl 100m (2%) 200m (5%) 20Mi (0%) 100Mi (0%) 172m\n metalk8s-auth dex-57f9db7c4-hbrhr 0 (0%) 0 (0%) 0 (0%) 0 (0%) 84m\n metalk8s-auth dex-57f9db7c4-z6gh6 0 (0%) 0 (0%) 0 (0%) 0 (0%) 84m\n metalk8s-ingress ingress-control-plane-managed-vip-n2qb6 0 (0%) 0 (0%) 0 (0%) 0 (0%) 92m\n metalk8s-ingress ingress-nginx-control-plane-controller-j9hsf 100m (2%) 0 (0%) 90Mi (0%) 0 (0%) 172m\n metalk8s-ingress ingress-nginx-controller-vjnvw 100m (2%) 0 (0%) 90Mi (0%) 0 (0%) 88m\n metalk8s-ingress ingress-nginx-defaultbackend-75c64bd745-65gwj 0 (0%) 0 (0%) 0 (0%) 0 (0%) 172m\n metalk8s-logging fluent-bit-dzhms 100m (2%) 0 (0%) 200Mi (1%) 1Gi (7%) 172m\n metalk8s-monitoring metalk8s-alert-logger-84f87c86d-hflm5 0 (0%) 0 (0%) 0 (0%) 0 (0%) 173m\n metalk8s-monitoring prometheus-adapter-6696954b59-qrxtn 0 (0%) 0 (0%) 0 (0%) 0 (0%) 172m\n metalk8s-monitoring prometheus-operator-kube-state-metrics-f7d5dc499-t4szw 0 (0%) 0 (0%) 0 (0%) 0 (0%) 173m\n metalk8s-monitoring prometheus-operator-operator-864bc5b5d-8m6lq 0 (0%) 0 (0%) 0 (0%) 0 (0%) 172m\n metalk8s-monitoring prometheus-operator-prometheus-node-exporter-sl4bq 0 (0%) 0 (0%) 0 (0%) 0 (0%) 173m\n metalk8s-monitoring thanos-query-6b9dc579dd-ctlrl 0 (0%) 0 (0%) 0 (0%) 0 (0%) 172m\n metalk8s-ui metalk8s-ui-766c8b96cd-8cxcs 100m (2%) 0 (0%) 70Mi (0%) 170Mi (1%) 172m\n metalk8s-ui metalk8s-ui-766c8b96cd-tsx5v 100m (2%) 0 (0%) 70Mi (0%) 170Mi (1%) 172m\n sonobuoy sonobuoy-systemd-logs-daemon-set-ca72222986ab487c-t8mkz 0 (0%) 0 (0%) 0 (0%) 0 (0%) 73m\nAllocated resources:\n (Total limits may be over 100 percent, i.e., overcommitted.)\n Resource Requests Limits\n -------- -------- ------\n cpu 1635m (40%) 700m (17%)\n memory 843597824 (5%) 1762Mi (12%)\n ephemeral-storage 0 (0%) 0 (0%)\n hugepages-2Mi 0 (0%) 0 (0%)\n scheduling.k8s.io/foo 0 0\nEvents: \n" +Mar 7 03:38:18.335: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1093879902 --namespace=kubectl-7834 describe namespace kubectl-7834' +Mar 7 03:38:18.510: INFO: stderr: "" +Mar 7 03:38:18.510: INFO: stdout: "Name: kubectl-7834\nLabels: e2e-framework=kubectl\n e2e-run=6324f2f6-a3ba-451f-b5e1-c00345bec06a\n kubernetes.io/metadata.name=kubectl-7834\n pod-security.kubernetes.io/enforce=baseline\nAnnotations: \nStatus: Active\n\nNo resource quota.\n\nNo LimitRange resource.\n" +[AfterEach] [sig-cli] Kubectl client + test/e2e/framework/framework.go:187 +Mar 7 03:38:18.510: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "kubectl-7834" for this suite. 
03/07/23 03:38:18.513 +{"msg":"PASSED [sig-cli] Kubectl client Kubectl describe should check if kubectl describe prints relevant information for rc and pods [Conformance]","completed":207,"skipped":3526,"failed":0} +------------------------------ +• [3.502 seconds] +[sig-cli] Kubectl client +test/e2e/kubectl/framework.go:23 + Kubectl describe + test/e2e/kubectl/kubectl.go:1268 + should check if kubectl describe prints relevant information for rc and pods [Conformance] + test/e2e/kubectl/kubectl.go:1274 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-cli] Kubectl client + test/e2e/framework/framework.go:186 + STEP: Creating a kubernetes client 03/07/23 03:38:15.049 + Mar 7 03:38:15.049: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 + STEP: Building a namespace api object, basename kubectl 03/07/23 03:38:15.049 + STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 03:38:15.061 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 03:38:15.063 + [BeforeEach] [sig-cli] Kubectl client + test/e2e/kubectl/kubectl.go:272 + [It] should check if kubectl describe prints relevant information for rc and pods [Conformance] + test/e2e/kubectl/kubectl.go:1274 + Mar 7 03:38:15.065: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1093879902 --namespace=kubectl-7834 create -f -' + Mar 7 03:38:16.302: INFO: stderr: "" + Mar 7 03:38:16.302: INFO: stdout: "replicationcontroller/agnhost-primary created\n" + Mar 7 03:38:16.303: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1093879902 --namespace=kubectl-7834 create -f -' + Mar 7 03:38:16.583: INFO: stderr: "" + Mar 7 03:38:16.583: INFO: stdout: "service/agnhost-primary created\n" + STEP: Waiting for Agnhost primary to start. 03/07/23 03:38:16.583 + Mar 7 03:38:17.586: INFO: Selector matched 1 pods for map[app:agnhost] + Mar 7 03:38:17.586: INFO: Found 1 / 1 + Mar 7 03:38:17.586: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 + Mar 7 03:38:17.589: INFO: Selector matched 1 pods for map[app:agnhost] + Mar 7 03:38:17.589: INFO: ForEach: Found 1 pods from the filter. Now looping through them. 
+ Mar 7 03:38:17.589: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1093879902 --namespace=kubectl-7834 describe pod agnhost-primary-rmctl' + Mar 7 03:38:17.769: INFO: stderr: "" + Mar 7 03:38:17.769: INFO: stdout: "Name: agnhost-primary-rmctl\nNamespace: kubectl-7834\nPriority: 0\nService Account: default\nNode: node-2/192.168.1.102\nStart Time: Tue, 07 Mar 2023 03:38:16 +0000\nLabels: app=agnhost\n role=primary\nAnnotations: cni.projectcalico.org/containerID: 3789a9637c4deac51abaee299e5d7d6cc645e6981084b43803fc643a6d656abc\n cni.projectcalico.org/podIP: 10.233.247.24/32\n cni.projectcalico.org/podIPs: 10.233.247.24/32\nStatus: Running\nIP: 10.233.247.24\nIPs:\n IP: 10.233.247.24\nControlled By: ReplicationController/agnhost-primary\nContainers:\n agnhost-primary:\n Container ID: containerd://f443bb683704e1a60428bc64626af50a271ecc5422e535d9321c0e09a7868c6e\n Image: registry.k8s.io/e2e-test-images/agnhost:2.40\n Image ID: registry.k8s.io/e2e-test-images/agnhost@sha256:af7e3857d87770ddb40f5ea4f89b5a2709504ab1ee31f9ea4ab5823c045f2146\n Port: 6379/TCP\n Host Port: 0/TCP\n State: Running\n Started: Tue, 07 Mar 2023 03:38:17 +0000\n Ready: True\n Restart Count: 0\n Environment: \n Mounts:\n /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-25vhp (ro)\nConditions:\n Type Status\n Initialized True \n Ready True \n ContainersReady True \n PodScheduled True \nVolumes:\n kube-api-access-25vhp:\n Type: Projected (a volume that contains injected data from multiple sources)\n TokenExpirationSeconds: 3607\n ConfigMapName: kube-root-ca.crt\n ConfigMapOptional: \n DownwardAPI: true\nQoS Class: BestEffort\nNode-Selectors: \nTolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s\n node.kubernetes.io/unreachable:NoExecute op=Exists for 300s\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal Scheduled 1s default-scheduler Successfully assigned kubectl-7834/agnhost-primary-rmctl to node-2\n Normal Pulled 1s kubelet Container image \"registry.k8s.io/e2e-test-images/agnhost:2.40\" already present on machine\n Normal Created 1s kubelet Created container agnhost-primary\n Normal Started 0s kubelet Started container agnhost-primary\n" + Mar 7 03:38:17.769: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1093879902 --namespace=kubectl-7834 describe rc agnhost-primary' + Mar 7 03:38:17.954: INFO: stderr: "" + Mar 7 03:38:17.954: INFO: stdout: "Name: agnhost-primary\nNamespace: kubectl-7834\nSelector: app=agnhost,role=primary\nLabels: app=agnhost\n role=primary\nAnnotations: \nReplicas: 1 current / 1 desired\nPods Status: 1 Running / 0 Waiting / 0 Succeeded / 0 Failed\nPod Template:\n Labels: app=agnhost\n role=primary\n Containers:\n agnhost-primary:\n Image: registry.k8s.io/e2e-test-images/agnhost:2.40\n Port: 6379/TCP\n Host Port: 0/TCP\n Environment: \n Mounts: \n Volumes: \nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal SuccessfulCreate 1s replication-controller Created pod: agnhost-primary-rmctl\n" + Mar 7 03:38:17.954: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1093879902 --namespace=kubectl-7834 describe service agnhost-primary' + Mar 7 03:38:18.123: INFO: stderr: "" + Mar 7 03:38:18.123: INFO: stdout: "Name: agnhost-primary\nNamespace: kubectl-7834\nLabels: app=agnhost\n role=primary\nAnnotations: \nSelector: app=agnhost,role=primary\nType: ClusterIP\nIP Family Policy: SingleStack\nIP Families: IPv4\nIP: 10.96.18.253\nIPs: 10.96.18.253\nPort: 
6379/TCP\nTargetPort: agnhost-server/TCP\nEndpoints: 10.233.247.24:6379\nSession Affinity: None\nEvents: \n" + Mar 7 03:38:18.127: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1093879902 --namespace=kubectl-7834 describe node bootstrap' + Mar 7 03:38:18.333: INFO: stderr: "" + Mar 7 03:38:18.333: INFO: stdout: "Name: bootstrap\nRoles: bootstrap,etcd,infra,master,node\nLabels: beta.kubernetes.io/arch=amd64\n beta.kubernetes.io/os=linux\n kubernetes.io/arch=amd64\n kubernetes.io/hostname=bootstrap\n kubernetes.io/os=linux\n metalk8s.scality.com/version=125.0.0-dev\n node-role.kubernetes.io/bootstrap=\n node-role.kubernetes.io/etcd=\n node-role.kubernetes.io/infra=\n node-role.kubernetes.io/master=\n node-role.kubernetes.io/node=\n topology.kubernetes.io/region=default\n topology.kubernetes.io/zone=default\nAnnotations: kubeadm.alpha.kubernetes.io/cri-socket: unix:///run/containerd/containerd.sock\n node.alpha.kubernetes.io/ttl: 0\n projectcalico.org/IPv4Address: 192.168.2.100/24\n projectcalico.org/IPv4IPIPTunnelAddr: 10.233.132.64\n volumes.kubernetes.io/controller-managed-attach-detach: true\nCreationTimestamp: Tue, 07 Mar 2023 00:43:31 +0000\nTaints: \nUnschedulable: false\nLease:\n HolderIdentity: bootstrap\n AcquireTime: \n RenewTime: Tue, 07 Mar 2023 03:38:13 +0000\nConditions:\n Type Status LastHeartbeatTime LastTransitionTime Reason Message\n ---- ------ ----------------- ------------------ ------ -------\n NetworkUnavailable False Tue, 07 Mar 2023 02:23:56 +0000 Tue, 07 Mar 2023 02:23:56 +0000 CalicoIsUp Calico is running on this node\n MemoryPressure False Tue, 07 Mar 2023 03:34:28 +0000 Tue, 07 Mar 2023 01:57:40 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available\n DiskPressure False Tue, 07 Mar 2023 03:34:28 +0000 Tue, 07 Mar 2023 01:57:40 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure\n PIDPressure False Tue, 07 Mar 2023 03:34:28 +0000 Tue, 07 Mar 2023 01:57:40 +0000 KubeletHasSufficientPID kubelet has sufficient PID available\n Ready True Tue, 07 Mar 2023 03:34:28 +0000 Tue, 07 Mar 2023 01:57:40 +0000 KubeletReady kubelet is posting ready status\nAddresses:\n InternalIP: 192.168.1.100\n Hostname: bootstrap\nCapacity:\n cpu: 4\n ephemeral-storage: 104846316Ki\n hugepages-2Mi: 0\n memory: 14810400Ki\n pods: 110\n scheduling.k8s.io/foo: 5\nAllocatable:\n cpu: 4\n ephemeral-storage: 96626364666\n hugepages-2Mi: 0\n memory: 14708000Ki\n pods: 110\n scheduling.k8s.io/foo: 5\nSystem Info:\n Machine ID: 1a6e796012d546ea930557182eb37568\n System UUID: 1a6e7960-12d5-46ea-9305-57182eb37568\n Boot ID: 393f420b-4802-4ad3-9fde-fb4cf446175e\n Kernel Version: 4.18.0-372.32.1.el8_6.x86_64\n OS Image: Rocky Linux 8.6 (Green Obsidian)\n Operating System: linux\n Architecture: amd64\n Container Runtime Version: containerd://1.6.19\n Kubelet Version: v1.25.5\n Kube-Proxy Version: v1.25.5\nPodCIDR: 10.233.0.0/24\nPodCIDRs: 10.233.0.0/24\nNon-terminated Pods: (30 in total)\n Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits Age\n --------- ---- ------------ ---------- --------------- ------------- ---\n kube-system apiserver-proxy-bootstrap 25m (0%) 0 (0%) 32M (0%) 0 (0%) 173m\n kube-system backup-747d8c577b-wdcvl 0 (0%) 0 (0%) 0 (0%) 0 (0%) 174m\n kube-system calico-kube-controllers-59685599d8-pvn74 0 (0%) 0 (0%) 0 (0%) 0 (0%) 174m\n kube-system calico-node-mlncm 250m (6%) 0 (0%) 0 (0%) 0 (0%) 74m\n kube-system coredns-5d7b997fcf-2j4jw 100m (2%) 0 (0%) 70Mi (0%) 170Mi (1%) 40m\n kube-system etcd-bootstrap 100m (2%) 0 (0%) 
100Mi (0%) 0 (0%) 173m\n kube-system kube-apiserver-bootstrap 250m (6%) 0 (0%) 0 (0%) 0 (0%) 100m\n kube-system kube-controller-manager-bootstrap 200m (5%) 0 (0%) 0 (0%) 0 (0%) 173m\n kube-system kube-proxy-nlf5t 0 (0%) 0 (0%) 0 (0%) 0 (0%) 74m\n kube-system kube-scheduler-bootstrap 100m (2%) 0 (0%) 0 (0%) 0 (0%) 172m\n kube-system metalk8s-operator-controller-manager-7d4764b947-crj2f 10m (0%) 500m (12%) 64Mi (0%) 128Mi (0%) 174m\n kube-system repositories-bootstrap 0 (0%) 0 (0%) 0 (0%) 0 (0%) 91m\n kube-system salt-master-bootstrap 0 (0%) 0 (0%) 0 (0%) 0 (0%) 174m\n kube-system storage-operator-78f5dcc84f-jwnzl 100m (2%) 200m (5%) 20Mi (0%) 100Mi (0%) 172m\n metalk8s-auth dex-57f9db7c4-hbrhr 0 (0%) 0 (0%) 0 (0%) 0 (0%) 84m\n metalk8s-auth dex-57f9db7c4-z6gh6 0 (0%) 0 (0%) 0 (0%) 0 (0%) 84m\n metalk8s-ingress ingress-control-plane-managed-vip-n2qb6 0 (0%) 0 (0%) 0 (0%) 0 (0%) 92m\n metalk8s-ingress ingress-nginx-control-plane-controller-j9hsf 100m (2%) 0 (0%) 90Mi (0%) 0 (0%) 172m\n metalk8s-ingress ingress-nginx-controller-vjnvw 100m (2%) 0 (0%) 90Mi (0%) 0 (0%) 88m\n metalk8s-ingress ingress-nginx-defaultbackend-75c64bd745-65gwj 0 (0%) 0 (0%) 0 (0%) 0 (0%) 172m\n metalk8s-logging fluent-bit-dzhms 100m (2%) 0 (0%) 200Mi (1%) 1Gi (7%) 172m\n metalk8s-monitoring metalk8s-alert-logger-84f87c86d-hflm5 0 (0%) 0 (0%) 0 (0%) 0 (0%) 173m\n metalk8s-monitoring prometheus-adapter-6696954b59-qrxtn 0 (0%) 0 (0%) 0 (0%) 0 (0%) 172m\n metalk8s-monitoring prometheus-operator-kube-state-metrics-f7d5dc499-t4szw 0 (0%) 0 (0%) 0 (0%) 0 (0%) 173m\n metalk8s-monitoring prometheus-operator-operator-864bc5b5d-8m6lq 0 (0%) 0 (0%) 0 (0%) 0 (0%) 172m\n metalk8s-monitoring prometheus-operator-prometheus-node-exporter-sl4bq 0 (0%) 0 (0%) 0 (0%) 0 (0%) 173m\n metalk8s-monitoring thanos-query-6b9dc579dd-ctlrl 0 (0%) 0 (0%) 0 (0%) 0 (0%) 172m\n metalk8s-ui metalk8s-ui-766c8b96cd-8cxcs 100m (2%) 0 (0%) 70Mi (0%) 170Mi (1%) 172m\n metalk8s-ui metalk8s-ui-766c8b96cd-tsx5v 100m (2%) 0 (0%) 70Mi (0%) 170Mi (1%) 172m\n sonobuoy sonobuoy-systemd-logs-daemon-set-ca72222986ab487c-t8mkz 0 (0%) 0 (0%) 0 (0%) 0 (0%) 73m\nAllocated resources:\n (Total limits may be over 100 percent, i.e., overcommitted.)\n Resource Requests Limits\n -------- -------- ------\n cpu 1635m (40%) 700m (17%)\n memory 843597824 (5%) 1762Mi (12%)\n ephemeral-storage 0 (0%) 0 (0%)\n hugepages-2Mi 0 (0%) 0 (0%)\n scheduling.k8s.io/foo 0 0\nEvents: \n" + Mar 7 03:38:18.335: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1093879902 --namespace=kubectl-7834 describe namespace kubectl-7834' + Mar 7 03:38:18.510: INFO: stderr: "" + Mar 7 03:38:18.510: INFO: stdout: "Name: kubectl-7834\nLabels: e2e-framework=kubectl\n e2e-run=6324f2f6-a3ba-451f-b5e1-c00345bec06a\n kubernetes.io/metadata.name=kubectl-7834\n pod-security.kubernetes.io/enforce=baseline\nAnnotations: \nStatus: Active\n\nNo resource quota.\n\nNo LimitRange resource.\n" + [AfterEach] [sig-cli] Kubectl client + test/e2e/framework/framework.go:187 + Mar 7 03:38:18.510: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + STEP: Destroying namespace "kubectl-7834" for this suite. 
03/07/23 03:38:18.513 + << End Captured GinkgoWriter Output +------------------------------ +[sig-storage] Projected configMap + updates should be reflected in volume [NodeConformance] [Conformance] + test/e2e/common/storage/projected_configmap.go:123 +[BeforeEach] [sig-storage] Projected configMap + test/e2e/framework/framework.go:186 +STEP: Creating a kubernetes client 03/07/23 03:38:18.551 +Mar 7 03:38:18.551: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 +STEP: Building a namespace api object, basename projected 03/07/23 03:38:18.551 +STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 03:38:18.565 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 03:38:18.567 +[It] updates should be reflected in volume [NodeConformance] [Conformance] + test/e2e/common/storage/projected_configmap.go:123 +STEP: Creating projection with configMap that has name projected-configmap-test-upd-f82c3a70-36b2-4beb-9c16-7f4169b72bd2 03/07/23 03:38:18.571 +STEP: Creating the pod 03/07/23 03:38:18.575 +Mar 7 03:38:18.580: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-e563c340-5fa4-4e16-a4c3-47407d8964d1" in namespace "projected-7243" to be "running and ready" +Mar 7 03:38:18.583: INFO: Pod "pod-projected-configmaps-e563c340-5fa4-4e16-a4c3-47407d8964d1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.959392ms +Mar 7 03:38:18.583: INFO: The phase of Pod pod-projected-configmaps-e563c340-5fa4-4e16-a4c3-47407d8964d1 is Pending, waiting for it to be Running (with Ready = true) +Mar 7 03:38:20.586: INFO: Pod "pod-projected-configmaps-e563c340-5fa4-4e16-a4c3-47407d8964d1": Phase="Running", Reason="", readiness=true. Elapsed: 2.006185383s +Mar 7 03:38:20.586: INFO: The phase of Pod pod-projected-configmaps-e563c340-5fa4-4e16-a4c3-47407d8964d1 is Running (Ready = true) +Mar 7 03:38:20.586: INFO: Pod "pod-projected-configmaps-e563c340-5fa4-4e16-a4c3-47407d8964d1" satisfied condition "running and ready" +STEP: Updating configmap projected-configmap-test-upd-f82c3a70-36b2-4beb-9c16-7f4169b72bd2 03/07/23 03:38:20.592 +STEP: waiting to observe update in volume 03/07/23 03:38:20.596 +[AfterEach] [sig-storage] Projected configMap + test/e2e/framework/framework.go:187 +Mar 7 03:38:22.612: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "projected-7243" for this suite. 
03/07/23 03:38:22.616 +{"msg":"PASSED [sig-storage] Projected configMap updates should be reflected in volume [NodeConformance] [Conformance]","completed":208,"skipped":3526,"failed":0} +------------------------------ +• [4.070 seconds] +[sig-storage] Projected configMap +test/e2e/common/storage/framework.go:23 + updates should be reflected in volume [NodeConformance] [Conformance] + test/e2e/common/storage/projected_configmap.go:123 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-storage] Projected configMap + test/e2e/framework/framework.go:186 + STEP: Creating a kubernetes client 03/07/23 03:38:18.551 + Mar 7 03:38:18.551: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 + STEP: Building a namespace api object, basename projected 03/07/23 03:38:18.551 + STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 03:38:18.565 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 03:38:18.567 + [It] updates should be reflected in volume [NodeConformance] [Conformance] + test/e2e/common/storage/projected_configmap.go:123 + STEP: Creating projection with configMap that has name projected-configmap-test-upd-f82c3a70-36b2-4beb-9c16-7f4169b72bd2 03/07/23 03:38:18.571 + STEP: Creating the pod 03/07/23 03:38:18.575 + Mar 7 03:38:18.580: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-e563c340-5fa4-4e16-a4c3-47407d8964d1" in namespace "projected-7243" to be "running and ready" + Mar 7 03:38:18.583: INFO: Pod "pod-projected-configmaps-e563c340-5fa4-4e16-a4c3-47407d8964d1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.959392ms + Mar 7 03:38:18.583: INFO: The phase of Pod pod-projected-configmaps-e563c340-5fa4-4e16-a4c3-47407d8964d1 is Pending, waiting for it to be Running (with Ready = true) + Mar 7 03:38:20.586: INFO: Pod "pod-projected-configmaps-e563c340-5fa4-4e16-a4c3-47407d8964d1": Phase="Running", Reason="", readiness=true. Elapsed: 2.006185383s + Mar 7 03:38:20.586: INFO: The phase of Pod pod-projected-configmaps-e563c340-5fa4-4e16-a4c3-47407d8964d1 is Running (Ready = true) + Mar 7 03:38:20.586: INFO: Pod "pod-projected-configmaps-e563c340-5fa4-4e16-a4c3-47407d8964d1" satisfied condition "running and ready" + STEP: Updating configmap projected-configmap-test-upd-f82c3a70-36b2-4beb-9c16-7f4169b72bd2 03/07/23 03:38:20.592 + STEP: waiting to observe update in volume 03/07/23 03:38:20.596 + [AfterEach] [sig-storage] Projected configMap + test/e2e/framework/framework.go:187 + Mar 7 03:38:22.612: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + STEP: Destroying namespace "projected-7243" for this suite. 
03/07/23 03:38:22.616 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + patching/updating a validating webhook should work [Conformance] + test/e2e/apimachinery/webhook.go:412 +[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/framework/framework.go:186 +STEP: Creating a kubernetes client 03/07/23 03:38:22.622 +Mar 7 03:38:22.622: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 +STEP: Building a namespace api object, basename webhook 03/07/23 03:38:22.623 +STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 03:38:22.637 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 03:38:22.639 +[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/apimachinery/webhook.go:89 +STEP: Setting up server cert 03/07/23 03:38:22.653 +STEP: Create role binding to let webhook read extension-apiserver-authentication 03/07/23 03:38:23.131 +STEP: Deploying the webhook pod 03/07/23 03:38:23.179 +STEP: Wait for the deployment to be ready 03/07/23 03:38:23.188 +Mar 7 03:38:23.194: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set +STEP: Deploying the webhook service 03/07/23 03:38:25.202 +STEP: Verifying the service has paired with the endpoint 03/07/23 03:38:25.216 +Mar 7 03:38:26.217: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 +[It] patching/updating a validating webhook should work [Conformance] + test/e2e/apimachinery/webhook.go:412 +STEP: Creating a validating webhook configuration 03/07/23 03:38:26.22 +STEP: Creating a configMap that does not comply to the validation webhook rules 03/07/23 03:38:27.256 +STEP: Updating a validating webhook configuration's rules to not include the create operation 03/07/23 03:38:27.263 +STEP: Creating a configMap that does not comply to the validation webhook rules 03/07/23 03:38:27.271 +STEP: Patching a validating webhook configuration's rules to include the create operation 03/07/23 03:38:27.279 +STEP: Creating a configMap that does not comply to the validation webhook rules 03/07/23 03:38:27.285 +[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/framework/framework.go:187 +Mar 7 03:38:27.292: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "webhook-8097" for this suite. 03/07/23 03:38:27.296 +STEP: Destroying namespace "webhook-8097-markers" for this suite. 
03/07/23 03:38:27.302 +[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/apimachinery/webhook.go:104 +{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance]","completed":209,"skipped":3559,"failed":0} +------------------------------ +• [4.723 seconds] +[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] +test/e2e/apimachinery/framework.go:23 + patching/updating a validating webhook should work [Conformance] + test/e2e/apimachinery/webhook.go:412 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/framework/framework.go:186 + STEP: Creating a kubernetes client 03/07/23 03:38:22.622 + Mar 7 03:38:22.622: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 + STEP: Building a namespace api object, basename webhook 03/07/23 03:38:22.623 + STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 03:38:22.637 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 03:38:22.639 + [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/apimachinery/webhook.go:89 + STEP: Setting up server cert 03/07/23 03:38:22.653 + STEP: Create role binding to let webhook read extension-apiserver-authentication 03/07/23 03:38:23.131 + STEP: Deploying the webhook pod 03/07/23 03:38:23.179 + STEP: Wait for the deployment to be ready 03/07/23 03:38:23.188 + Mar 7 03:38:23.194: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set + STEP: Deploying the webhook service 03/07/23 03:38:25.202 + STEP: Verifying the service has paired with the endpoint 03/07/23 03:38:25.216 + Mar 7 03:38:26.217: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 + [It] patching/updating a validating webhook should work [Conformance] + test/e2e/apimachinery/webhook.go:412 + STEP: Creating a validating webhook configuration 03/07/23 03:38:26.22 + STEP: Creating a configMap that does not comply to the validation webhook rules 03/07/23 03:38:27.256 + STEP: Updating a validating webhook configuration's rules to not include the create operation 03/07/23 03:38:27.263 + STEP: Creating a configMap that does not comply to the validation webhook rules 03/07/23 03:38:27.271 + STEP: Patching a validating webhook configuration's rules to include the create operation 03/07/23 03:38:27.279 + STEP: Creating a configMap that does not comply to the validation webhook rules 03/07/23 03:38:27.285 + [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/framework/framework.go:187 + Mar 7 03:38:27.292: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + STEP: Destroying namespace "webhook-8097" for this suite. 03/07/23 03:38:27.296 + STEP: Destroying namespace "webhook-8097-markers" for this suite. 
03/07/23 03:38:27.302 + [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/apimachinery/webhook.go:104 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSSSSS +------------------------------ +[sig-network] Proxy version v1 + A set of valid responses are returned for both pod and service ProxyWithPath [Conformance] + test/e2e/network/proxy.go:286 +[BeforeEach] version v1 + test/e2e/framework/framework.go:186 +STEP: Creating a kubernetes client 03/07/23 03:38:27.346 +Mar 7 03:38:27.346: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 +STEP: Building a namespace api object, basename proxy 03/07/23 03:38:27.348 +STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 03:38:27.378 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 03:38:27.381 +[It] A set of valid responses are returned for both pod and service ProxyWithPath [Conformance] + test/e2e/network/proxy.go:286 +Mar 7 03:38:27.385: INFO: Creating pod... +Mar 7 03:38:27.405: INFO: Waiting up to 5m0s for pod "agnhost" in namespace "proxy-6137" to be "running" +Mar 7 03:38:27.410: INFO: Pod "agnhost": Phase="Pending", Reason="", readiness=false. Elapsed: 4.688808ms +Mar 7 03:38:29.413: INFO: Pod "agnhost": Phase="Running", Reason="", readiness=true. Elapsed: 2.007707228s +Mar 7 03:38:29.413: INFO: Pod "agnhost" satisfied condition "running" +Mar 7 03:38:29.413: INFO: Creating service... +Mar 7 03:38:29.428: INFO: Starting http.Client for https://10.96.0.1:443/api/v1/namespaces/proxy-6137/pods/agnhost/proxy/some/path/with/DELETE +Mar 7 03:38:29.433: INFO: http.Client request:DELETE | StatusCode:200 | Response:foo | Method:DELETE +Mar 7 03:38:29.433: INFO: Starting http.Client for https://10.96.0.1:443/api/v1/namespaces/proxy-6137/pods/agnhost/proxy/some/path/with/GET +Mar 7 03:38:29.436: INFO: http.Client request:GET | StatusCode:200 | Response:foo | Method:GET +Mar 7 03:38:29.436: INFO: Starting http.Client for https://10.96.0.1:443/api/v1/namespaces/proxy-6137/pods/agnhost/proxy/some/path/with/HEAD +Mar 7 03:38:29.439: INFO: http.Client request:HEAD | StatusCode:200 +Mar 7 03:38:29.439: INFO: Starting http.Client for https://10.96.0.1:443/api/v1/namespaces/proxy-6137/pods/agnhost/proxy/some/path/with/OPTIONS +Mar 7 03:38:29.445: INFO: http.Client request:OPTIONS | StatusCode:200 | Response:foo | Method:OPTIONS +Mar 7 03:38:29.445: INFO: Starting http.Client for https://10.96.0.1:443/api/v1/namespaces/proxy-6137/pods/agnhost/proxy/some/path/with/PATCH +Mar 7 03:38:29.450: INFO: http.Client request:PATCH | StatusCode:200 | Response:foo | Method:PATCH +Mar 7 03:38:29.450: INFO: Starting http.Client for https://10.96.0.1:443/api/v1/namespaces/proxy-6137/pods/agnhost/proxy/some/path/with/POST +Mar 7 03:38:29.456: INFO: http.Client request:POST | StatusCode:200 | Response:foo | Method:POST +Mar 7 03:38:29.456: INFO: Starting http.Client for https://10.96.0.1:443/api/v1/namespaces/proxy-6137/pods/agnhost/proxy/some/path/with/PUT +Mar 7 03:38:29.459: INFO: http.Client request:PUT | StatusCode:200 | Response:foo | Method:PUT +Mar 7 03:38:29.459: INFO: Starting http.Client for https://10.96.0.1:443/api/v1/namespaces/proxy-6137/services/test-service/proxy/some/path/with/DELETE +Mar 7 03:38:29.466: INFO: http.Client request:DELETE | StatusCode:200 | Response:foo | Method:DELETE +Mar 7 03:38:29.466: INFO: Starting http.Client for https://10.96.0.1:443/api/v1/namespaces/proxy-6137/services/test-service/proxy/some/path/with/GET +Mar 7 
03:38:29.471: INFO: http.Client request:GET | StatusCode:200 | Response:foo | Method:GET +Mar 7 03:38:29.471: INFO: Starting http.Client for https://10.96.0.1:443/api/v1/namespaces/proxy-6137/services/test-service/proxy/some/path/with/HEAD +Mar 7 03:38:29.475: INFO: http.Client request:HEAD | StatusCode:200 +Mar 7 03:38:29.475: INFO: Starting http.Client for https://10.96.0.1:443/api/v1/namespaces/proxy-6137/services/test-service/proxy/some/path/with/OPTIONS +Mar 7 03:38:29.481: INFO: http.Client request:OPTIONS | StatusCode:200 | Response:foo | Method:OPTIONS +Mar 7 03:38:29.481: INFO: Starting http.Client for https://10.96.0.1:443/api/v1/namespaces/proxy-6137/services/test-service/proxy/some/path/with/PATCH +Mar 7 03:38:29.495: INFO: http.Client request:PATCH | StatusCode:200 | Response:foo | Method:PATCH +Mar 7 03:38:29.495: INFO: Starting http.Client for https://10.96.0.1:443/api/v1/namespaces/proxy-6137/services/test-service/proxy/some/path/with/POST +Mar 7 03:38:29.499: INFO: http.Client request:POST | StatusCode:200 | Response:foo | Method:POST +Mar 7 03:38:29.499: INFO: Starting http.Client for https://10.96.0.1:443/api/v1/namespaces/proxy-6137/services/test-service/proxy/some/path/with/PUT +Mar 7 03:38:29.503: INFO: http.Client request:PUT | StatusCode:200 | Response:foo | Method:PUT +[AfterEach] version v1 + test/e2e/framework/framework.go:187 +Mar 7 03:38:29.503: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "proxy-6137" for this suite. 03/07/23 03:38:29.51 +{"msg":"PASSED [sig-network] Proxy version v1 A set of valid responses are returned for both pod and service ProxyWithPath [Conformance]","completed":210,"skipped":3573,"failed":0} +------------------------------ +• [2.169 seconds] +[sig-network] Proxy +test/e2e/network/common/framework.go:23 + version v1 + test/e2e/network/proxy.go:74 + A set of valid responses are returned for both pod and service ProxyWithPath [Conformance] + test/e2e/network/proxy.go:286 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] version v1 + test/e2e/framework/framework.go:186 + STEP: Creating a kubernetes client 03/07/23 03:38:27.346 + Mar 7 03:38:27.346: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 + STEP: Building a namespace api object, basename proxy 03/07/23 03:38:27.348 + STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 03:38:27.378 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 03:38:27.381 + [It] A set of valid responses are returned for both pod and service ProxyWithPath [Conformance] + test/e2e/network/proxy.go:286 + Mar 7 03:38:27.385: INFO: Creating pod... + Mar 7 03:38:27.405: INFO: Waiting up to 5m0s for pod "agnhost" in namespace "proxy-6137" to be "running" + Mar 7 03:38:27.410: INFO: Pod "agnhost": Phase="Pending", Reason="", readiness=false. Elapsed: 4.688808ms + Mar 7 03:38:29.413: INFO: Pod "agnhost": Phase="Running", Reason="", readiness=true. Elapsed: 2.007707228s + Mar 7 03:38:29.413: INFO: Pod "agnhost" satisfied condition "running" + Mar 7 03:38:29.413: INFO: Creating service... 
+ Mar 7 03:38:29.428: INFO: Starting http.Client for https://10.96.0.1:443/api/v1/namespaces/proxy-6137/pods/agnhost/proxy/some/path/with/DELETE + Mar 7 03:38:29.433: INFO: http.Client request:DELETE | StatusCode:200 | Response:foo | Method:DELETE + Mar 7 03:38:29.433: INFO: Starting http.Client for https://10.96.0.1:443/api/v1/namespaces/proxy-6137/pods/agnhost/proxy/some/path/with/GET + Mar 7 03:38:29.436: INFO: http.Client request:GET | StatusCode:200 | Response:foo | Method:GET + Mar 7 03:38:29.436: INFO: Starting http.Client for https://10.96.0.1:443/api/v1/namespaces/proxy-6137/pods/agnhost/proxy/some/path/with/HEAD + Mar 7 03:38:29.439: INFO: http.Client request:HEAD | StatusCode:200 + Mar 7 03:38:29.439: INFO: Starting http.Client for https://10.96.0.1:443/api/v1/namespaces/proxy-6137/pods/agnhost/proxy/some/path/with/OPTIONS + Mar 7 03:38:29.445: INFO: http.Client request:OPTIONS | StatusCode:200 | Response:foo | Method:OPTIONS + Mar 7 03:38:29.445: INFO: Starting http.Client for https://10.96.0.1:443/api/v1/namespaces/proxy-6137/pods/agnhost/proxy/some/path/with/PATCH + Mar 7 03:38:29.450: INFO: http.Client request:PATCH | StatusCode:200 | Response:foo | Method:PATCH + Mar 7 03:38:29.450: INFO: Starting http.Client for https://10.96.0.1:443/api/v1/namespaces/proxy-6137/pods/agnhost/proxy/some/path/with/POST + Mar 7 03:38:29.456: INFO: http.Client request:POST | StatusCode:200 | Response:foo | Method:POST + Mar 7 03:38:29.456: INFO: Starting http.Client for https://10.96.0.1:443/api/v1/namespaces/proxy-6137/pods/agnhost/proxy/some/path/with/PUT + Mar 7 03:38:29.459: INFO: http.Client request:PUT | StatusCode:200 | Response:foo | Method:PUT + Mar 7 03:38:29.459: INFO: Starting http.Client for https://10.96.0.1:443/api/v1/namespaces/proxy-6137/services/test-service/proxy/some/path/with/DELETE + Mar 7 03:38:29.466: INFO: http.Client request:DELETE | StatusCode:200 | Response:foo | Method:DELETE + Mar 7 03:38:29.466: INFO: Starting http.Client for https://10.96.0.1:443/api/v1/namespaces/proxy-6137/services/test-service/proxy/some/path/with/GET + Mar 7 03:38:29.471: INFO: http.Client request:GET | StatusCode:200 | Response:foo | Method:GET + Mar 7 03:38:29.471: INFO: Starting http.Client for https://10.96.0.1:443/api/v1/namespaces/proxy-6137/services/test-service/proxy/some/path/with/HEAD + Mar 7 03:38:29.475: INFO: http.Client request:HEAD | StatusCode:200 + Mar 7 03:38:29.475: INFO: Starting http.Client for https://10.96.0.1:443/api/v1/namespaces/proxy-6137/services/test-service/proxy/some/path/with/OPTIONS + Mar 7 03:38:29.481: INFO: http.Client request:OPTIONS | StatusCode:200 | Response:foo | Method:OPTIONS + Mar 7 03:38:29.481: INFO: Starting http.Client for https://10.96.0.1:443/api/v1/namespaces/proxy-6137/services/test-service/proxy/some/path/with/PATCH + Mar 7 03:38:29.495: INFO: http.Client request:PATCH | StatusCode:200 | Response:foo | Method:PATCH + Mar 7 03:38:29.495: INFO: Starting http.Client for https://10.96.0.1:443/api/v1/namespaces/proxy-6137/services/test-service/proxy/some/path/with/POST + Mar 7 03:38:29.499: INFO: http.Client request:POST | StatusCode:200 | Response:foo | Method:POST + Mar 7 03:38:29.499: INFO: Starting http.Client for https://10.96.0.1:443/api/v1/namespaces/proxy-6137/services/test-service/proxy/some/path/with/PUT + Mar 7 03:38:29.503: INFO: http.Client request:PUT | StatusCode:200 | Response:foo | Method:PUT + [AfterEach] version v1 + test/e2e/framework/framework.go:187 + Mar 7 03:38:29.503: INFO: Waiting up to 3m0s for all (but 0) nodes to be 
ready + STEP: Destroying namespace "proxy-6137" for this suite. 03/07/23 03:38:29.51 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-storage] Subpath Atomic writer volumes + should support subpaths with configmap pod with mountPath of existing file [Conformance] + test/e2e/storage/subpath.go:80 +[BeforeEach] [sig-storage] Subpath + test/e2e/framework/framework.go:186 +STEP: Creating a kubernetes client 03/07/23 03:38:29.516 +Mar 7 03:38:29.516: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 +STEP: Building a namespace api object, basename subpath 03/07/23 03:38:29.517 +STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 03:38:29.53 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 03:38:29.532 +[BeforeEach] Atomic writer volumes + test/e2e/storage/subpath.go:40 +STEP: Setting up data 03/07/23 03:38:29.533 +[It] should support subpaths with configmap pod with mountPath of existing file [Conformance] + test/e2e/storage/subpath.go:80 +STEP: Creating pod pod-subpath-test-configmap-kgpl 03/07/23 03:38:29.541 +STEP: Creating a pod to test atomic-volume-subpath 03/07/23 03:38:29.541 +Mar 7 03:38:29.547: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-kgpl" in namespace "subpath-4991" to be "Succeeded or Failed" +Mar 7 03:38:29.549: INFO: Pod "pod-subpath-test-configmap-kgpl": Phase="Pending", Reason="", readiness=false. Elapsed: 2.453618ms +Mar 7 03:38:31.553: INFO: Pod "pod-subpath-test-configmap-kgpl": Phase="Running", Reason="", readiness=true. Elapsed: 2.006503981s +Mar 7 03:38:33.553: INFO: Pod "pod-subpath-test-configmap-kgpl": Phase="Running", Reason="", readiness=true. Elapsed: 4.006508375s +Mar 7 03:38:35.552: INFO: Pod "pod-subpath-test-configmap-kgpl": Phase="Running", Reason="", readiness=true. Elapsed: 6.005881206s +Mar 7 03:38:37.553: INFO: Pod "pod-subpath-test-configmap-kgpl": Phase="Running", Reason="", readiness=true. Elapsed: 8.006390573s +Mar 7 03:38:39.552: INFO: Pod "pod-subpath-test-configmap-kgpl": Phase="Running", Reason="", readiness=true. Elapsed: 10.005734205s +Mar 7 03:38:41.552: INFO: Pod "pod-subpath-test-configmap-kgpl": Phase="Running", Reason="", readiness=true. Elapsed: 12.005595622s +Mar 7 03:38:43.553: INFO: Pod "pod-subpath-test-configmap-kgpl": Phase="Running", Reason="", readiness=true. Elapsed: 14.006634652s +Mar 7 03:38:45.552: INFO: Pod "pod-subpath-test-configmap-kgpl": Phase="Running", Reason="", readiness=true. Elapsed: 16.005565445s +Mar 7 03:38:47.554: INFO: Pod "pod-subpath-test-configmap-kgpl": Phase="Running", Reason="", readiness=true. Elapsed: 18.007455755s +Mar 7 03:38:49.553: INFO: Pod "pod-subpath-test-configmap-kgpl": Phase="Running", Reason="", readiness=true. Elapsed: 20.006210044s +Mar 7 03:38:51.553: INFO: Pod "pod-subpath-test-configmap-kgpl": Phase="Running", Reason="", readiness=false. Elapsed: 22.006666262s +Mar 7 03:38:53.554: INFO: Pod "pod-subpath-test-configmap-kgpl": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 24.007352716s +STEP: Saw pod success 03/07/23 03:38:53.554 +Mar 7 03:38:53.554: INFO: Pod "pod-subpath-test-configmap-kgpl" satisfied condition "Succeeded or Failed" +Mar 7 03:38:53.556: INFO: Trying to get logs from node node-2 pod pod-subpath-test-configmap-kgpl container test-container-subpath-configmap-kgpl: +STEP: delete the pod 03/07/23 03:38:53.561 +Mar 7 03:38:53.571: INFO: Waiting for pod pod-subpath-test-configmap-kgpl to disappear +Mar 7 03:38:53.573: INFO: Pod pod-subpath-test-configmap-kgpl no longer exists +STEP: Deleting pod pod-subpath-test-configmap-kgpl 03/07/23 03:38:53.573 +Mar 7 03:38:53.573: INFO: Deleting pod "pod-subpath-test-configmap-kgpl" in namespace "subpath-4991" +[AfterEach] [sig-storage] Subpath + test/e2e/framework/framework.go:187 +Mar 7 03:38:53.575: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "subpath-4991" for this suite. 03/07/23 03:38:53.578 +{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod with mountPath of existing file [Conformance]","completed":211,"skipped":3593,"failed":0} +------------------------------ +• [SLOW TEST] [24.066 seconds] +[sig-storage] Subpath +test/e2e/storage/utils/framework.go:23 + Atomic writer volumes + test/e2e/storage/subpath.go:36 + should support subpaths with configmap pod with mountPath of existing file [Conformance] + test/e2e/storage/subpath.go:80 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-storage] Subpath + test/e2e/framework/framework.go:186 + STEP: Creating a kubernetes client 03/07/23 03:38:29.516 + Mar 7 03:38:29.516: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 + STEP: Building a namespace api object, basename subpath 03/07/23 03:38:29.517 + STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 03:38:29.53 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 03:38:29.532 + [BeforeEach] Atomic writer volumes + test/e2e/storage/subpath.go:40 + STEP: Setting up data 03/07/23 03:38:29.533 + [It] should support subpaths with configmap pod with mountPath of existing file [Conformance] + test/e2e/storage/subpath.go:80 + STEP: Creating pod pod-subpath-test-configmap-kgpl 03/07/23 03:38:29.541 + STEP: Creating a pod to test atomic-volume-subpath 03/07/23 03:38:29.541 + Mar 7 03:38:29.547: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-kgpl" in namespace "subpath-4991" to be "Succeeded or Failed" + Mar 7 03:38:29.549: INFO: Pod "pod-subpath-test-configmap-kgpl": Phase="Pending", Reason="", readiness=false. Elapsed: 2.453618ms + Mar 7 03:38:31.553: INFO: Pod "pod-subpath-test-configmap-kgpl": Phase="Running", Reason="", readiness=true. Elapsed: 2.006503981s + Mar 7 03:38:33.553: INFO: Pod "pod-subpath-test-configmap-kgpl": Phase="Running", Reason="", readiness=true. Elapsed: 4.006508375s + Mar 7 03:38:35.552: INFO: Pod "pod-subpath-test-configmap-kgpl": Phase="Running", Reason="", readiness=true. Elapsed: 6.005881206s + Mar 7 03:38:37.553: INFO: Pod "pod-subpath-test-configmap-kgpl": Phase="Running", Reason="", readiness=true. Elapsed: 8.006390573s + Mar 7 03:38:39.552: INFO: Pod "pod-subpath-test-configmap-kgpl": Phase="Running", Reason="", readiness=true. Elapsed: 10.005734205s + Mar 7 03:38:41.552: INFO: Pod "pod-subpath-test-configmap-kgpl": Phase="Running", Reason="", readiness=true. Elapsed: 12.005595622s + Mar 7 03:38:43.553: INFO: Pod "pod-subpath-test-configmap-kgpl": Phase="Running", Reason="", readiness=true. 
Elapsed: 14.006634652s + Mar 7 03:38:45.552: INFO: Pod "pod-subpath-test-configmap-kgpl": Phase="Running", Reason="", readiness=true. Elapsed: 16.005565445s + Mar 7 03:38:47.554: INFO: Pod "pod-subpath-test-configmap-kgpl": Phase="Running", Reason="", readiness=true. Elapsed: 18.007455755s + Mar 7 03:38:49.553: INFO: Pod "pod-subpath-test-configmap-kgpl": Phase="Running", Reason="", readiness=true. Elapsed: 20.006210044s + Mar 7 03:38:51.553: INFO: Pod "pod-subpath-test-configmap-kgpl": Phase="Running", Reason="", readiness=false. Elapsed: 22.006666262s + Mar 7 03:38:53.554: INFO: Pod "pod-subpath-test-configmap-kgpl": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.007352716s + STEP: Saw pod success 03/07/23 03:38:53.554 + Mar 7 03:38:53.554: INFO: Pod "pod-subpath-test-configmap-kgpl" satisfied condition "Succeeded or Failed" + Mar 7 03:38:53.556: INFO: Trying to get logs from node node-2 pod pod-subpath-test-configmap-kgpl container test-container-subpath-configmap-kgpl: + STEP: delete the pod 03/07/23 03:38:53.561 + Mar 7 03:38:53.571: INFO: Waiting for pod pod-subpath-test-configmap-kgpl to disappear + Mar 7 03:38:53.573: INFO: Pod pod-subpath-test-configmap-kgpl no longer exists + STEP: Deleting pod pod-subpath-test-configmap-kgpl 03/07/23 03:38:53.573 + Mar 7 03:38:53.573: INFO: Deleting pod "pod-subpath-test-configmap-kgpl" in namespace "subpath-4991" + [AfterEach] [sig-storage] Subpath + test/e2e/framework/framework.go:187 + Mar 7 03:38:53.575: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + STEP: Destroying namespace "subpath-4991" for this suite. 03/07/23 03:38:53.578 + << End Captured GinkgoWriter Output +------------------------------ +SSS +------------------------------ +[sig-network] Networking Granular Checks: Pods + should function for intra-pod communication: udp [NodeConformance] [Conformance] + test/e2e/common/network/networking.go:93 +[BeforeEach] [sig-network] Networking + test/e2e/framework/framework.go:186 +STEP: Creating a kubernetes client 03/07/23 03:38:53.583 +Mar 7 03:38:53.583: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 +STEP: Building a namespace api object, basename pod-network-test 03/07/23 03:38:53.584 +STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 03:38:53.601 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 03:38:53.604 +[It] should function for intra-pod communication: udp [NodeConformance] [Conformance] + test/e2e/common/network/networking.go:93 +STEP: Performing setup for networking test in namespace pod-network-test-4655 03/07/23 03:38:53.606 +STEP: creating a selector 03/07/23 03:38:53.606 +STEP: Creating the service pods in kubernetes 03/07/23 03:38:53.606 +Mar 7 03:38:53.606: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable +Mar 7 03:38:53.642: INFO: Waiting up to 5m0s for pod "netserver-0" in namespace "pod-network-test-4655" to be "running and ready" +Mar 7 03:38:53.657: INFO: Pod "netserver-0": Phase="Pending", Reason="", readiness=false. Elapsed: 14.986232ms +Mar 7 03:38:53.657: INFO: The phase of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) +Mar 7 03:38:55.660: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 2.018176936s +Mar 7 03:38:55.660: INFO: The phase of Pod netserver-0 is Running (Ready = false) +Mar 7 03:38:57.661: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. 
Elapsed: 4.019340988s +Mar 7 03:38:57.661: INFO: The phase of Pod netserver-0 is Running (Ready = false) +Mar 7 03:38:59.661: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 6.018487297s +Mar 7 03:38:59.661: INFO: The phase of Pod netserver-0 is Running (Ready = false) +Mar 7 03:39:01.660: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 8.017868447s +Mar 7 03:39:01.660: INFO: The phase of Pod netserver-0 is Running (Ready = false) +Mar 7 03:39:03.663: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 10.02052658s +Mar 7 03:39:03.663: INFO: The phase of Pod netserver-0 is Running (Ready = false) +Mar 7 03:39:05.660: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=true. Elapsed: 12.017913457s +Mar 7 03:39:05.660: INFO: The phase of Pod netserver-0 is Running (Ready = true) +Mar 7 03:39:05.660: INFO: Pod "netserver-0" satisfied condition "running and ready" +Mar 7 03:39:05.663: INFO: Waiting up to 5m0s for pod "netserver-1" in namespace "pod-network-test-4655" to be "running and ready" +Mar 7 03:39:05.665: INFO: Pod "netserver-1": Phase="Running", Reason="", readiness=false. Elapsed: 2.230937ms +Mar 7 03:39:05.665: INFO: The phase of Pod netserver-1 is Running (Ready = false) +Mar 7 03:39:07.669: INFO: Pod "netserver-1": Phase="Running", Reason="", readiness=false. Elapsed: 2.006014137s +Mar 7 03:39:07.669: INFO: The phase of Pod netserver-1 is Running (Ready = false) +Mar 7 03:39:09.669: INFO: Pod "netserver-1": Phase="Running", Reason="", readiness=false. Elapsed: 4.006212589s +Mar 7 03:39:09.669: INFO: The phase of Pod netserver-1 is Running (Ready = false) +Mar 7 03:39:11.668: INFO: Pod "netserver-1": Phase="Running", Reason="", readiness=false. Elapsed: 6.005913489s +Mar 7 03:39:11.668: INFO: The phase of Pod netserver-1 is Running (Ready = false) +Mar 7 03:39:13.670: INFO: Pod "netserver-1": Phase="Running", Reason="", readiness=false. Elapsed: 8.007459065s +Mar 7 03:39:13.670: INFO: The phase of Pod netserver-1 is Running (Ready = false) +Mar 7 03:39:15.670: INFO: Pod "netserver-1": Phase="Running", Reason="", readiness=true. Elapsed: 10.007031178s +Mar 7 03:39:15.670: INFO: The phase of Pod netserver-1 is Running (Ready = true) +Mar 7 03:39:15.670: INFO: Pod "netserver-1" satisfied condition "running and ready" +Mar 7 03:39:15.672: INFO: Waiting up to 5m0s for pod "netserver-2" in namespace "pod-network-test-4655" to be "running and ready" +Mar 7 03:39:15.674: INFO: Pod "netserver-2": Phase="Running", Reason="", readiness=true. Elapsed: 2.243328ms +Mar 7 03:39:15.674: INFO: The phase of Pod netserver-2 is Running (Ready = true) +Mar 7 03:39:15.674: INFO: Pod "netserver-2" satisfied condition "running and ready" +STEP: Creating test pods 03/07/23 03:39:15.676 +Mar 7 03:39:15.679: INFO: Waiting up to 5m0s for pod "test-container-pod" in namespace "pod-network-test-4655" to be "running" +Mar 7 03:39:15.682: INFO: Pod "test-container-pod": Phase="Pending", Reason="", readiness=false. Elapsed: 2.360057ms +Mar 7 03:39:17.686: INFO: Pod "test-container-pod": Phase="Running", Reason="", readiness=true. Elapsed: 2.006178431s +Mar 7 03:39:17.686: INFO: Pod "test-container-pod" satisfied condition "running" +Mar 7 03:39:17.688: INFO: Setting MaxTries for pod polling to 39 for networking test based on endpoint count 3 +Mar 7 03:39:17.688: INFO: Breadth first check of 10.233.132.127 on host 192.168.1.100... 
+Mar 7 03:39:17.690: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.233.247.38:9080/dial?request=hostname&protocol=udp&host=10.233.132.127&port=8081&tries=1'] Namespace:pod-network-test-4655 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} +Mar 7 03:39:17.690: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 +Mar 7 03:39:17.690: INFO: ExecWithOptions: Clientset creation +Mar 7 03:39:17.690: INFO: ExecWithOptions: execute(POST https://10.96.0.1:443/api/v1/namespaces/pod-network-test-4655/pods/test-container-pod/exec?command=%2Fbin%2Fsh&command=-c&command=curl+-g+-q+-s+%27http%3A%2F%2F10.233.247.38%3A9080%2Fdial%3Frequest%3Dhostname%26protocol%3Dudp%26host%3D10.233.132.127%26port%3D8081%26tries%3D1%27&container=webserver&container=webserver&stderr=true&stdout=true) +Mar 7 03:39:17.749: INFO: Waiting for responses: map[] +Mar 7 03:39:17.749: INFO: reached 10.233.132.127 after 0/1 tries +Mar 7 03:39:17.749: INFO: Breadth first check of 10.233.84.163 on host 192.168.1.101... +Mar 7 03:39:17.756: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.233.247.38:9080/dial?request=hostname&protocol=udp&host=10.233.84.163&port=8081&tries=1'] Namespace:pod-network-test-4655 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} +Mar 7 03:39:17.756: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 +Mar 7 03:39:17.757: INFO: ExecWithOptions: Clientset creation +Mar 7 03:39:17.757: INFO: ExecWithOptions: execute(POST https://10.96.0.1:443/api/v1/namespaces/pod-network-test-4655/pods/test-container-pod/exec?command=%2Fbin%2Fsh&command=-c&command=curl+-g+-q+-s+%27http%3A%2F%2F10.233.247.38%3A9080%2Fdial%3Frequest%3Dhostname%26protocol%3Dudp%26host%3D10.233.84.163%26port%3D8081%26tries%3D1%27&container=webserver&container=webserver&stderr=true&stdout=true) +Mar 7 03:39:17.805: INFO: Waiting for responses: map[] +Mar 7 03:39:17.805: INFO: reached 10.233.84.163 after 0/1 tries +Mar 7 03:39:17.805: INFO: Breadth first check of 10.233.247.50 on host 192.168.1.102... +Mar 7 03:39:17.808: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.233.247.38:9080/dial?request=hostname&protocol=udp&host=10.233.247.50&port=8081&tries=1'] Namespace:pod-network-test-4655 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} +Mar 7 03:39:17.808: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 +Mar 7 03:39:17.808: INFO: ExecWithOptions: Clientset creation +Mar 7 03:39:17.808: INFO: ExecWithOptions: execute(POST https://10.96.0.1:443/api/v1/namespaces/pod-network-test-4655/pods/test-container-pod/exec?command=%2Fbin%2Fsh&command=-c&command=curl+-g+-q+-s+%27http%3A%2F%2F10.233.247.38%3A9080%2Fdial%3Frequest%3Dhostname%26protocol%3Dudp%26host%3D10.233.247.50%26port%3D8081%26tries%3D1%27&container=webserver&container=webserver&stderr=true&stdout=true) +Mar 7 03:39:17.863: INFO: Waiting for responses: map[] +Mar 7 03:39:17.863: INFO: reached 10.233.247.50 after 0/1 tries +Mar 7 03:39:17.863: INFO: Going to retry 0 out of 3 pods.... +[AfterEach] [sig-network] Networking + test/e2e/framework/framework.go:187 +Mar 7 03:39:17.863: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "pod-network-test-4655" for this suite. 
03/07/23 03:39:17.867 +{"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [NodeConformance] [Conformance]","completed":212,"skipped":3596,"failed":0} +------------------------------ +• [SLOW TEST] [24.288 seconds] +[sig-network] Networking +test/e2e/common/network/framework.go:23 + Granular Checks: Pods + test/e2e/common/network/networking.go:32 + should function for intra-pod communication: udp [NodeConformance] [Conformance] + test/e2e/common/network/networking.go:93 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-network] Networking + test/e2e/framework/framework.go:186 + STEP: Creating a kubernetes client 03/07/23 03:38:53.583 + Mar 7 03:38:53.583: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 + STEP: Building a namespace api object, basename pod-network-test 03/07/23 03:38:53.584 + STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 03:38:53.601 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 03:38:53.604 + [It] should function for intra-pod communication: udp [NodeConformance] [Conformance] + test/e2e/common/network/networking.go:93 + STEP: Performing setup for networking test in namespace pod-network-test-4655 03/07/23 03:38:53.606 + STEP: creating a selector 03/07/23 03:38:53.606 + STEP: Creating the service pods in kubernetes 03/07/23 03:38:53.606 + Mar 7 03:38:53.606: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable + Mar 7 03:38:53.642: INFO: Waiting up to 5m0s for pod "netserver-0" in namespace "pod-network-test-4655" to be "running and ready" + Mar 7 03:38:53.657: INFO: Pod "netserver-0": Phase="Pending", Reason="", readiness=false. Elapsed: 14.986232ms + Mar 7 03:38:53.657: INFO: The phase of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) + Mar 7 03:38:55.660: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 2.018176936s + Mar 7 03:38:55.660: INFO: The phase of Pod netserver-0 is Running (Ready = false) + Mar 7 03:38:57.661: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 4.019340988s + Mar 7 03:38:57.661: INFO: The phase of Pod netserver-0 is Running (Ready = false) + Mar 7 03:38:59.661: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 6.018487297s + Mar 7 03:38:59.661: INFO: The phase of Pod netserver-0 is Running (Ready = false) + Mar 7 03:39:01.660: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 8.017868447s + Mar 7 03:39:01.660: INFO: The phase of Pod netserver-0 is Running (Ready = false) + Mar 7 03:39:03.663: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 10.02052658s + Mar 7 03:39:03.663: INFO: The phase of Pod netserver-0 is Running (Ready = false) + Mar 7 03:39:05.660: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=true. Elapsed: 12.017913457s + Mar 7 03:39:05.660: INFO: The phase of Pod netserver-0 is Running (Ready = true) + Mar 7 03:39:05.660: INFO: Pod "netserver-0" satisfied condition "running and ready" + Mar 7 03:39:05.663: INFO: Waiting up to 5m0s for pod "netserver-1" in namespace "pod-network-test-4655" to be "running and ready" + Mar 7 03:39:05.665: INFO: Pod "netserver-1": Phase="Running", Reason="", readiness=false. 
Elapsed: 2.230937ms + Mar 7 03:39:05.665: INFO: The phase of Pod netserver-1 is Running (Ready = false) + Mar 7 03:39:07.669: INFO: Pod "netserver-1": Phase="Running", Reason="", readiness=false. Elapsed: 2.006014137s + Mar 7 03:39:07.669: INFO: The phase of Pod netserver-1 is Running (Ready = false) + Mar 7 03:39:09.669: INFO: Pod "netserver-1": Phase="Running", Reason="", readiness=false. Elapsed: 4.006212589s + Mar 7 03:39:09.669: INFO: The phase of Pod netserver-1 is Running (Ready = false) + Mar 7 03:39:11.668: INFO: Pod "netserver-1": Phase="Running", Reason="", readiness=false. Elapsed: 6.005913489s + Mar 7 03:39:11.668: INFO: The phase of Pod netserver-1 is Running (Ready = false) + Mar 7 03:39:13.670: INFO: Pod "netserver-1": Phase="Running", Reason="", readiness=false. Elapsed: 8.007459065s + Mar 7 03:39:13.670: INFO: The phase of Pod netserver-1 is Running (Ready = false) + Mar 7 03:39:15.670: INFO: Pod "netserver-1": Phase="Running", Reason="", readiness=true. Elapsed: 10.007031178s + Mar 7 03:39:15.670: INFO: The phase of Pod netserver-1 is Running (Ready = true) + Mar 7 03:39:15.670: INFO: Pod "netserver-1" satisfied condition "running and ready" + Mar 7 03:39:15.672: INFO: Waiting up to 5m0s for pod "netserver-2" in namespace "pod-network-test-4655" to be "running and ready" + Mar 7 03:39:15.674: INFO: Pod "netserver-2": Phase="Running", Reason="", readiness=true. Elapsed: 2.243328ms + Mar 7 03:39:15.674: INFO: The phase of Pod netserver-2 is Running (Ready = true) + Mar 7 03:39:15.674: INFO: Pod "netserver-2" satisfied condition "running and ready" + STEP: Creating test pods 03/07/23 03:39:15.676 + Mar 7 03:39:15.679: INFO: Waiting up to 5m0s for pod "test-container-pod" in namespace "pod-network-test-4655" to be "running" + Mar 7 03:39:15.682: INFO: Pod "test-container-pod": Phase="Pending", Reason="", readiness=false. Elapsed: 2.360057ms + Mar 7 03:39:17.686: INFO: Pod "test-container-pod": Phase="Running", Reason="", readiness=true. Elapsed: 2.006178431s + Mar 7 03:39:17.686: INFO: Pod "test-container-pod" satisfied condition "running" + Mar 7 03:39:17.688: INFO: Setting MaxTries for pod polling to 39 for networking test based on endpoint count 3 + Mar 7 03:39:17.688: INFO: Breadth first check of 10.233.132.127 on host 192.168.1.100... + Mar 7 03:39:17.690: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.233.247.38:9080/dial?request=hostname&protocol=udp&host=10.233.132.127&port=8081&tries=1'] Namespace:pod-network-test-4655 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} + Mar 7 03:39:17.690: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 + Mar 7 03:39:17.690: INFO: ExecWithOptions: Clientset creation + Mar 7 03:39:17.690: INFO: ExecWithOptions: execute(POST https://10.96.0.1:443/api/v1/namespaces/pod-network-test-4655/pods/test-container-pod/exec?command=%2Fbin%2Fsh&command=-c&command=curl+-g+-q+-s+%27http%3A%2F%2F10.233.247.38%3A9080%2Fdial%3Frequest%3Dhostname%26protocol%3Dudp%26host%3D10.233.132.127%26port%3D8081%26tries%3D1%27&container=webserver&container=webserver&stderr=true&stdout=true) + Mar 7 03:39:17.749: INFO: Waiting for responses: map[] + Mar 7 03:39:17.749: INFO: reached 10.233.132.127 after 0/1 tries + Mar 7 03:39:17.749: INFO: Breadth first check of 10.233.84.163 on host 192.168.1.101... 
+ Mar 7 03:39:17.756: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.233.247.38:9080/dial?request=hostname&protocol=udp&host=10.233.84.163&port=8081&tries=1'] Namespace:pod-network-test-4655 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} + Mar 7 03:39:17.756: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 + Mar 7 03:39:17.757: INFO: ExecWithOptions: Clientset creation + Mar 7 03:39:17.757: INFO: ExecWithOptions: execute(POST https://10.96.0.1:443/api/v1/namespaces/pod-network-test-4655/pods/test-container-pod/exec?command=%2Fbin%2Fsh&command=-c&command=curl+-g+-q+-s+%27http%3A%2F%2F10.233.247.38%3A9080%2Fdial%3Frequest%3Dhostname%26protocol%3Dudp%26host%3D10.233.84.163%26port%3D8081%26tries%3D1%27&container=webserver&container=webserver&stderr=true&stdout=true) + Mar 7 03:39:17.805: INFO: Waiting for responses: map[] + Mar 7 03:39:17.805: INFO: reached 10.233.84.163 after 0/1 tries + Mar 7 03:39:17.805: INFO: Breadth first check of 10.233.247.50 on host 192.168.1.102... + Mar 7 03:39:17.808: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.233.247.38:9080/dial?request=hostname&protocol=udp&host=10.233.247.50&port=8081&tries=1'] Namespace:pod-network-test-4655 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} + Mar 7 03:39:17.808: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 + Mar 7 03:39:17.808: INFO: ExecWithOptions: Clientset creation + Mar 7 03:39:17.808: INFO: ExecWithOptions: execute(POST https://10.96.0.1:443/api/v1/namespaces/pod-network-test-4655/pods/test-container-pod/exec?command=%2Fbin%2Fsh&command=-c&command=curl+-g+-q+-s+%27http%3A%2F%2F10.233.247.38%3A9080%2Fdial%3Frequest%3Dhostname%26protocol%3Dudp%26host%3D10.233.247.50%26port%3D8081%26tries%3D1%27&container=webserver&container=webserver&stderr=true&stdout=true) + Mar 7 03:39:17.863: INFO: Waiting for responses: map[] + Mar 7 03:39:17.863: INFO: reached 10.233.247.50 after 0/1 tries + Mar 7 03:39:17.863: INFO: Going to retry 0 out of 3 pods.... + [AfterEach] [sig-network] Networking + test/e2e/framework/framework.go:187 + Mar 7 03:39:17.863: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + STEP: Destroying namespace "pod-network-test-4655" for this suite. 
03/07/23 03:39:17.867 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSS +------------------------------ +[sig-network] Services + should have session affinity work for NodePort service [LinuxOnly] [Conformance] + test/e2e/network/service.go:2204 +[BeforeEach] [sig-network] Services + test/e2e/framework/framework.go:186 +STEP: Creating a kubernetes client 03/07/23 03:39:17.872 +Mar 7 03:39:17.872: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 +STEP: Building a namespace api object, basename services 03/07/23 03:39:17.873 +STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 03:39:17.885 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 03:39:17.887 +[BeforeEach] [sig-network] Services + test/e2e/network/service.go:758 +[It] should have session affinity work for NodePort service [LinuxOnly] [Conformance] + test/e2e/network/service.go:2204 +STEP: creating service in namespace services-6397 03/07/23 03:39:17.889 +STEP: creating service affinity-nodeport in namespace services-6397 03/07/23 03:39:17.889 +STEP: creating replication controller affinity-nodeport in namespace services-6397 03/07/23 03:39:17.905 +I0307 03:39:17.912507 22 runners.go:193] Created replication controller with name: affinity-nodeport, namespace: services-6397, replica count: 3 +I0307 03:39:20.963743 22 runners.go:193] affinity-nodeport Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady +Mar 7 03:39:20.971: INFO: Creating new exec pod +Mar 7 03:39:20.994: INFO: Waiting up to 5m0s for pod "execpod-affinityx69pp" in namespace "services-6397" to be "running" +Mar 7 03:39:20.998: INFO: Pod "execpod-affinityx69pp": Phase="Pending", Reason="", readiness=false. Elapsed: 3.09857ms +Mar 7 03:39:23.001: INFO: Pod "execpod-affinityx69pp": Phase="Running", Reason="", readiness=true. 
Elapsed: 2.006306476s +Mar 7 03:39:23.001: INFO: Pod "execpod-affinityx69pp" satisfied condition "running" +Mar 7 03:39:24.005: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1093879902 --namespace=services-6397 exec execpod-affinityx69pp -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport 80' +Mar 7 03:39:24.206: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 affinity-nodeport 80\nConnection to affinity-nodeport 80 port [tcp/http] succeeded!\n" +Mar 7 03:39:24.206: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request" +Mar 7 03:39:24.206: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1093879902 --namespace=services-6397 exec execpod-affinityx69pp -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.110.207.153 80' +Mar 7 03:39:24.390: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 10.110.207.153 80\nConnection to 10.110.207.153 80 port [tcp/http] succeeded!\n" +Mar 7 03:39:24.390: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request" +Mar 7 03:39:24.390: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1093879902 --namespace=services-6397 exec execpod-affinityx69pp -- /bin/sh -x -c echo hostName | nc -v -t -w 2 192.168.1.102 32671' +Mar 7 03:39:24.579: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 192.168.1.102 32671\nConnection to 192.168.1.102 32671 port [tcp/*] succeeded!\n" +Mar 7 03:39:24.579: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request" +Mar 7 03:39:24.579: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1093879902 --namespace=services-6397 exec execpod-affinityx69pp -- /bin/sh -x -c echo hostName | nc -v -t -w 2 192.168.1.100 32671' +Mar 7 03:39:24.759: INFO: stderr: "+ + ncecho -v hostName -t\n -w 2 192.168.1.100 32671\nConnection to 192.168.1.100 32671 port [tcp/*] succeeded!\n" +Mar 7 03:39:24.759: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request" +Mar 7 03:39:24.759: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1093879902 --namespace=services-6397 exec execpod-affinityx69pp -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://192.168.1.100:32671/ ; done' +Mar 7 03:39:25.002: INFO: stderr: "+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://192.168.1.100:32671/\n+ echo\n+ curl -q -s --connect-timeout 2 http://192.168.1.100:32671/\n+ echo\n+ curl -q -s --connect-timeout 2 http://192.168.1.100:32671/\n+ echo\n+ curl -q -s --connect-timeout 2 http://192.168.1.100:32671/\n+ echo\n+ curl -q -s --connect-timeout 2 http://192.168.1.100:32671/\n+ echo\n+ curl -q -s --connect-timeout 2 http://192.168.1.100:32671/\n+ echo\n+ curl -q -s --connect-timeout 2 http://192.168.1.100:32671/\n+ echo\n+ curl -q -s --connect-timeout 2 http://192.168.1.100:32671/\n+ echo\n+ curl -q -s --connect-timeout 2 http://192.168.1.100:32671/\n+ echo\n+ curl -q -s --connect-timeout 2 http://192.168.1.100:32671/\n+ echo\n+ curl -q -s --connect-timeout 2 http://192.168.1.100:32671/\n+ echo\n+ curl -q -s --connect-timeout 2 http://192.168.1.100:32671/\n+ echo\n+ curl -q -s --connect-timeout 2 http://192.168.1.100:32671/\n+ echo\n+ curl -q -s --connect-timeout 2 http://192.168.1.100:32671/\n+ echo\n+ curl -q -s --connect-timeout 2 
http://192.168.1.100:32671/\n+ echo\n+ curl -q -s --connect-timeout 2 http://192.168.1.100:32671/\n" +Mar 7 03:39:25.002: INFO: stdout: "\naffinity-nodeport-4tqzx\naffinity-nodeport-4tqzx\naffinity-nodeport-4tqzx\naffinity-nodeport-4tqzx\naffinity-nodeport-4tqzx\naffinity-nodeport-4tqzx\naffinity-nodeport-4tqzx\naffinity-nodeport-4tqzx\naffinity-nodeport-4tqzx\naffinity-nodeport-4tqzx\naffinity-nodeport-4tqzx\naffinity-nodeport-4tqzx\naffinity-nodeport-4tqzx\naffinity-nodeport-4tqzx\naffinity-nodeport-4tqzx\naffinity-nodeport-4tqzx" +Mar 7 03:39:25.002: INFO: Received response from host: affinity-nodeport-4tqzx +Mar 7 03:39:25.002: INFO: Received response from host: affinity-nodeport-4tqzx +Mar 7 03:39:25.002: INFO: Received response from host: affinity-nodeport-4tqzx +Mar 7 03:39:25.002: INFO: Received response from host: affinity-nodeport-4tqzx +Mar 7 03:39:25.002: INFO: Received response from host: affinity-nodeport-4tqzx +Mar 7 03:39:25.002: INFO: Received response from host: affinity-nodeport-4tqzx +Mar 7 03:39:25.002: INFO: Received response from host: affinity-nodeport-4tqzx +Mar 7 03:39:25.002: INFO: Received response from host: affinity-nodeport-4tqzx +Mar 7 03:39:25.002: INFO: Received response from host: affinity-nodeport-4tqzx +Mar 7 03:39:25.002: INFO: Received response from host: affinity-nodeport-4tqzx +Mar 7 03:39:25.002: INFO: Received response from host: affinity-nodeport-4tqzx +Mar 7 03:39:25.002: INFO: Received response from host: affinity-nodeport-4tqzx +Mar 7 03:39:25.002: INFO: Received response from host: affinity-nodeport-4tqzx +Mar 7 03:39:25.002: INFO: Received response from host: affinity-nodeport-4tqzx +Mar 7 03:39:25.002: INFO: Received response from host: affinity-nodeport-4tqzx +Mar 7 03:39:25.002: INFO: Received response from host: affinity-nodeport-4tqzx +Mar 7 03:39:25.002: INFO: Cleaning up the exec pod +STEP: deleting ReplicationController affinity-nodeport in namespace services-6397, will wait for the garbage collector to delete the pods 03/07/23 03:39:25.013 +Mar 7 03:39:25.074: INFO: Deleting ReplicationController affinity-nodeport took: 5.115004ms +Mar 7 03:39:25.175: INFO: Terminating ReplicationController affinity-nodeport pods took: 100.959996ms +[AfterEach] [sig-network] Services + test/e2e/framework/framework.go:187 +Mar 7 03:39:27.204: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "services-6397" for this suite. 
03/07/23 03:39:27.208 +[AfterEach] [sig-network] Services + test/e2e/network/service.go:762 +{"msg":"PASSED [sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","completed":213,"skipped":3603,"failed":0} +------------------------------ +• [SLOW TEST] [9.343 seconds] +[sig-network] Services +test/e2e/network/common/framework.go:23 + should have session affinity work for NodePort service [LinuxOnly] [Conformance] + test/e2e/network/service.go:2204 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-network] Services + test/e2e/framework/framework.go:186 + STEP: Creating a kubernetes client 03/07/23 03:39:17.872 + Mar 7 03:39:17.872: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 + STEP: Building a namespace api object, basename services 03/07/23 03:39:17.873 + STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 03:39:17.885 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 03:39:17.887 + [BeforeEach] [sig-network] Services + test/e2e/network/service.go:758 + [It] should have session affinity work for NodePort service [LinuxOnly] [Conformance] + test/e2e/network/service.go:2204 + STEP: creating service in namespace services-6397 03/07/23 03:39:17.889 + STEP: creating service affinity-nodeport in namespace services-6397 03/07/23 03:39:17.889 + STEP: creating replication controller affinity-nodeport in namespace services-6397 03/07/23 03:39:17.905 + I0307 03:39:17.912507 22 runners.go:193] Created replication controller with name: affinity-nodeport, namespace: services-6397, replica count: 3 + I0307 03:39:20.963743 22 runners.go:193] affinity-nodeport Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady + Mar 7 03:39:20.971: INFO: Creating new exec pod + Mar 7 03:39:20.994: INFO: Waiting up to 5m0s for pod "execpod-affinityx69pp" in namespace "services-6397" to be "running" + Mar 7 03:39:20.998: INFO: Pod "execpod-affinityx69pp": Phase="Pending", Reason="", readiness=false. Elapsed: 3.09857ms + Mar 7 03:39:23.001: INFO: Pod "execpod-affinityx69pp": Phase="Running", Reason="", readiness=true. 
Elapsed: 2.006306476s + Mar 7 03:39:23.001: INFO: Pod "execpod-affinityx69pp" satisfied condition "running" + Mar 7 03:39:24.005: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1093879902 --namespace=services-6397 exec execpod-affinityx69pp -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport 80' + Mar 7 03:39:24.206: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 affinity-nodeport 80\nConnection to affinity-nodeport 80 port [tcp/http] succeeded!\n" + Mar 7 03:39:24.206: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request" + Mar 7 03:39:24.206: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1093879902 --namespace=services-6397 exec execpod-affinityx69pp -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.110.207.153 80' + Mar 7 03:39:24.390: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 10.110.207.153 80\nConnection to 10.110.207.153 80 port [tcp/http] succeeded!\n" + Mar 7 03:39:24.390: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request" + Mar 7 03:39:24.390: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1093879902 --namespace=services-6397 exec execpod-affinityx69pp -- /bin/sh -x -c echo hostName | nc -v -t -w 2 192.168.1.102 32671' + Mar 7 03:39:24.579: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 192.168.1.102 32671\nConnection to 192.168.1.102 32671 port [tcp/*] succeeded!\n" + Mar 7 03:39:24.579: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request" + Mar 7 03:39:24.579: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1093879902 --namespace=services-6397 exec execpod-affinityx69pp -- /bin/sh -x -c echo hostName | nc -v -t -w 2 192.168.1.100 32671' + Mar 7 03:39:24.759: INFO: stderr: "+ + ncecho -v hostName -t\n -w 2 192.168.1.100 32671\nConnection to 192.168.1.100 32671 port [tcp/*] succeeded!\n" + Mar 7 03:39:24.759: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request" + Mar 7 03:39:24.759: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1093879902 --namespace=services-6397 exec execpod-affinityx69pp -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://192.168.1.100:32671/ ; done' + Mar 7 03:39:25.002: INFO: stderr: "+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://192.168.1.100:32671/\n+ echo\n+ curl -q -s --connect-timeout 2 http://192.168.1.100:32671/\n+ echo\n+ curl -q -s --connect-timeout 2 http://192.168.1.100:32671/\n+ echo\n+ curl -q -s --connect-timeout 2 http://192.168.1.100:32671/\n+ echo\n+ curl -q -s --connect-timeout 2 http://192.168.1.100:32671/\n+ echo\n+ curl -q -s --connect-timeout 2 http://192.168.1.100:32671/\n+ echo\n+ curl -q -s --connect-timeout 2 http://192.168.1.100:32671/\n+ echo\n+ curl -q -s --connect-timeout 2 http://192.168.1.100:32671/\n+ echo\n+ curl -q -s --connect-timeout 2 http://192.168.1.100:32671/\n+ echo\n+ curl -q -s --connect-timeout 2 http://192.168.1.100:32671/\n+ echo\n+ curl -q -s --connect-timeout 2 http://192.168.1.100:32671/\n+ echo\n+ curl -q -s --connect-timeout 2 http://192.168.1.100:32671/\n+ echo\n+ curl -q -s --connect-timeout 2 http://192.168.1.100:32671/\n+ echo\n+ curl -q -s --connect-timeout 2 http://192.168.1.100:32671/\n+ echo\n+ curl -q -s --connect-timeout 2 
http://192.168.1.100:32671/\n+ echo\n+ curl -q -s --connect-timeout 2 http://192.168.1.100:32671/\n" + Mar 7 03:39:25.002: INFO: stdout: "\naffinity-nodeport-4tqzx\naffinity-nodeport-4tqzx\naffinity-nodeport-4tqzx\naffinity-nodeport-4tqzx\naffinity-nodeport-4tqzx\naffinity-nodeport-4tqzx\naffinity-nodeport-4tqzx\naffinity-nodeport-4tqzx\naffinity-nodeport-4tqzx\naffinity-nodeport-4tqzx\naffinity-nodeport-4tqzx\naffinity-nodeport-4tqzx\naffinity-nodeport-4tqzx\naffinity-nodeport-4tqzx\naffinity-nodeport-4tqzx\naffinity-nodeport-4tqzx" + Mar 7 03:39:25.002: INFO: Received response from host: affinity-nodeport-4tqzx + Mar 7 03:39:25.002: INFO: Received response from host: affinity-nodeport-4tqzx + Mar 7 03:39:25.002: INFO: Received response from host: affinity-nodeport-4tqzx + Mar 7 03:39:25.002: INFO: Received response from host: affinity-nodeport-4tqzx + Mar 7 03:39:25.002: INFO: Received response from host: affinity-nodeport-4tqzx + Mar 7 03:39:25.002: INFO: Received response from host: affinity-nodeport-4tqzx + Mar 7 03:39:25.002: INFO: Received response from host: affinity-nodeport-4tqzx + Mar 7 03:39:25.002: INFO: Received response from host: affinity-nodeport-4tqzx + Mar 7 03:39:25.002: INFO: Received response from host: affinity-nodeport-4tqzx + Mar 7 03:39:25.002: INFO: Received response from host: affinity-nodeport-4tqzx + Mar 7 03:39:25.002: INFO: Received response from host: affinity-nodeport-4tqzx + Mar 7 03:39:25.002: INFO: Received response from host: affinity-nodeport-4tqzx + Mar 7 03:39:25.002: INFO: Received response from host: affinity-nodeport-4tqzx + Mar 7 03:39:25.002: INFO: Received response from host: affinity-nodeport-4tqzx + Mar 7 03:39:25.002: INFO: Received response from host: affinity-nodeport-4tqzx + Mar 7 03:39:25.002: INFO: Received response from host: affinity-nodeport-4tqzx + Mar 7 03:39:25.002: INFO: Cleaning up the exec pod + STEP: deleting ReplicationController affinity-nodeport in namespace services-6397, will wait for the garbage collector to delete the pods 03/07/23 03:39:25.013 + Mar 7 03:39:25.074: INFO: Deleting ReplicationController affinity-nodeport took: 5.115004ms + Mar 7 03:39:25.175: INFO: Terminating ReplicationController affinity-nodeport pods took: 100.959996ms + [AfterEach] [sig-network] Services + test/e2e/framework/framework.go:187 + Mar 7 03:39:27.204: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + STEP: Destroying namespace "services-6397" for this suite. 
03/07/23 03:39:27.208 + [AfterEach] [sig-network] Services + test/e2e/network/service.go:762 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSS +------------------------------ +[sig-storage] Projected configMap + optional updates should be reflected in volume [NodeConformance] [Conformance] + test/e2e/common/storage/projected_configmap.go:173 +[BeforeEach] [sig-storage] Projected configMap + test/e2e/framework/framework.go:186 +STEP: Creating a kubernetes client 03/07/23 03:39:27.215 +Mar 7 03:39:27.215: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 +STEP: Building a namespace api object, basename projected 03/07/23 03:39:27.217 +STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 03:39:27.234 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 03:39:27.237 +[It] optional updates should be reflected in volume [NodeConformance] [Conformance] + test/e2e/common/storage/projected_configmap.go:173 +STEP: Creating configMap with name cm-test-opt-del-70423d44-48f3-41de-af16-fe249a626b8c 03/07/23 03:39:27.244 +STEP: Creating configMap with name cm-test-opt-upd-98677736-14c3-459c-8f73-e263c1c66f18 03/07/23 03:39:27.251 +STEP: Creating the pod 03/07/23 03:39:27.258 +Mar 7 03:39:27.265: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-a5f80b8b-8ddf-4523-ac96-706436844195" in namespace "projected-874" to be "running and ready" +Mar 7 03:39:27.268: INFO: Pod "pod-projected-configmaps-a5f80b8b-8ddf-4523-ac96-706436844195": Phase="Pending", Reason="", readiness=false. Elapsed: 3.200771ms +Mar 7 03:39:27.268: INFO: The phase of Pod pod-projected-configmaps-a5f80b8b-8ddf-4523-ac96-706436844195 is Pending, waiting for it to be Running (with Ready = true) +Mar 7 03:39:29.271: INFO: Pod "pod-projected-configmaps-a5f80b8b-8ddf-4523-ac96-706436844195": Phase="Running", Reason="", readiness=true. Elapsed: 2.006569397s +Mar 7 03:39:29.271: INFO: The phase of Pod pod-projected-configmaps-a5f80b8b-8ddf-4523-ac96-706436844195 is Running (Ready = true) +Mar 7 03:39:29.271: INFO: Pod "pod-projected-configmaps-a5f80b8b-8ddf-4523-ac96-706436844195" satisfied condition "running and ready" +STEP: Deleting configmap cm-test-opt-del-70423d44-48f3-41de-af16-fe249a626b8c 03/07/23 03:39:29.288 +STEP: Updating configmap cm-test-opt-upd-98677736-14c3-459c-8f73-e263c1c66f18 03/07/23 03:39:29.319 +STEP: Creating configMap with name cm-test-opt-create-f331aafa-fecd-498d-aa44-d432c5001dc8 03/07/23 03:39:29.322 +STEP: waiting to observe update in volume 03/07/23 03:39:29.336 +[AfterEach] [sig-storage] Projected configMap + test/e2e/framework/framework.go:187 +Mar 7 03:39:31.357: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "projected-874" for this suite. 
03/07/23 03:39:31.361 +{"msg":"PASSED [sig-storage] Projected configMap optional updates should be reflected in volume [NodeConformance] [Conformance]","completed":214,"skipped":3613,"failed":0} +------------------------------ +• [4.172 seconds] +[sig-storage] Projected configMap +test/e2e/common/storage/framework.go:23 + optional updates should be reflected in volume [NodeConformance] [Conformance] + test/e2e/common/storage/projected_configmap.go:173 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-storage] Projected configMap + test/e2e/framework/framework.go:186 + STEP: Creating a kubernetes client 03/07/23 03:39:27.215 + Mar 7 03:39:27.215: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 + STEP: Building a namespace api object, basename projected 03/07/23 03:39:27.217 + STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 03:39:27.234 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 03:39:27.237 + [It] optional updates should be reflected in volume [NodeConformance] [Conformance] + test/e2e/common/storage/projected_configmap.go:173 + STEP: Creating configMap with name cm-test-opt-del-70423d44-48f3-41de-af16-fe249a626b8c 03/07/23 03:39:27.244 + STEP: Creating configMap with name cm-test-opt-upd-98677736-14c3-459c-8f73-e263c1c66f18 03/07/23 03:39:27.251 + STEP: Creating the pod 03/07/23 03:39:27.258 + Mar 7 03:39:27.265: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-a5f80b8b-8ddf-4523-ac96-706436844195" in namespace "projected-874" to be "running and ready" + Mar 7 03:39:27.268: INFO: Pod "pod-projected-configmaps-a5f80b8b-8ddf-4523-ac96-706436844195": Phase="Pending", Reason="", readiness=false. Elapsed: 3.200771ms + Mar 7 03:39:27.268: INFO: The phase of Pod pod-projected-configmaps-a5f80b8b-8ddf-4523-ac96-706436844195 is Pending, waiting for it to be Running (with Ready = true) + Mar 7 03:39:29.271: INFO: Pod "pod-projected-configmaps-a5f80b8b-8ddf-4523-ac96-706436844195": Phase="Running", Reason="", readiness=true. Elapsed: 2.006569397s + Mar 7 03:39:29.271: INFO: The phase of Pod pod-projected-configmaps-a5f80b8b-8ddf-4523-ac96-706436844195 is Running (Ready = true) + Mar 7 03:39:29.271: INFO: Pod "pod-projected-configmaps-a5f80b8b-8ddf-4523-ac96-706436844195" satisfied condition "running and ready" + STEP: Deleting configmap cm-test-opt-del-70423d44-48f3-41de-af16-fe249a626b8c 03/07/23 03:39:29.288 + STEP: Updating configmap cm-test-opt-upd-98677736-14c3-459c-8f73-e263c1c66f18 03/07/23 03:39:29.319 + STEP: Creating configMap with name cm-test-opt-create-f331aafa-fecd-498d-aa44-d432c5001dc8 03/07/23 03:39:29.322 + STEP: waiting to observe update in volume 03/07/23 03:39:29.336 + [AfterEach] [sig-storage] Projected configMap + test/e2e/framework/framework.go:187 + Mar 7 03:39:31.357: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + STEP: Destroying namespace "projected-874" for this suite. 
03/07/23 03:39:31.361 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-apps] ReplicaSet + should adopt matching pods on creation and release no longer matching pods [Conformance] + test/e2e/apps/replica_set.go:131 +[BeforeEach] [sig-apps] ReplicaSet + test/e2e/framework/framework.go:186 +STEP: Creating a kubernetes client 03/07/23 03:39:31.389 +Mar 7 03:39:31.389: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 +STEP: Building a namespace api object, basename replicaset 03/07/23 03:39:31.39 +STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 03:39:31.403 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 03:39:31.406 +[It] should adopt matching pods on creation and release no longer matching pods [Conformance] + test/e2e/apps/replica_set.go:131 +STEP: Given a Pod with a 'name' label pod-adoption-release is created 03/07/23 03:39:31.408 +Mar 7 03:39:31.414: INFO: Waiting up to 5m0s for pod "pod-adoption-release" in namespace "replicaset-3621" to be "running and ready" +Mar 7 03:39:31.417: INFO: Pod "pod-adoption-release": Phase="Pending", Reason="", readiness=false. Elapsed: 2.824424ms +Mar 7 03:39:31.417: INFO: The phase of Pod pod-adoption-release is Pending, waiting for it to be Running (with Ready = true) +Mar 7 03:39:33.420: INFO: Pod "pod-adoption-release": Phase="Running", Reason="", readiness=true. Elapsed: 2.006198192s +Mar 7 03:39:33.420: INFO: The phase of Pod pod-adoption-release is Running (Ready = true) +Mar 7 03:39:33.420: INFO: Pod "pod-adoption-release" satisfied condition "running and ready" +STEP: When a replicaset with a matching selector is created 03/07/23 03:39:33.422 +STEP: Then the orphan pod is adopted 03/07/23 03:39:33.426 +STEP: When the matched label of one of its pods change 03/07/23 03:39:34.431 +Mar 7 03:39:34.434: INFO: Pod name pod-adoption-release: Found 1 pods out of 1 +STEP: Then the pod is released 03/07/23 03:39:34.442 +[AfterEach] [sig-apps] ReplicaSet + test/e2e/framework/framework.go:187 +Mar 7 03:39:35.447: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "replicaset-3621" for this suite. 
03/07/23 03:39:35.451 +{"msg":"PASSED [sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance]","completed":215,"skipped":3634,"failed":0} +------------------------------ +• [4.066 seconds] +[sig-apps] ReplicaSet +test/e2e/apps/framework.go:23 + should adopt matching pods on creation and release no longer matching pods [Conformance] + test/e2e/apps/replica_set.go:131 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-apps] ReplicaSet + test/e2e/framework/framework.go:186 + STEP: Creating a kubernetes client 03/07/23 03:39:31.389 + Mar 7 03:39:31.389: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 + STEP: Building a namespace api object, basename replicaset 03/07/23 03:39:31.39 + STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 03:39:31.403 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 03:39:31.406 + [It] should adopt matching pods on creation and release no longer matching pods [Conformance] + test/e2e/apps/replica_set.go:131 + STEP: Given a Pod with a 'name' label pod-adoption-release is created 03/07/23 03:39:31.408 + Mar 7 03:39:31.414: INFO: Waiting up to 5m0s for pod "pod-adoption-release" in namespace "replicaset-3621" to be "running and ready" + Mar 7 03:39:31.417: INFO: Pod "pod-adoption-release": Phase="Pending", Reason="", readiness=false. Elapsed: 2.824424ms + Mar 7 03:39:31.417: INFO: The phase of Pod pod-adoption-release is Pending, waiting for it to be Running (with Ready = true) + Mar 7 03:39:33.420: INFO: Pod "pod-adoption-release": Phase="Running", Reason="", readiness=true. Elapsed: 2.006198192s + Mar 7 03:39:33.420: INFO: The phase of Pod pod-adoption-release is Running (Ready = true) + Mar 7 03:39:33.420: INFO: Pod "pod-adoption-release" satisfied condition "running and ready" + STEP: When a replicaset with a matching selector is created 03/07/23 03:39:33.422 + STEP: Then the orphan pod is adopted 03/07/23 03:39:33.426 + STEP: When the matched label of one of its pods change 03/07/23 03:39:34.431 + Mar 7 03:39:34.434: INFO: Pod name pod-adoption-release: Found 1 pods out of 1 + STEP: Then the pod is released 03/07/23 03:39:34.442 + [AfterEach] [sig-apps] ReplicaSet + test/e2e/framework/framework.go:187 + Mar 7 03:39:35.447: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + STEP: Destroying namespace "replicaset-3621" for this suite. 
03/07/23 03:39:35.451 + << End Captured GinkgoWriter Output +------------------------------ +[sig-node] Container Runtime blackbox test on terminated container + should report termination message as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] + test/e2e/common/node/runtime.go:231 +[BeforeEach] [sig-node] Container Runtime + test/e2e/framework/framework.go:186 +STEP: Creating a kubernetes client 03/07/23 03:39:35.455 +Mar 7 03:39:35.455: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 +STEP: Building a namespace api object, basename container-runtime 03/07/23 03:39:35.457 +STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 03:39:35.469 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 03:39:35.471 +[It] should report termination message as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] + test/e2e/common/node/runtime.go:231 +STEP: create the container 03/07/23 03:39:35.474 +STEP: wait for the container to reach Succeeded 03/07/23 03:39:35.48 +STEP: get the container status 03/07/23 03:39:39.496 +STEP: the container should be terminated 03/07/23 03:39:39.506 +STEP: the termination message should be set 03/07/23 03:39:39.506 +Mar 7 03:39:39.506: INFO: Expected: &{} to match Container's Termination Message: -- +STEP: delete the container 03/07/23 03:39:39.506 +[AfterEach] [sig-node] Container Runtime + test/e2e/framework/framework.go:187 +Mar 7 03:39:39.517: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "container-runtime-2883" for this suite. 03/07/23 03:39:39.519 +{"msg":"PASSED [sig-node] Container Runtime blackbox test on terminated container should report termination message as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","completed":216,"skipped":3634,"failed":0} +------------------------------ +• [4.069 seconds] +[sig-node] Container Runtime +test/e2e/common/node/framework.go:23 + blackbox test + test/e2e/common/node/runtime.go:43 + on terminated container + test/e2e/common/node/runtime.go:136 + should report termination message as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] + test/e2e/common/node/runtime.go:231 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-node] Container Runtime + test/e2e/framework/framework.go:186 + STEP: Creating a kubernetes client 03/07/23 03:39:35.455 + Mar 7 03:39:35.455: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 + STEP: Building a namespace api object, basename container-runtime 03/07/23 03:39:35.457 + STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 03:39:35.469 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 03:39:35.471 + [It] should report termination message as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] + test/e2e/common/node/runtime.go:231 + STEP: create the container 03/07/23 03:39:35.474 + STEP: wait for the container to reach Succeeded 03/07/23 03:39:35.48 + STEP: get the container status 03/07/23 03:39:39.496 + STEP: the container should be terminated 03/07/23 03:39:39.506 + STEP: the termination message should be set 03/07/23 03:39:39.506 + Mar 7 03:39:39.506: INFO: Expected: &{} to match Container's Termination Message: -- 
+ STEP: delete the container 03/07/23 03:39:39.506 + [AfterEach] [sig-node] Container Runtime + test/e2e/framework/framework.go:187 + Mar 7 03:39:39.517: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + STEP: Destroying namespace "container-runtime-2883" for this suite. 03/07/23 03:39:39.519 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + works for multiple CRDs of same group but different versions [Conformance] + test/e2e/apimachinery/crd_publish_openapi.go:308 +[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + test/e2e/framework/framework.go:186 +STEP: Creating a kubernetes client 03/07/23 03:39:39.525 +Mar 7 03:39:39.525: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 +STEP: Building a namespace api object, basename crd-publish-openapi 03/07/23 03:39:39.526 +STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 03:39:39.539 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 03:39:39.541 +[It] works for multiple CRDs of same group but different versions [Conformance] + test/e2e/apimachinery/crd_publish_openapi.go:308 +STEP: CRs in the same group but different versions (one multiversion CRD) show up in OpenAPI documentation 03/07/23 03:39:39.543 +Mar 7 03:39:39.543: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 +STEP: CRs in the same group but different versions (two CRDs) show up in OpenAPI documentation 03/07/23 03:39:56.211 +Mar 7 03:39:56.211: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 +Mar 7 03:40:01.203: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 +[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + test/e2e/framework/framework.go:187 +Mar 7 03:40:19.605: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "crd-publish-openapi-2386" for this suite. 
03/07/23 03:40:19.612 +{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group but different versions [Conformance]","completed":217,"skipped":3663,"failed":0} +------------------------------ +• [SLOW TEST] [40.105 seconds] +[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] +test/e2e/apimachinery/framework.go:23 + works for multiple CRDs of same group but different versions [Conformance] + test/e2e/apimachinery/crd_publish_openapi.go:308 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + test/e2e/framework/framework.go:186 + STEP: Creating a kubernetes client 03/07/23 03:39:39.525 + Mar 7 03:39:39.525: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 + STEP: Building a namespace api object, basename crd-publish-openapi 03/07/23 03:39:39.526 + STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 03:39:39.539 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 03:39:39.541 + [It] works for multiple CRDs of same group but different versions [Conformance] + test/e2e/apimachinery/crd_publish_openapi.go:308 + STEP: CRs in the same group but different versions (one multiversion CRD) show up in OpenAPI documentation 03/07/23 03:39:39.543 + Mar 7 03:39:39.543: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 + STEP: CRs in the same group but different versions (two CRDs) show up in OpenAPI documentation 03/07/23 03:39:56.211 + Mar 7 03:39:56.211: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 + Mar 7 03:40:01.203: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 + [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + test/e2e/framework/framework.go:187 + Mar 7 03:40:19.605: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + STEP: Destroying namespace "crd-publish-openapi-2386" for this suite. 03/07/23 03:40:19.612 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-auth] ServiceAccounts + should mount projected service account token [Conformance] + test/e2e/auth/service_accounts.go:272 +[BeforeEach] [sig-auth] ServiceAccounts + test/e2e/framework/framework.go:186 +STEP: Creating a kubernetes client 03/07/23 03:40:19.632 +Mar 7 03:40:19.632: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 +STEP: Building a namespace api object, basename svcaccounts 03/07/23 03:40:19.633 +STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 03:40:19.646 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 03:40:19.65 +[It] should mount projected service account token [Conformance] + test/e2e/auth/service_accounts.go:272 +STEP: Creating a pod to test service account token: 03/07/23 03:40:19.652 +Mar 7 03:40:19.660: INFO: Waiting up to 5m0s for pod "test-pod-fab85b7b-6efb-450f-83cf-bfaf7cc4f613" in namespace "svcaccounts-6038" to be "Succeeded or Failed" +Mar 7 03:40:19.662: INFO: Pod "test-pod-fab85b7b-6efb-450f-83cf-bfaf7cc4f613": Phase="Pending", Reason="", readiness=false. Elapsed: 1.963935ms +Mar 7 03:40:21.665: INFO: Pod "test-pod-fab85b7b-6efb-450f-83cf-bfaf7cc4f613": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005554808s +Mar 7 03:40:23.666: INFO: Pod "test-pod-fab85b7b-6efb-450f-83cf-bfaf7cc4f613": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.005933015s +STEP: Saw pod success 03/07/23 03:40:23.666 +Mar 7 03:40:23.666: INFO: Pod "test-pod-fab85b7b-6efb-450f-83cf-bfaf7cc4f613" satisfied condition "Succeeded or Failed" +Mar 7 03:40:23.668: INFO: Trying to get logs from node node-2 pod test-pod-fab85b7b-6efb-450f-83cf-bfaf7cc4f613 container agnhost-container: +STEP: delete the pod 03/07/23 03:40:23.674 +Mar 7 03:40:23.690: INFO: Waiting for pod test-pod-fab85b7b-6efb-450f-83cf-bfaf7cc4f613 to disappear +Mar 7 03:40:23.692: INFO: Pod test-pod-fab85b7b-6efb-450f-83cf-bfaf7cc4f613 no longer exists +[AfterEach] [sig-auth] ServiceAccounts + test/e2e/framework/framework.go:187 +Mar 7 03:40:23.692: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "svcaccounts-6038" for this suite. 03/07/23 03:40:23.696 +{"msg":"PASSED [sig-auth] ServiceAccounts should mount projected service account token [Conformance]","completed":218,"skipped":3694,"failed":0} +------------------------------ +• [4.070 seconds] +[sig-auth] ServiceAccounts +test/e2e/auth/framework.go:23 + should mount projected service account token [Conformance] + test/e2e/auth/service_accounts.go:272 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-auth] ServiceAccounts + test/e2e/framework/framework.go:186 + STEP: Creating a kubernetes client 03/07/23 03:40:19.632 + Mar 7 03:40:19.632: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 + STEP: Building a namespace api object, basename svcaccounts 03/07/23 03:40:19.633 + STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 03:40:19.646 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 03:40:19.65 + [It] should mount projected service account token [Conformance] + test/e2e/auth/service_accounts.go:272 + STEP: Creating a pod to test service account token: 03/07/23 03:40:19.652 + Mar 7 03:40:19.660: INFO: Waiting up to 5m0s for pod "test-pod-fab85b7b-6efb-450f-83cf-bfaf7cc4f613" in namespace "svcaccounts-6038" to be "Succeeded or Failed" + Mar 7 03:40:19.662: INFO: Pod "test-pod-fab85b7b-6efb-450f-83cf-bfaf7cc4f613": Phase="Pending", Reason="", readiness=false. Elapsed: 1.963935ms + Mar 7 03:40:21.665: INFO: Pod "test-pod-fab85b7b-6efb-450f-83cf-bfaf7cc4f613": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005554808s + Mar 7 03:40:23.666: INFO: Pod "test-pod-fab85b7b-6efb-450f-83cf-bfaf7cc4f613": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.005933015s + STEP: Saw pod success 03/07/23 03:40:23.666 + Mar 7 03:40:23.666: INFO: Pod "test-pod-fab85b7b-6efb-450f-83cf-bfaf7cc4f613" satisfied condition "Succeeded or Failed" + Mar 7 03:40:23.668: INFO: Trying to get logs from node node-2 pod test-pod-fab85b7b-6efb-450f-83cf-bfaf7cc4f613 container agnhost-container: + STEP: delete the pod 03/07/23 03:40:23.674 + Mar 7 03:40:23.690: INFO: Waiting for pod test-pod-fab85b7b-6efb-450f-83cf-bfaf7cc4f613 to disappear + Mar 7 03:40:23.692: INFO: Pod test-pod-fab85b7b-6efb-450f-83cf-bfaf7cc4f613 no longer exists + [AfterEach] [sig-auth] ServiceAccounts + test/e2e/framework/framework.go:187 + Mar 7 03:40:23.692: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + STEP: Destroying namespace "svcaccounts-6038" for this suite. 03/07/23 03:40:23.696 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSSSSSSSSS +------------------------------ +[sig-api-machinery] ResourceQuota + should create a ResourceQuota and capture the life of a replica set. 
[Conformance] + test/e2e/apimachinery/resource_quota.go:438 +[BeforeEach] [sig-api-machinery] ResourceQuota + test/e2e/framework/framework.go:186 +STEP: Creating a kubernetes client 03/07/23 03:40:23.703 +Mar 7 03:40:23.703: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 +STEP: Building a namespace api object, basename resourcequota 03/07/23 03:40:23.704 +STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 03:40:23.72 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 03:40:23.723 +[It] should create a ResourceQuota and capture the life of a replica set. [Conformance] + test/e2e/apimachinery/resource_quota.go:438 +STEP: Counting existing ResourceQuota 03/07/23 03:40:23.724 +STEP: Creating a ResourceQuota 03/07/23 03:40:28.731 +STEP: Ensuring resource quota status is calculated 03/07/23 03:40:28.756 +STEP: Creating a ReplicaSet 03/07/23 03:40:30.759 +STEP: Ensuring resource quota status captures replicaset creation 03/07/23 03:40:30.768 +STEP: Deleting a ReplicaSet 03/07/23 03:40:32.771 +STEP: Ensuring resource quota status released usage 03/07/23 03:40:32.795 +[AfterEach] [sig-api-machinery] ResourceQuota + test/e2e/framework/framework.go:187 +Mar 7 03:40:34.798: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "resourcequota-7708" for this suite. 03/07/23 03:40:34.802 +{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replica set. [Conformance]","completed":219,"skipped":3712,"failed":0} +------------------------------ +• [SLOW TEST] [11.144 seconds] +[sig-api-machinery] ResourceQuota +test/e2e/apimachinery/framework.go:23 + should create a ResourceQuota and capture the life of a replica set. [Conformance] + test/e2e/apimachinery/resource_quota.go:438 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-api-machinery] ResourceQuota + test/e2e/framework/framework.go:186 + STEP: Creating a kubernetes client 03/07/23 03:40:23.703 + Mar 7 03:40:23.703: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 + STEP: Building a namespace api object, basename resourcequota 03/07/23 03:40:23.704 + STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 03:40:23.72 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 03:40:23.723 + [It] should create a ResourceQuota and capture the life of a replica set. [Conformance] + test/e2e/apimachinery/resource_quota.go:438 + STEP: Counting existing ResourceQuota 03/07/23 03:40:23.724 + STEP: Creating a ResourceQuota 03/07/23 03:40:28.731 + STEP: Ensuring resource quota status is calculated 03/07/23 03:40:28.756 + STEP: Creating a ReplicaSet 03/07/23 03:40:30.759 + STEP: Ensuring resource quota status captures replicaset creation 03/07/23 03:40:30.768 + STEP: Deleting a ReplicaSet 03/07/23 03:40:32.771 + STEP: Ensuring resource quota status released usage 03/07/23 03:40:32.795 + [AfterEach] [sig-api-machinery] ResourceQuota + test/e2e/framework/framework.go:187 + Mar 7 03:40:34.798: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + STEP: Destroying namespace "resourcequota-7708" for this suite. 
03/07/23 03:40:34.802 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-storage] Projected secret + should be consumable from pods in volume with mappings [NodeConformance] [Conformance] + test/e2e/common/storage/projected_secret.go:77 +[BeforeEach] [sig-storage] Projected secret + test/e2e/framework/framework.go:186 +STEP: Creating a kubernetes client 03/07/23 03:40:34.848 +Mar 7 03:40:34.848: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 +STEP: Building a namespace api object, basename projected 03/07/23 03:40:34.849 +STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 03:40:34.864 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 03:40:34.866 +[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] + test/e2e/common/storage/projected_secret.go:77 +STEP: Creating projection with secret that has name projected-secret-test-map-1a1bd162-59cd-46d8-8bca-b8aa4e371e80 03/07/23 03:40:34.867 +STEP: Creating a pod to test consume secrets 03/07/23 03:40:34.872 +Mar 7 03:40:34.878: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-a07cc5c5-b222-4fac-b0e0-8748601da2c9" in namespace "projected-9248" to be "Succeeded or Failed" +Mar 7 03:40:34.882: INFO: Pod "pod-projected-secrets-a07cc5c5-b222-4fac-b0e0-8748601da2c9": Phase="Pending", Reason="", readiness=false. Elapsed: 4.066073ms +Mar 7 03:40:36.886: INFO: Pod "pod-projected-secrets-a07cc5c5-b222-4fac-b0e0-8748601da2c9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00773527s +Mar 7 03:40:38.890: INFO: Pod "pod-projected-secrets-a07cc5c5-b222-4fac-b0e0-8748601da2c9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011448005s +STEP: Saw pod success 03/07/23 03:40:38.89 +Mar 7 03:40:38.890: INFO: Pod "pod-projected-secrets-a07cc5c5-b222-4fac-b0e0-8748601da2c9" satisfied condition "Succeeded or Failed" +Mar 7 03:40:38.893: INFO: Trying to get logs from node node-2 pod pod-projected-secrets-a07cc5c5-b222-4fac-b0e0-8748601da2c9 container projected-secret-volume-test: +STEP: delete the pod 03/07/23 03:40:38.897 +Mar 7 03:40:38.926: INFO: Waiting for pod pod-projected-secrets-a07cc5c5-b222-4fac-b0e0-8748601da2c9 to disappear +Mar 7 03:40:38.929: INFO: Pod pod-projected-secrets-a07cc5c5-b222-4fac-b0e0-8748601da2c9 no longer exists +[AfterEach] [sig-storage] Projected secret + test/e2e/framework/framework.go:187 +Mar 7 03:40:38.929: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "projected-9248" for this suite. 
03/07/23 03:40:38.931 +{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","completed":220,"skipped":3732,"failed":0} +------------------------------ +• [4.088 seconds] +[sig-storage] Projected secret +test/e2e/common/storage/framework.go:23 + should be consumable from pods in volume with mappings [NodeConformance] [Conformance] + test/e2e/common/storage/projected_secret.go:77 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-storage] Projected secret + test/e2e/framework/framework.go:186 + STEP: Creating a kubernetes client 03/07/23 03:40:34.848 + Mar 7 03:40:34.848: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 + STEP: Building a namespace api object, basename projected 03/07/23 03:40:34.849 + STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 03:40:34.864 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 03:40:34.866 + [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] + test/e2e/common/storage/projected_secret.go:77 + STEP: Creating projection with secret that has name projected-secret-test-map-1a1bd162-59cd-46d8-8bca-b8aa4e371e80 03/07/23 03:40:34.867 + STEP: Creating a pod to test consume secrets 03/07/23 03:40:34.872 + Mar 7 03:40:34.878: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-a07cc5c5-b222-4fac-b0e0-8748601da2c9" in namespace "projected-9248" to be "Succeeded or Failed" + Mar 7 03:40:34.882: INFO: Pod "pod-projected-secrets-a07cc5c5-b222-4fac-b0e0-8748601da2c9": Phase="Pending", Reason="", readiness=false. Elapsed: 4.066073ms + Mar 7 03:40:36.886: INFO: Pod "pod-projected-secrets-a07cc5c5-b222-4fac-b0e0-8748601da2c9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00773527s + Mar 7 03:40:38.890: INFO: Pod "pod-projected-secrets-a07cc5c5-b222-4fac-b0e0-8748601da2c9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011448005s + STEP: Saw pod success 03/07/23 03:40:38.89 + Mar 7 03:40:38.890: INFO: Pod "pod-projected-secrets-a07cc5c5-b222-4fac-b0e0-8748601da2c9" satisfied condition "Succeeded or Failed" + Mar 7 03:40:38.893: INFO: Trying to get logs from node node-2 pod pod-projected-secrets-a07cc5c5-b222-4fac-b0e0-8748601da2c9 container projected-secret-volume-test: + STEP: delete the pod 03/07/23 03:40:38.897 + Mar 7 03:40:38.926: INFO: Waiting for pod pod-projected-secrets-a07cc5c5-b222-4fac-b0e0-8748601da2c9 to disappear + Mar 7 03:40:38.929: INFO: Pod pod-projected-secrets-a07cc5c5-b222-4fac-b0e0-8748601da2c9 no longer exists + [AfterEach] [sig-storage] Projected secret + test/e2e/framework/framework.go:187 + Mar 7 03:40:38.929: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + STEP: Destroying namespace "projected-9248" for this suite. 
03/07/23 03:40:38.931 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSSSSSS +------------------------------ +[sig-apps] ReplicaSet + should validate Replicaset Status endpoints [Conformance] + test/e2e/apps/replica_set.go:176 +[BeforeEach] [sig-apps] ReplicaSet + test/e2e/framework/framework.go:186 +STEP: Creating a kubernetes client 03/07/23 03:40:38.938 +Mar 7 03:40:38.938: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 +STEP: Building a namespace api object, basename replicaset 03/07/23 03:40:38.939 +STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 03:40:38.952 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 03:40:38.954 +[It] should validate Replicaset Status endpoints [Conformance] + test/e2e/apps/replica_set.go:176 +STEP: Create a Replicaset 03/07/23 03:40:38.959 +STEP: Verify that the required pods have come up. 03/07/23 03:40:38.963 +Mar 7 03:40:38.966: INFO: Pod name sample-pod: Found 0 pods out of 1 +Mar 7 03:40:43.969: INFO: Pod name sample-pod: Found 1 pods out of 1 +STEP: ensuring each pod is running 03/07/23 03:40:43.969 +STEP: Getting /status 03/07/23 03:40:43.969 +Mar 7 03:40:43.972: INFO: Replicaset test-rs has Conditions: [] +STEP: updating the Replicaset Status 03/07/23 03:40:43.972 +Mar 7 03:40:43.979: INFO: updatedStatus.Conditions: []v1.ReplicaSetCondition{v1.ReplicaSetCondition{Type:"StatusUpdate", Status:"True", LastTransitionTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Reason:"E2E", Message:"Set from e2e test"}} +STEP: watching for the ReplicaSet status to be updated 03/07/23 03:40:43.979 +Mar 7 03:40:43.981: INFO: Observed &ReplicaSet event: ADDED +Mar 7 03:40:43.981: INFO: Observed &ReplicaSet event: MODIFIED +Mar 7 03:40:43.981: INFO: Observed &ReplicaSet event: MODIFIED +Mar 7 03:40:43.981: INFO: Observed &ReplicaSet event: MODIFIED +Mar 7 03:40:43.981: INFO: Found replicaset test-rs in namespace replicaset-133 with labels: map[name:sample-pod pod:httpd] annotations: map[] & Conditions: [{StatusUpdate True 0001-01-01 00:00:00 +0000 UTC E2E Set from e2e test}] +Mar 7 03:40:43.981: INFO: Replicaset test-rs has an updated status +STEP: patching the Replicaset Status 03/07/23 03:40:43.981 +Mar 7 03:40:43.981: INFO: Patch payload: {"status":{"conditions":[{"type":"StatusPatched","status":"True"}]}} +Mar 7 03:40:43.986: INFO: Patched status conditions: []v1.ReplicaSetCondition{v1.ReplicaSetCondition{Type:"StatusPatched", Status:"True", LastTransitionTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Reason:"", Message:""}} +STEP: watching for the Replicaset status to be patched 03/07/23 03:40:43.986 +Mar 7 03:40:43.996: INFO: Observed &ReplicaSet event: ADDED +Mar 7 03:40:43.996: INFO: Observed &ReplicaSet event: MODIFIED +Mar 7 03:40:43.996: INFO: Observed &ReplicaSet event: MODIFIED +Mar 7 03:40:43.996: INFO: Observed &ReplicaSet event: MODIFIED +Mar 7 03:40:43.996: INFO: Observed replicaset test-rs in namespace replicaset-133 with annotations: map[] & Conditions: {StatusUpdate True 0001-01-01 00:00:00 +0000 UTC E2E Set from e2e test} +Mar 7 03:40:43.996: INFO: Observed &ReplicaSet event: MODIFIED +Mar 7 03:40:43.996: INFO: Found replicaset test-rs in namespace replicaset-133 with labels: map[name:sample-pod pod:httpd] annotations: map[] & Conditions: {StatusPatched True 0001-01-01 00:00:00 +0000 UTC } +Mar 7 03:40:43.996: INFO: Replicaset test-rs has a patched status +[AfterEach] [sig-apps] ReplicaSet + test/e2e/framework/framework.go:187 +Mar 7 
03:40:43.996: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "replicaset-133" for this suite. 03/07/23 03:40:44 +{"msg":"PASSED [sig-apps] ReplicaSet should validate Replicaset Status endpoints [Conformance]","completed":221,"skipped":3747,"failed":0} +------------------------------ +• [SLOW TEST] [5.069 seconds] +[sig-apps] ReplicaSet +test/e2e/apps/framework.go:23 + should validate Replicaset Status endpoints [Conformance] + test/e2e/apps/replica_set.go:176 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-apps] ReplicaSet + test/e2e/framework/framework.go:186 + STEP: Creating a kubernetes client 03/07/23 03:40:38.938 + Mar 7 03:40:38.938: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 + STEP: Building a namespace api object, basename replicaset 03/07/23 03:40:38.939 + STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 03:40:38.952 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 03:40:38.954 + [It] should validate Replicaset Status endpoints [Conformance] + test/e2e/apps/replica_set.go:176 + STEP: Create a Replicaset 03/07/23 03:40:38.959 + STEP: Verify that the required pods have come up. 03/07/23 03:40:38.963 + Mar 7 03:40:38.966: INFO: Pod name sample-pod: Found 0 pods out of 1 + Mar 7 03:40:43.969: INFO: Pod name sample-pod: Found 1 pods out of 1 + STEP: ensuring each pod is running 03/07/23 03:40:43.969 + STEP: Getting /status 03/07/23 03:40:43.969 + Mar 7 03:40:43.972: INFO: Replicaset test-rs has Conditions: [] + STEP: updating the Replicaset Status 03/07/23 03:40:43.972 + Mar 7 03:40:43.979: INFO: updatedStatus.Conditions: []v1.ReplicaSetCondition{v1.ReplicaSetCondition{Type:"StatusUpdate", Status:"True", LastTransitionTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Reason:"E2E", Message:"Set from e2e test"}} + STEP: watching for the ReplicaSet status to be updated 03/07/23 03:40:43.979 + Mar 7 03:40:43.981: INFO: Observed &ReplicaSet event: ADDED + Mar 7 03:40:43.981: INFO: Observed &ReplicaSet event: MODIFIED + Mar 7 03:40:43.981: INFO: Observed &ReplicaSet event: MODIFIED + Mar 7 03:40:43.981: INFO: Observed &ReplicaSet event: MODIFIED + Mar 7 03:40:43.981: INFO: Found replicaset test-rs in namespace replicaset-133 with labels: map[name:sample-pod pod:httpd] annotations: map[] & Conditions: [{StatusUpdate True 0001-01-01 00:00:00 +0000 UTC E2E Set from e2e test}] + Mar 7 03:40:43.981: INFO: Replicaset test-rs has an updated status + STEP: patching the Replicaset Status 03/07/23 03:40:43.981 + Mar 7 03:40:43.981: INFO: Patch payload: {"status":{"conditions":[{"type":"StatusPatched","status":"True"}]}} + Mar 7 03:40:43.986: INFO: Patched status conditions: []v1.ReplicaSetCondition{v1.ReplicaSetCondition{Type:"StatusPatched", Status:"True", LastTransitionTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Reason:"", Message:""}} + STEP: watching for the Replicaset status to be patched 03/07/23 03:40:43.986 + Mar 7 03:40:43.996: INFO: Observed &ReplicaSet event: ADDED + Mar 7 03:40:43.996: INFO: Observed &ReplicaSet event: MODIFIED + Mar 7 03:40:43.996: INFO: Observed &ReplicaSet event: MODIFIED + Mar 7 03:40:43.996: INFO: Observed &ReplicaSet event: MODIFIED + Mar 7 03:40:43.996: INFO: Observed replicaset test-rs in namespace replicaset-133 with annotations: map[] & Conditions: {StatusUpdate True 0001-01-01 00:00:00 +0000 UTC E2E Set from e2e test} + Mar 7 03:40:43.996: INFO: Observed &ReplicaSet event: MODIFIED + Mar 7 03:40:43.996: INFO: Found 
replicaset test-rs in namespace replicaset-133 with labels: map[name:sample-pod pod:httpd] annotations: map[] & Conditions: {StatusPatched True 0001-01-01 00:00:00 +0000 UTC } + Mar 7 03:40:43.996: INFO: Replicaset test-rs has a patched status + [AfterEach] [sig-apps] ReplicaSet + test/e2e/framework/framework.go:187 + Mar 7 03:40:43.996: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + STEP: Destroying namespace "replicaset-133" for this suite. 03/07/23 03:40:44 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSSSSS +------------------------------ +[sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] + should list, patch and delete a collection of StatefulSets [Conformance] + test/e2e/apps/statefulset.go:906 +[BeforeEach] [sig-apps] StatefulSet + test/e2e/framework/framework.go:186 +STEP: Creating a kubernetes client 03/07/23 03:40:44.008 +Mar 7 03:40:44.008: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 +STEP: Building a namespace api object, basename statefulset 03/07/23 03:40:44.009 +STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 03:40:44.022 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 03:40:44.026 +[BeforeEach] [sig-apps] StatefulSet + test/e2e/apps/statefulset.go:96 +[BeforeEach] Basic StatefulSet functionality [StatefulSetBasic] + test/e2e/apps/statefulset.go:111 +STEP: Creating service test in namespace statefulset-7075 03/07/23 03:40:44.028 +[It] should list, patch and delete a collection of StatefulSets [Conformance] + test/e2e/apps/statefulset.go:906 +Mar 7 03:40:44.042: INFO: Found 0 stateful pods, waiting for 1 +Mar 7 03:40:54.046: INFO: Waiting for pod test-ss-0 to enter Running - Ready=true, currently Running - Ready=true +STEP: patching the StatefulSet 03/07/23 03:40:54.051 +W0307 03:40:54.079738 22 warnings.go:70] unknown field "spec.template.spec.TerminationGracePeriodSeconds" +Mar 7 03:40:54.107: INFO: Found 1 stateful pods, waiting for 2 +Mar 7 03:41:04.110: INFO: Waiting for pod test-ss-0 to enter Running - Ready=true, currently Running - Ready=true +Mar 7 03:41:04.110: INFO: Waiting for pod test-ss-1 to enter Running - Ready=true, currently Running - Ready=true +STEP: Listing all StatefulSets 03/07/23 03:41:04.116 +STEP: Delete all of the StatefulSets 03/07/23 03:41:04.118 +STEP: Verify that StatefulSets have been deleted 03/07/23 03:41:04.143 +[AfterEach] Basic StatefulSet functionality [StatefulSetBasic] + test/e2e/apps/statefulset.go:122 +Mar 7 03:41:04.146: INFO: Deleting all statefulset in ns statefulset-7075 +[AfterEach] [sig-apps] StatefulSet + test/e2e/framework/framework.go:187 +Mar 7 03:41:04.152: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "statefulset-7075" for this suite. 
03/07/23 03:41:04.155 +{"msg":"PASSED [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should list, patch and delete a collection of StatefulSets [Conformance]","completed":222,"skipped":3761,"failed":0} +------------------------------ +• [SLOW TEST] [20.151 seconds] +[sig-apps] StatefulSet +test/e2e/apps/framework.go:23 + Basic StatefulSet functionality [StatefulSetBasic] + test/e2e/apps/statefulset.go:101 + should list, patch and delete a collection of StatefulSets [Conformance] + test/e2e/apps/statefulset.go:906 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-apps] StatefulSet + test/e2e/framework/framework.go:186 + STEP: Creating a kubernetes client 03/07/23 03:40:44.008 + Mar 7 03:40:44.008: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 + STEP: Building a namespace api object, basename statefulset 03/07/23 03:40:44.009 + STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 03:40:44.022 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 03:40:44.026 + [BeforeEach] [sig-apps] StatefulSet + test/e2e/apps/statefulset.go:96 + [BeforeEach] Basic StatefulSet functionality [StatefulSetBasic] + test/e2e/apps/statefulset.go:111 + STEP: Creating service test in namespace statefulset-7075 03/07/23 03:40:44.028 + [It] should list, patch and delete a collection of StatefulSets [Conformance] + test/e2e/apps/statefulset.go:906 + Mar 7 03:40:44.042: INFO: Found 0 stateful pods, waiting for 1 + Mar 7 03:40:54.046: INFO: Waiting for pod test-ss-0 to enter Running - Ready=true, currently Running - Ready=true + STEP: patching the StatefulSet 03/07/23 03:40:54.051 + W0307 03:40:54.079738 22 warnings.go:70] unknown field "spec.template.spec.TerminationGracePeriodSeconds" + Mar 7 03:40:54.107: INFO: Found 1 stateful pods, waiting for 2 + Mar 7 03:41:04.110: INFO: Waiting for pod test-ss-0 to enter Running - Ready=true, currently Running - Ready=true + Mar 7 03:41:04.110: INFO: Waiting for pod test-ss-1 to enter Running - Ready=true, currently Running - Ready=true + STEP: Listing all StatefulSets 03/07/23 03:41:04.116 + STEP: Delete all of the StatefulSets 03/07/23 03:41:04.118 + STEP: Verify that StatefulSets have been deleted 03/07/23 03:41:04.143 + [AfterEach] Basic StatefulSet functionality [StatefulSetBasic] + test/e2e/apps/statefulset.go:122 + Mar 7 03:41:04.146: INFO: Deleting all statefulset in ns statefulset-7075 + [AfterEach] [sig-apps] StatefulSet + test/e2e/framework/framework.go:187 + Mar 7 03:41:04.152: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + STEP: Destroying namespace "statefulset-7075" for this suite. 
03/07/23 03:41:04.155 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSS +------------------------------ +[sig-storage] EmptyDir wrapper volumes + should not conflict [Conformance] + test/e2e/storage/empty_dir_wrapper.go:67 +[BeforeEach] [sig-storage] EmptyDir wrapper volumes + test/e2e/framework/framework.go:186 +STEP: Creating a kubernetes client 03/07/23 03:41:04.16 +Mar 7 03:41:04.160: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 +STEP: Building a namespace api object, basename emptydir-wrapper 03/07/23 03:41:04.162 +STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 03:41:04.175 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 03:41:04.178 +[It] should not conflict [Conformance] + test/e2e/storage/empty_dir_wrapper.go:67 +Mar 7 03:41:04.197: INFO: Waiting up to 5m0s for pod "pod-secrets-aeb7ca59-23e2-492e-aa84-e2a486d2f17c" in namespace "emptydir-wrapper-6900" to be "running and ready" +Mar 7 03:41:04.199: INFO: Pod "pod-secrets-aeb7ca59-23e2-492e-aa84-e2a486d2f17c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.64676ms +Mar 7 03:41:04.199: INFO: The phase of Pod pod-secrets-aeb7ca59-23e2-492e-aa84-e2a486d2f17c is Pending, waiting for it to be Running (with Ready = true) +Mar 7 03:41:06.203: INFO: Pod "pod-secrets-aeb7ca59-23e2-492e-aa84-e2a486d2f17c": Phase="Running", Reason="", readiness=true. Elapsed: 2.005938242s +Mar 7 03:41:06.203: INFO: The phase of Pod pod-secrets-aeb7ca59-23e2-492e-aa84-e2a486d2f17c is Running (Ready = true) +Mar 7 03:41:06.203: INFO: Pod "pod-secrets-aeb7ca59-23e2-492e-aa84-e2a486d2f17c" satisfied condition "running and ready" +STEP: Cleaning up the secret 03/07/23 03:41:06.205 +STEP: Cleaning up the configmap 03/07/23 03:41:06.21 +STEP: Cleaning up the pod 03/07/23 03:41:06.215 +[AfterEach] [sig-storage] EmptyDir wrapper volumes + test/e2e/framework/framework.go:187 +Mar 7 03:41:06.237: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "emptydir-wrapper-6900" for this suite. 03/07/23 03:41:06.247 +{"msg":"PASSED [sig-storage] EmptyDir wrapper volumes should not conflict [Conformance]","completed":223,"skipped":3770,"failed":0} +------------------------------ +• [2.097 seconds] +[sig-storage] EmptyDir wrapper volumes +test/e2e/storage/utils/framework.go:23 + should not conflict [Conformance] + test/e2e/storage/empty_dir_wrapper.go:67 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-storage] EmptyDir wrapper volumes + test/e2e/framework/framework.go:186 + STEP: Creating a kubernetes client 03/07/23 03:41:04.16 + Mar 7 03:41:04.160: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 + STEP: Building a namespace api object, basename emptydir-wrapper 03/07/23 03:41:04.162 + STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 03:41:04.175 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 03:41:04.178 + [It] should not conflict [Conformance] + test/e2e/storage/empty_dir_wrapper.go:67 + Mar 7 03:41:04.197: INFO: Waiting up to 5m0s for pod "pod-secrets-aeb7ca59-23e2-492e-aa84-e2a486d2f17c" in namespace "emptydir-wrapper-6900" to be "running and ready" + Mar 7 03:41:04.199: INFO: Pod "pod-secrets-aeb7ca59-23e2-492e-aa84-e2a486d2f17c": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.64676ms + Mar 7 03:41:04.199: INFO: The phase of Pod pod-secrets-aeb7ca59-23e2-492e-aa84-e2a486d2f17c is Pending, waiting for it to be Running (with Ready = true) + Mar 7 03:41:06.203: INFO: Pod "pod-secrets-aeb7ca59-23e2-492e-aa84-e2a486d2f17c": Phase="Running", Reason="", readiness=true. Elapsed: 2.005938242s + Mar 7 03:41:06.203: INFO: The phase of Pod pod-secrets-aeb7ca59-23e2-492e-aa84-e2a486d2f17c is Running (Ready = true) + Mar 7 03:41:06.203: INFO: Pod "pod-secrets-aeb7ca59-23e2-492e-aa84-e2a486d2f17c" satisfied condition "running and ready" + STEP: Cleaning up the secret 03/07/23 03:41:06.205 + STEP: Cleaning up the configmap 03/07/23 03:41:06.21 + STEP: Cleaning up the pod 03/07/23 03:41:06.215 + [AfterEach] [sig-storage] EmptyDir wrapper volumes + test/e2e/framework/framework.go:187 + Mar 7 03:41:06.237: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + STEP: Destroying namespace "emptydir-wrapper-6900" for this suite. 03/07/23 03:41:06.247 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSS +------------------------------ +[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + should be able to deny custom resource creation, update and deletion [Conformance] + test/e2e/apimachinery/webhook.go:220 +[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/framework/framework.go:186 +STEP: Creating a kubernetes client 03/07/23 03:41:06.259 +Mar 7 03:41:06.259: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 +STEP: Building a namespace api object, basename webhook 03/07/23 03:41:06.259 +STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 03:41:06.278 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 03:41:06.28 +[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/apimachinery/webhook.go:89 +STEP: Setting up server cert 03/07/23 03:41:06.295 +STEP: Create role binding to let webhook read extension-apiserver-authentication 03/07/23 03:41:06.708 +STEP: Deploying the webhook pod 03/07/23 03:41:06.714 +STEP: Wait for the deployment to be ready 03/07/23 03:41:06.721 +Mar 7 03:41:06.730: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set +STEP: Deploying the webhook service 03/07/23 03:41:08.738 +STEP: Verifying the service has paired with the endpoint 03/07/23 03:41:08.751 +Mar 7 03:41:09.752: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 +[It] should be able to deny custom resource creation, update and deletion [Conformance] + test/e2e/apimachinery/webhook.go:220 +Mar 7 03:41:09.755: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 +STEP: Registering the custom resource webhook via the AdmissionRegistration API 03/07/23 03:41:10.276 +STEP: Creating a custom resource that should be denied by the webhook 03/07/23 03:41:10.29 +STEP: Creating a custom resource whose deletion would be denied by the webhook 03/07/23 03:41:12.323 +STEP: Updating the custom resource with disallowed data should be denied 03/07/23 03:41:12.33 +STEP: Deleting the custom resource should be denied 03/07/23 03:41:12.336 +STEP: Remove the offending key and value from the custom resource data 03/07/23 03:41:12.34 +STEP: Deleting the updated custom resource should be successful 03/07/23 03:41:12.346 +[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/framework/framework.go:187 +Mar 7 03:41:12.865: INFO: Waiting up to 3m0s for all (but 0) nodes to be 
ready +STEP: Destroying namespace "webhook-8069" for this suite. 03/07/23 03:41:12.868 +STEP: Destroying namespace "webhook-8069-markers" for this suite. 03/07/23 03:41:12.873 +[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/apimachinery/webhook.go:104 +{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]","completed":224,"skipped":3777,"failed":0} +------------------------------ +• [SLOW TEST] [6.709 seconds] +[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] +test/e2e/apimachinery/framework.go:23 + should be able to deny custom resource creation, update and deletion [Conformance] + test/e2e/apimachinery/webhook.go:220 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/framework/framework.go:186 + STEP: Creating a kubernetes client 03/07/23 03:41:06.259 + Mar 7 03:41:06.259: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 + STEP: Building a namespace api object, basename webhook 03/07/23 03:41:06.259 + STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 03:41:06.278 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 03:41:06.28 + [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/apimachinery/webhook.go:89 + STEP: Setting up server cert 03/07/23 03:41:06.295 + STEP: Create role binding to let webhook read extension-apiserver-authentication 03/07/23 03:41:06.708 + STEP: Deploying the webhook pod 03/07/23 03:41:06.714 + STEP: Wait for the deployment to be ready 03/07/23 03:41:06.721 + Mar 7 03:41:06.730: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set + STEP: Deploying the webhook service 03/07/23 03:41:08.738 + STEP: Verifying the service has paired with the endpoint 03/07/23 03:41:08.751 + Mar 7 03:41:09.752: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 + [It] should be able to deny custom resource creation, update and deletion [Conformance] + test/e2e/apimachinery/webhook.go:220 + Mar 7 03:41:09.755: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 + STEP: Registering the custom resource webhook via the AdmissionRegistration API 03/07/23 03:41:10.276 + STEP: Creating a custom resource that should be denied by the webhook 03/07/23 03:41:10.29 + STEP: Creating a custom resource whose deletion would be denied by the webhook 03/07/23 03:41:12.323 + STEP: Updating the custom resource with disallowed data should be denied 03/07/23 03:41:12.33 + STEP: Deleting the custom resource should be denied 03/07/23 03:41:12.336 + STEP: Remove the offending key and value from the custom resource data 03/07/23 03:41:12.34 + STEP: Deleting the updated custom resource should be successful 03/07/23 03:41:12.346 + [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/framework/framework.go:187 + Mar 7 03:41:12.865: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + STEP: Destroying namespace "webhook-8069" for this suite. 03/07/23 03:41:12.868 + STEP: Destroying namespace "webhook-8069-markers" for this suite. 
03/07/23 03:41:12.873 + [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/apimachinery/webhook.go:104 + << End Captured GinkgoWriter Output +------------------------------ +[sig-node] InitContainer [NodeConformance] + should invoke init containers on a RestartAlways pod [Conformance] + test/e2e/common/node/init_container.go:254 +[BeforeEach] [sig-node] InitContainer [NodeConformance] + test/e2e/framework/framework.go:186 +STEP: Creating a kubernetes client 03/07/23 03:41:12.968 +Mar 7 03:41:12.968: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 +STEP: Building a namespace api object, basename init-container 03/07/23 03:41:12.969 +STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 03:41:13.003 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 03:41:13.011 +[BeforeEach] [sig-node] InitContainer [NodeConformance] + test/e2e/common/node/init_container.go:164 +[It] should invoke init containers on a RestartAlways pod [Conformance] + test/e2e/common/node/init_container.go:254 +STEP: creating the pod 03/07/23 03:41:13.014 +Mar 7 03:41:13.014: INFO: PodSpec: initContainers in spec.initContainers +[AfterEach] [sig-node] InitContainer [NodeConformance] + test/e2e/framework/framework.go:187 +Mar 7 03:41:16.116: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "init-container-6836" for this suite. 03/07/23 03:41:16.119 +{"msg":"PASSED [sig-node] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance]","completed":225,"skipped":3777,"failed":0} +------------------------------ +• [3.157 seconds] +[sig-node] InitContainer [NodeConformance] +test/e2e/common/node/framework.go:23 + should invoke init containers on a RestartAlways pod [Conformance] + test/e2e/common/node/init_container.go:254 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-node] InitContainer [NodeConformance] + test/e2e/framework/framework.go:186 + STEP: Creating a kubernetes client 03/07/23 03:41:12.968 + Mar 7 03:41:12.968: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 + STEP: Building a namespace api object, basename init-container 03/07/23 03:41:12.969 + STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 03:41:13.003 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 03:41:13.011 + [BeforeEach] [sig-node] InitContainer [NodeConformance] + test/e2e/common/node/init_container.go:164 + [It] should invoke init containers on a RestartAlways pod [Conformance] + test/e2e/common/node/init_container.go:254 + STEP: creating the pod 03/07/23 03:41:13.014 + Mar 7 03:41:13.014: INFO: PodSpec: initContainers in spec.initContainers + [AfterEach] [sig-node] InitContainer [NodeConformance] + test/e2e/framework/framework.go:187 + Mar 7 03:41:16.116: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + STEP: Destroying namespace "init-container-6836" for this suite. 03/07/23 03:41:16.119 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSS +------------------------------ +[sig-scheduling] LimitRange + should create a LimitRange with defaults and ensure pod has those defaults applied. 
[Conformance] + test/e2e/scheduling/limit_range.go:57 +[BeforeEach] [sig-scheduling] LimitRange + test/e2e/framework/framework.go:186 +STEP: Creating a kubernetes client 03/07/23 03:41:16.126 +Mar 7 03:41:16.126: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 +STEP: Building a namespace api object, basename limitrange 03/07/23 03:41:16.127 +STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 03:41:16.14 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 03:41:16.143 +[It] should create a LimitRange with defaults and ensure pod has those defaults applied. [Conformance] + test/e2e/scheduling/limit_range.go:57 +STEP: Creating a LimitRange 03/07/23 03:41:16.144 +STEP: Setting up watch 03/07/23 03:41:16.144 +STEP: Submitting a LimitRange 03/07/23 03:41:16.247 +STEP: Verifying LimitRange creation was observed 03/07/23 03:41:16.253 +STEP: Fetching the LimitRange to ensure it has proper values 03/07/23 03:41:16.253 +Mar 7 03:41:16.255: INFO: Verifying requests: expected map[cpu:{{100 -3} {} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {} BinarySI} memory:{{209715200 0} {} BinarySI}] with actual map[cpu:{{100 -3} {} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {} BinarySI} memory:{{209715200 0} {} BinarySI}] +Mar 7 03:41:16.255: INFO: Verifying limits: expected map[cpu:{{500 -3} {} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] with actual map[cpu:{{500 -3} {} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] +STEP: Creating a Pod with no resource requirements 03/07/23 03:41:16.255 +STEP: Ensuring Pod has resource requirements applied from LimitRange 03/07/23 03:41:16.259 +Mar 7 03:41:16.261: INFO: Verifying requests: expected map[cpu:{{100 -3} {} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {} BinarySI} memory:{{209715200 0} {} BinarySI}] with actual map[cpu:{{100 -3} {} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {} BinarySI} memory:{{209715200 0} {} BinarySI}] +Mar 7 03:41:16.261: INFO: Verifying limits: expected map[cpu:{{500 -3} {} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] with actual map[cpu:{{500 -3} {} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] +STEP: Creating a Pod with partial resource requirements 03/07/23 03:41:16.261 +STEP: Ensuring Pod has merged resource requirements applied from LimitRange 03/07/23 03:41:16.266 +Mar 7 03:41:16.269: INFO: Verifying requests: expected map[cpu:{{300 -3} {} 300m DecimalSI} ephemeral-storage:{{161061273600 0} {} 150Gi BinarySI} memory:{{157286400 0} {} 150Mi BinarySI}] with actual map[cpu:{{300 -3} {} 300m DecimalSI} ephemeral-storage:{{161061273600 0} {} 150Gi BinarySI} memory:{{157286400 0} {} 150Mi BinarySI}] +Mar 7 03:41:16.269: INFO: Verifying limits: expected map[cpu:{{300 -3} {} 300m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] with actual map[cpu:{{300 -3} {} 300m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] +STEP: Failing to create a Pod with less than min resources 03/07/23 03:41:16.269 +STEP: Failing to create a Pod with more than max resources 03/07/23 03:41:16.271 +STEP: Updating a LimitRange 03/07/23 03:41:16.273 +STEP: Verifying LimitRange updating is effective 
03/07/23 03:41:16.277 +STEP: Creating a Pod with less than former min resources 03/07/23 03:41:18.279 +STEP: Failing to create a Pod with more than max resources 03/07/23 03:41:18.284 +STEP: Deleting a LimitRange 03/07/23 03:41:18.286 +STEP: Verifying the LimitRange was deleted 03/07/23 03:41:18.293 +Mar 7 03:41:23.296: INFO: limitRange is already deleted +STEP: Creating a Pod with more than former max resources 03/07/23 03:41:23.296 +[AfterEach] [sig-scheduling] LimitRange + test/e2e/framework/framework.go:187 +Mar 7 03:41:23.302: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "limitrange-1438" for this suite. 03/07/23 03:41:23.308 +{"msg":"PASSED [sig-scheduling] LimitRange should create a LimitRange with defaults and ensure pod has those defaults applied. [Conformance]","completed":226,"skipped":3784,"failed":0} +------------------------------ +• [SLOW TEST] [7.187 seconds] +[sig-scheduling] LimitRange +test/e2e/scheduling/framework.go:40 + should create a LimitRange with defaults and ensure pod has those defaults applied. [Conformance] + test/e2e/scheduling/limit_range.go:57 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-scheduling] LimitRange + test/e2e/framework/framework.go:186 + STEP: Creating a kubernetes client 03/07/23 03:41:16.126 + Mar 7 03:41:16.126: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 + STEP: Building a namespace api object, basename limitrange 03/07/23 03:41:16.127 + STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 03:41:16.14 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 03:41:16.143 + [It] should create a LimitRange with defaults and ensure pod has those defaults applied. [Conformance] + test/e2e/scheduling/limit_range.go:57 + STEP: Creating a LimitRange 03/07/23 03:41:16.144 + STEP: Setting up watch 03/07/23 03:41:16.144 + STEP: Submitting a LimitRange 03/07/23 03:41:16.247 + STEP: Verifying LimitRange creation was observed 03/07/23 03:41:16.253 + STEP: Fetching the LimitRange to ensure it has proper values 03/07/23 03:41:16.253 + Mar 7 03:41:16.255: INFO: Verifying requests: expected map[cpu:{{100 -3} {} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {} BinarySI} memory:{{209715200 0} {} BinarySI}] with actual map[cpu:{{100 -3} {} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {} BinarySI} memory:{{209715200 0} {} BinarySI}] + Mar 7 03:41:16.255: INFO: Verifying limits: expected map[cpu:{{500 -3} {} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] with actual map[cpu:{{500 -3} {} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] + STEP: Creating a Pod with no resource requirements 03/07/23 03:41:16.255 + STEP: Ensuring Pod has resource requirements applied from LimitRange 03/07/23 03:41:16.259 + Mar 7 03:41:16.261: INFO: Verifying requests: expected map[cpu:{{100 -3} {} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {} BinarySI} memory:{{209715200 0} {} BinarySI}] with actual map[cpu:{{100 -3} {} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {} BinarySI} memory:{{209715200 0} {} BinarySI}] + Mar 7 03:41:16.261: INFO: Verifying limits: expected map[cpu:{{500 -3} {} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] with actual map[cpu:{{500 -3} {} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} 
memory:{{524288000 0} {} 500Mi BinarySI}] + STEP: Creating a Pod with partial resource requirements 03/07/23 03:41:16.261 + STEP: Ensuring Pod has merged resource requirements applied from LimitRange 03/07/23 03:41:16.266 + Mar 7 03:41:16.269: INFO: Verifying requests: expected map[cpu:{{300 -3} {} 300m DecimalSI} ephemeral-storage:{{161061273600 0} {} 150Gi BinarySI} memory:{{157286400 0} {} 150Mi BinarySI}] with actual map[cpu:{{300 -3} {} 300m DecimalSI} ephemeral-storage:{{161061273600 0} {} 150Gi BinarySI} memory:{{157286400 0} {} 150Mi BinarySI}] + Mar 7 03:41:16.269: INFO: Verifying limits: expected map[cpu:{{300 -3} {} 300m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] with actual map[cpu:{{300 -3} {} 300m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] + STEP: Failing to create a Pod with less than min resources 03/07/23 03:41:16.269 + STEP: Failing to create a Pod with more than max resources 03/07/23 03:41:16.271 + STEP: Updating a LimitRange 03/07/23 03:41:16.273 + STEP: Verifying LimitRange updating is effective 03/07/23 03:41:16.277 + STEP: Creating a Pod with less than former min resources 03/07/23 03:41:18.279 + STEP: Failing to create a Pod with more than max resources 03/07/23 03:41:18.284 + STEP: Deleting a LimitRange 03/07/23 03:41:18.286 + STEP: Verifying the LimitRange was deleted 03/07/23 03:41:18.293 + Mar 7 03:41:23.296: INFO: limitRange is already deleted + STEP: Creating a Pod with more than former max resources 03/07/23 03:41:23.296 + [AfterEach] [sig-scheduling] LimitRange + test/e2e/framework/framework.go:187 + Mar 7 03:41:23.302: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + STEP: Destroying namespace "limitrange-1438" for this suite. 
03/07/23 03:41:23.308 + << End Captured GinkgoWriter Output +------------------------------ +S +------------------------------ +[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] + should be able to convert a non homogeneous list of CRs [Conformance] + test/e2e/apimachinery/crd_conversion_webhook.go:184 +[BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] + test/e2e/framework/framework.go:186 +STEP: Creating a kubernetes client 03/07/23 03:41:23.313 +Mar 7 03:41:23.313: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 +STEP: Building a namespace api object, basename crd-webhook 03/07/23 03:41:23.315 +STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 03:41:23.329 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 03:41:23.331 +[BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] + test/e2e/apimachinery/crd_conversion_webhook.go:128 +STEP: Setting up server cert 03/07/23 03:41:23.333 +STEP: Create role binding to let cr conversion webhook read extension-apiserver-authentication 03/07/23 03:41:23.748 +STEP: Deploying the custom resource conversion webhook pod 03/07/23 03:41:23.765 +STEP: Wait for the deployment to be ready 03/07/23 03:41:23.792 +Mar 7 03:41:23.805: INFO: deployment "sample-crd-conversion-webhook-deployment" doesn't have the required revision set +STEP: Deploying the webhook service 03/07/23 03:41:25.812 +STEP: Verifying the service has paired with the endpoint 03/07/23 03:41:25.827 +Mar 7 03:41:26.827: INFO: Waiting for amount of service:e2e-test-crd-conversion-webhook endpoints to be 1 +[It] should be able to convert a non homogeneous list of CRs [Conformance] + test/e2e/apimachinery/crd_conversion_webhook.go:184 +Mar 7 03:41:26.830: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 +STEP: Creating a v1 custom resource 03/07/23 03:41:29.404 +STEP: Create a v2 custom resource 03/07/23 03:41:29.415 +STEP: List CRs in v1 03/07/23 03:41:29.418 +STEP: List CRs in v2 03/07/23 03:41:29.464 +[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] + test/e2e/framework/framework.go:187 +Mar 7 03:41:29.980: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "crd-webhook-1678" for this suite. 
03/07/23 03:41:29.984 +[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] + test/e2e/apimachinery/crd_conversion_webhook.go:139 +{"msg":"PASSED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","completed":227,"skipped":3785,"failed":0} +------------------------------ +• [SLOW TEST] [6.728 seconds] +[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] +test/e2e/apimachinery/framework.go:23 + should be able to convert a non homogeneous list of CRs [Conformance] + test/e2e/apimachinery/crd_conversion_webhook.go:184 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] + test/e2e/framework/framework.go:186 + STEP: Creating a kubernetes client 03/07/23 03:41:23.313 + Mar 7 03:41:23.313: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 + STEP: Building a namespace api object, basename crd-webhook 03/07/23 03:41:23.315 + STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 03:41:23.329 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 03:41:23.331 + [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] + test/e2e/apimachinery/crd_conversion_webhook.go:128 + STEP: Setting up server cert 03/07/23 03:41:23.333 + STEP: Create role binding to let cr conversion webhook read extension-apiserver-authentication 03/07/23 03:41:23.748 + STEP: Deploying the custom resource conversion webhook pod 03/07/23 03:41:23.765 + STEP: Wait for the deployment to be ready 03/07/23 03:41:23.792 + Mar 7 03:41:23.805: INFO: deployment "sample-crd-conversion-webhook-deployment" doesn't have the required revision set + STEP: Deploying the webhook service 03/07/23 03:41:25.812 + STEP: Verifying the service has paired with the endpoint 03/07/23 03:41:25.827 + Mar 7 03:41:26.827: INFO: Waiting for amount of service:e2e-test-crd-conversion-webhook endpoints to be 1 + [It] should be able to convert a non homogeneous list of CRs [Conformance] + test/e2e/apimachinery/crd_conversion_webhook.go:184 + Mar 7 03:41:26.830: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 + STEP: Creating a v1 custom resource 03/07/23 03:41:29.404 + STEP: Create a v2 custom resource 03/07/23 03:41:29.415 + STEP: List CRs in v1 03/07/23 03:41:29.418 + STEP: List CRs in v2 03/07/23 03:41:29.464 + [AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] + test/e2e/framework/framework.go:187 + Mar 7 03:41:29.980: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + STEP: Destroying namespace "crd-webhook-1678" for this suite. 
03/07/23 03:41:29.984 + [AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] + test/e2e/apimachinery/crd_conversion_webhook.go:139 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSS +------------------------------ +[sig-cli] Kubectl client Kubectl run pod + should create a pod from an image when restart is Never [Conformance] + test/e2e/kubectl/kubectl.go:1711 +[BeforeEach] [sig-cli] Kubectl client + test/e2e/framework/framework.go:186 +STEP: Creating a kubernetes client 03/07/23 03:41:30.042 +Mar 7 03:41:30.043: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 +STEP: Building a namespace api object, basename kubectl 03/07/23 03:41:30.045 +STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 03:41:30.079 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 03:41:30.085 +[BeforeEach] [sig-cli] Kubectl client + test/e2e/kubectl/kubectl.go:272 +[BeforeEach] Kubectl run pod + test/e2e/kubectl/kubectl.go:1698 +[It] should create a pod from an image when restart is Never [Conformance] + test/e2e/kubectl/kubectl.go:1711 +STEP: running the image registry.k8s.io/e2e-test-images/httpd:2.4.38-2 03/07/23 03:41:30.086 +Mar 7 03:41:30.087: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1093879902 --namespace=kubectl-851 run e2e-test-httpd-pod --restart=Never --pod-running-timeout=2m0s --image=registry.k8s.io/e2e-test-images/httpd:2.4.38-2' +Mar 7 03:41:30.235: INFO: stderr: "" +Mar 7 03:41:30.235: INFO: stdout: "pod/e2e-test-httpd-pod created\n" +STEP: verifying the pod e2e-test-httpd-pod was created 03/07/23 03:41:30.236 +[AfterEach] Kubectl run pod + test/e2e/kubectl/kubectl.go:1702 +Mar 7 03:41:30.239: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1093879902 --namespace=kubectl-851 delete pods e2e-test-httpd-pod' +Mar 7 03:41:32.146: INFO: stderr: "" +Mar 7 03:41:32.147: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n" +[AfterEach] [sig-cli] Kubectl client + test/e2e/framework/framework.go:187 +Mar 7 03:41:32.147: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "kubectl-851" for this suite. 
03/07/23 03:41:32.15 +{"msg":"PASSED [sig-cli] Kubectl client Kubectl run pod should create a pod from an image when restart is Never [Conformance]","completed":228,"skipped":3794,"failed":0} +------------------------------ +• [2.112 seconds] +[sig-cli] Kubectl client +test/e2e/kubectl/framework.go:23 + Kubectl run pod + test/e2e/kubectl/kubectl.go:1695 + should create a pod from an image when restart is Never [Conformance] + test/e2e/kubectl/kubectl.go:1711 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-cli] Kubectl client + test/e2e/framework/framework.go:186 + STEP: Creating a kubernetes client 03/07/23 03:41:30.042 + Mar 7 03:41:30.043: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 + STEP: Building a namespace api object, basename kubectl 03/07/23 03:41:30.045 + STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 03:41:30.079 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 03:41:30.085 + [BeforeEach] [sig-cli] Kubectl client + test/e2e/kubectl/kubectl.go:272 + [BeforeEach] Kubectl run pod + test/e2e/kubectl/kubectl.go:1698 + [It] should create a pod from an image when restart is Never [Conformance] + test/e2e/kubectl/kubectl.go:1711 + STEP: running the image registry.k8s.io/e2e-test-images/httpd:2.4.38-2 03/07/23 03:41:30.086 + Mar 7 03:41:30.087: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1093879902 --namespace=kubectl-851 run e2e-test-httpd-pod --restart=Never --pod-running-timeout=2m0s --image=registry.k8s.io/e2e-test-images/httpd:2.4.38-2' + Mar 7 03:41:30.235: INFO: stderr: "" + Mar 7 03:41:30.235: INFO: stdout: "pod/e2e-test-httpd-pod created\n" + STEP: verifying the pod e2e-test-httpd-pod was created 03/07/23 03:41:30.236 + [AfterEach] Kubectl run pod + test/e2e/kubectl/kubectl.go:1702 + Mar 7 03:41:30.239: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1093879902 --namespace=kubectl-851 delete pods e2e-test-httpd-pod' + Mar 7 03:41:32.146: INFO: stderr: "" + Mar 7 03:41:32.147: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n" + [AfterEach] [sig-cli] Kubectl client + test/e2e/framework/framework.go:187 + Mar 7 03:41:32.147: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + STEP: Destroying namespace "kubectl-851" for this suite. 
03/07/23 03:41:32.15 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSSSSSSS +------------------------------ +[sig-storage] Projected configMap + should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] + test/e2e/common/storage/projected_configmap.go:108 +[BeforeEach] [sig-storage] Projected configMap + test/e2e/framework/framework.go:186 +STEP: Creating a kubernetes client 03/07/23 03:41:32.155 +Mar 7 03:41:32.155: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 +STEP: Building a namespace api object, basename projected 03/07/23 03:41:32.156 +STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 03:41:32.172 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 03:41:32.175 +[It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] + test/e2e/common/storage/projected_configmap.go:108 +STEP: Creating configMap with name projected-configmap-test-volume-map-14eea8b5-80e2-4a72-930c-5b4957f546aa 03/07/23 03:41:32.177 +STEP: Creating a pod to test consume configMaps 03/07/23 03:41:32.18 +Mar 7 03:41:32.188: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-50a6360b-19c1-464c-856c-4aefb3edbd00" in namespace "projected-7907" to be "Succeeded or Failed" +Mar 7 03:41:32.191: INFO: Pod "pod-projected-configmaps-50a6360b-19c1-464c-856c-4aefb3edbd00": Phase="Pending", Reason="", readiness=false. Elapsed: 3.290744ms +Mar 7 03:41:34.194: INFO: Pod "pod-projected-configmaps-50a6360b-19c1-464c-856c-4aefb3edbd00": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006591518s +Mar 7 03:41:36.194: INFO: Pod "pod-projected-configmaps-50a6360b-19c1-464c-856c-4aefb3edbd00": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.0060812s +STEP: Saw pod success 03/07/23 03:41:36.194 +Mar 7 03:41:36.194: INFO: Pod "pod-projected-configmaps-50a6360b-19c1-464c-856c-4aefb3edbd00" satisfied condition "Succeeded or Failed" +Mar 7 03:41:36.197: INFO: Trying to get logs from node node-2 pod pod-projected-configmaps-50a6360b-19c1-464c-856c-4aefb3edbd00 container agnhost-container: +STEP: delete the pod 03/07/23 03:41:36.202 +Mar 7 03:41:36.240: INFO: Waiting for pod pod-projected-configmaps-50a6360b-19c1-464c-856c-4aefb3edbd00 to disappear +Mar 7 03:41:36.243: INFO: Pod pod-projected-configmaps-50a6360b-19c1-464c-856c-4aefb3edbd00 no longer exists +[AfterEach] [sig-storage] Projected configMap + test/e2e/framework/framework.go:187 +Mar 7 03:41:36.243: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "projected-7907" for this suite. 
03/07/23 03:41:36.246 +{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]","completed":229,"skipped":3810,"failed":0} +------------------------------ +• [4.097 seconds] +[sig-storage] Projected configMap +test/e2e/common/storage/framework.go:23 + should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] + test/e2e/common/storage/projected_configmap.go:108 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-storage] Projected configMap + test/e2e/framework/framework.go:186 + STEP: Creating a kubernetes client 03/07/23 03:41:32.155 + Mar 7 03:41:32.155: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 + STEP: Building a namespace api object, basename projected 03/07/23 03:41:32.156 + STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 03:41:32.172 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 03:41:32.175 + [It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] + test/e2e/common/storage/projected_configmap.go:108 + STEP: Creating configMap with name projected-configmap-test-volume-map-14eea8b5-80e2-4a72-930c-5b4957f546aa 03/07/23 03:41:32.177 + STEP: Creating a pod to test consume configMaps 03/07/23 03:41:32.18 + Mar 7 03:41:32.188: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-50a6360b-19c1-464c-856c-4aefb3edbd00" in namespace "projected-7907" to be "Succeeded or Failed" + Mar 7 03:41:32.191: INFO: Pod "pod-projected-configmaps-50a6360b-19c1-464c-856c-4aefb3edbd00": Phase="Pending", Reason="", readiness=false. Elapsed: 3.290744ms + Mar 7 03:41:34.194: INFO: Pod "pod-projected-configmaps-50a6360b-19c1-464c-856c-4aefb3edbd00": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006591518s + Mar 7 03:41:36.194: INFO: Pod "pod-projected-configmaps-50a6360b-19c1-464c-856c-4aefb3edbd00": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.0060812s + STEP: Saw pod success 03/07/23 03:41:36.194 + Mar 7 03:41:36.194: INFO: Pod "pod-projected-configmaps-50a6360b-19c1-464c-856c-4aefb3edbd00" satisfied condition "Succeeded or Failed" + Mar 7 03:41:36.197: INFO: Trying to get logs from node node-2 pod pod-projected-configmaps-50a6360b-19c1-464c-856c-4aefb3edbd00 container agnhost-container: + STEP: delete the pod 03/07/23 03:41:36.202 + Mar 7 03:41:36.240: INFO: Waiting for pod pod-projected-configmaps-50a6360b-19c1-464c-856c-4aefb3edbd00 to disappear + Mar 7 03:41:36.243: INFO: Pod pod-projected-configmaps-50a6360b-19c1-464c-856c-4aefb3edbd00 no longer exists + [AfterEach] [sig-storage] Projected configMap + test/e2e/framework/framework.go:187 + Mar 7 03:41:36.243: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + STEP: Destroying namespace "projected-7907" for this suite. 
03/07/23 03:41:36.246 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-node] RuntimeClass + should schedule a Pod requesting a RuntimeClass without PodOverhead [NodeConformance] [Conformance] + test/e2e/common/node/runtimeclass.go:104 +[BeforeEach] [sig-node] RuntimeClass + test/e2e/framework/framework.go:186 +STEP: Creating a kubernetes client 03/07/23 03:41:36.257 +Mar 7 03:41:36.257: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 +STEP: Building a namespace api object, basename runtimeclass 03/07/23 03:41:36.258 +STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 03:41:36.271 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 03:41:36.273 +[It] should schedule a Pod requesting a RuntimeClass without PodOverhead [NodeConformance] [Conformance] + test/e2e/common/node/runtimeclass.go:104 +Mar 7 03:41:36.284: INFO: Waiting up to 1m20s for at least 1 pods in namespace runtimeclass-8450 to be scheduled +Mar 7 03:41:36.286: INFO: 1 pods are not scheduled: [runtimeclass-8450/test-runtimeclass-runtimeclass-8450-preconfigured-handler-czgjm(249b3b5e-e095-48ec-a26d-142e09eea6f2)] +[AfterEach] [sig-node] RuntimeClass + test/e2e/framework/framework.go:187 +Mar 7 03:41:38.315: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "runtimeclass-8450" for this suite. 03/07/23 03:41:38.318 +{"msg":"PASSED [sig-node] RuntimeClass should schedule a Pod requesting a RuntimeClass without PodOverhead [NodeConformance] [Conformance]","completed":230,"skipped":3989,"failed":0} +------------------------------ +• [2.073 seconds] +[sig-node] RuntimeClass +test/e2e/common/node/framework.go:23 + should schedule a Pod requesting a RuntimeClass without PodOverhead [NodeConformance] [Conformance] + test/e2e/common/node/runtimeclass.go:104 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-node] RuntimeClass + test/e2e/framework/framework.go:186 + STEP: Creating a kubernetes client 03/07/23 03:41:36.257 + Mar 7 03:41:36.257: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 + STEP: Building a namespace api object, basename runtimeclass 03/07/23 03:41:36.258 + STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 03:41:36.271 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 03:41:36.273 + [It] should schedule a Pod requesting a RuntimeClass without PodOverhead [NodeConformance] [Conformance] + test/e2e/common/node/runtimeclass.go:104 + Mar 7 03:41:36.284: INFO: Waiting up to 1m20s for at least 1 pods in namespace runtimeclass-8450 to be scheduled + Mar 7 03:41:36.286: INFO: 1 pods are not scheduled: [runtimeclass-8450/test-runtimeclass-runtimeclass-8450-preconfigured-handler-czgjm(249b3b5e-e095-48ec-a26d-142e09eea6f2)] + [AfterEach] [sig-node] RuntimeClass + test/e2e/framework/framework.go:187 + Mar 7 03:41:38.315: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + STEP: Destroying namespace "runtimeclass-8450" for this suite. 
03/07/23 03:41:38.318 + << End Captured GinkgoWriter Output +------------------------------ +[sig-network] Services + should provide secure master service [Conformance] + test/e2e/network/service.go:781 +[BeforeEach] [sig-network] Services + test/e2e/framework/framework.go:186 +STEP: Creating a kubernetes client 03/07/23 03:41:38.33 +Mar 7 03:41:38.331: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 +STEP: Building a namespace api object, basename services 03/07/23 03:41:38.332 +STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 03:41:38.345 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 03:41:38.348 +[BeforeEach] [sig-network] Services + test/e2e/network/service.go:758 +[It] should provide secure master service [Conformance] + test/e2e/network/service.go:781 +[AfterEach] [sig-network] Services + test/e2e/framework/framework.go:187 +Mar 7 03:41:38.352: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "services-1633" for this suite. 03/07/23 03:41:38.355 +[AfterEach] [sig-network] Services + test/e2e/network/service.go:762 +{"msg":"PASSED [sig-network] Services should provide secure master service [Conformance]","completed":231,"skipped":3989,"failed":0} +------------------------------ +• [0.029 seconds] +[sig-network] Services +test/e2e/network/common/framework.go:23 + should provide secure master service [Conformance] + test/e2e/network/service.go:781 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-network] Services + test/e2e/framework/framework.go:186 + STEP: Creating a kubernetes client 03/07/23 03:41:38.33 + Mar 7 03:41:38.331: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 + STEP: Building a namespace api object, basename services 03/07/23 03:41:38.332 + STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 03:41:38.345 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 03:41:38.348 + [BeforeEach] [sig-network] Services + test/e2e/network/service.go:758 + [It] should provide secure master service [Conformance] + test/e2e/network/service.go:781 + [AfterEach] [sig-network] Services + test/e2e/framework/framework.go:187 + Mar 7 03:41:38.352: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + STEP: Destroying namespace "services-1633" for this suite. 
03/07/23 03:41:38.355 + [AfterEach] [sig-network] Services + test/e2e/network/service.go:762 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSS +------------------------------ +[sig-node] Kubelet when scheduling a busybox command that always fails in a pod + should have an terminated reason [NodeConformance] [Conformance] + test/e2e/common/node/kubelet.go:110 +[BeforeEach] [sig-node] Kubelet + test/e2e/framework/framework.go:186 +STEP: Creating a kubernetes client 03/07/23 03:41:38.364 +Mar 7 03:41:38.364: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 +STEP: Building a namespace api object, basename kubelet-test 03/07/23 03:41:38.365 +STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 03:41:38.382 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 03:41:38.385 +[BeforeEach] [sig-node] Kubelet + test/e2e/common/node/kubelet.go:41 +[BeforeEach] when scheduling a busybox command that always fails in a pod + test/e2e/common/node/kubelet.go:85 +[It] should have an terminated reason [NodeConformance] [Conformance] + test/e2e/common/node/kubelet.go:110 +[AfterEach] [sig-node] Kubelet + test/e2e/framework/framework.go:187 +Mar 7 03:41:42.398: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "kubelet-test-119" for this suite. 03/07/23 03:41:42.401 +{"msg":"PASSED [sig-node] Kubelet when scheduling a busybox command that always fails in a pod should have an terminated reason [NodeConformance] [Conformance]","completed":232,"skipped":3995,"failed":0} +------------------------------ +• [4.042 seconds] +[sig-node] Kubelet +test/e2e/common/node/framework.go:23 + when scheduling a busybox command that always fails in a pod + test/e2e/common/node/kubelet.go:82 + should have an terminated reason [NodeConformance] [Conformance] + test/e2e/common/node/kubelet.go:110 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-node] Kubelet + test/e2e/framework/framework.go:186 + STEP: Creating a kubernetes client 03/07/23 03:41:38.364 + Mar 7 03:41:38.364: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 + STEP: Building a namespace api object, basename kubelet-test 03/07/23 03:41:38.365 + STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 03:41:38.382 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 03:41:38.385 + [BeforeEach] [sig-node] Kubelet + test/e2e/common/node/kubelet.go:41 + [BeforeEach] when scheduling a busybox command that always fails in a pod + test/e2e/common/node/kubelet.go:85 + [It] should have an terminated reason [NodeConformance] [Conformance] + test/e2e/common/node/kubelet.go:110 + [AfterEach] [sig-node] Kubelet + test/e2e/framework/framework.go:187 + Mar 7 03:41:42.398: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + STEP: Destroying namespace "kubelet-test-119" for this suite. 
03/07/23 03:41:42.401 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSSSS +------------------------------ +[sig-apps] Deployment + should validate Deployment Status endpoints [Conformance] + test/e2e/apps/deployment.go:479 +[BeforeEach] [sig-apps] Deployment + test/e2e/framework/framework.go:186 +STEP: Creating a kubernetes client 03/07/23 03:41:42.407 +Mar 7 03:41:42.407: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 +STEP: Building a namespace api object, basename deployment 03/07/23 03:41:42.407 +STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 03:41:42.419 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 03:41:42.422 +[BeforeEach] [sig-apps] Deployment + test/e2e/apps/deployment.go:91 +[It] should validate Deployment Status endpoints [Conformance] + test/e2e/apps/deployment.go:479 +STEP: creating a Deployment 03/07/23 03:41:42.428 +Mar 7 03:41:42.428: INFO: Creating simple deployment test-deployment-qd7xp +Mar 7 03:41:42.438: INFO: deployment "test-deployment-qd7xp" doesn't have the required revision set +STEP: Getting /status 03/07/23 03:41:44.447 +Mar 7 03:41:44.451: INFO: Deployment test-deployment-qd7xp has Conditions: [{Available True 2023-03-07 03:41:44 +0000 UTC 2023-03-07 03:41:44 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} {Progressing True 2023-03-07 03:41:44 +0000 UTC 2023-03-07 03:41:42 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-deployment-qd7xp-777898ffcc" has successfully progressed.}] +STEP: updating Deployment Status 03/07/23 03:41:44.451 +Mar 7 03:41:44.457: INFO: updatedStatus.Conditions: []v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:time.Date(2023, time.March, 7, 3, 41, 44, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 7, 3, 41, 44, 0, time.Local), Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.March, 7, 3, 41, 44, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 7, 3, 41, 42, 0, time.Local), Reason:"NewReplicaSetAvailable", Message:"ReplicaSet \"test-deployment-qd7xp-777898ffcc\" has successfully progressed."}, v1.DeploymentCondition{Type:"StatusUpdate", Status:"True", LastUpdateTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), LastTransitionTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Reason:"E2E", Message:"Set from e2e test"}} +STEP: watching for the Deployment status to be updated 03/07/23 03:41:44.457 +Mar 7 03:41:44.458: INFO: Observed &Deployment event: ADDED +Mar 7 03:41:44.458: INFO: Observed Deployment test-deployment-qd7xp in namespace deployment-6174 with annotations: map[deployment.kubernetes.io/revision:1] & Conditions: {Progressing True 2023-03-07 03:41:42 +0000 UTC 2023-03-07 03:41:42 +0000 UTC NewReplicaSetCreated Created new replica set "test-deployment-qd7xp-777898ffcc"} +Mar 7 03:41:44.458: INFO: Observed &Deployment event: MODIFIED +Mar 7 03:41:44.459: INFO: Observed Deployment test-deployment-qd7xp in namespace deployment-6174 with annotations: map[deployment.kubernetes.io/revision:1] & Conditions: {Progressing True 2023-03-07 03:41:42 +0000 UTC 2023-03-07 03:41:42 +0000 UTC NewReplicaSetCreated Created new replica set "test-deployment-qd7xp-777898ffcc"} +Mar 7 03:41:44.459: INFO: Observed Deployment test-deployment-qd7xp in namespace deployment-6174 with annotations: 
map[deployment.kubernetes.io/revision:1] & Conditions: {Available False 2023-03-07 03:41:42 +0000 UTC 2023-03-07 03:41:42 +0000 UTC MinimumReplicasUnavailable Deployment does not have minimum availability.} +Mar 7 03:41:44.459: INFO: Observed &Deployment event: MODIFIED +Mar 7 03:41:44.459: INFO: Observed Deployment test-deployment-qd7xp in namespace deployment-6174 with annotations: map[deployment.kubernetes.io/revision:1] & Conditions: {Available False 2023-03-07 03:41:42 +0000 UTC 2023-03-07 03:41:42 +0000 UTC MinimumReplicasUnavailable Deployment does not have minimum availability.} +Mar 7 03:41:44.459: INFO: Observed Deployment test-deployment-qd7xp in namespace deployment-6174 with annotations: map[deployment.kubernetes.io/revision:1] & Conditions: {Progressing True 2023-03-07 03:41:42 +0000 UTC 2023-03-07 03:41:42 +0000 UTC ReplicaSetUpdated ReplicaSet "test-deployment-qd7xp-777898ffcc" is progressing.} +Mar 7 03:41:44.459: INFO: Observed &Deployment event: MODIFIED +Mar 7 03:41:44.459: INFO: Observed Deployment test-deployment-qd7xp in namespace deployment-6174 with annotations: map[deployment.kubernetes.io/revision:1] & Conditions: {Available True 2023-03-07 03:41:44 +0000 UTC 2023-03-07 03:41:44 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} +Mar 7 03:41:44.459: INFO: Observed Deployment test-deployment-qd7xp in namespace deployment-6174 with annotations: map[deployment.kubernetes.io/revision:1] & Conditions: {Progressing True 2023-03-07 03:41:44 +0000 UTC 2023-03-07 03:41:42 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-deployment-qd7xp-777898ffcc" has successfully progressed.} +Mar 7 03:41:44.459: INFO: Observed &Deployment event: MODIFIED +Mar 7 03:41:44.459: INFO: Observed Deployment test-deployment-qd7xp in namespace deployment-6174 with annotations: map[deployment.kubernetes.io/revision:1] & Conditions: {Available True 2023-03-07 03:41:44 +0000 UTC 2023-03-07 03:41:44 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} +Mar 7 03:41:44.459: INFO: Observed Deployment test-deployment-qd7xp in namespace deployment-6174 with annotations: map[deployment.kubernetes.io/revision:1] & Conditions: {Progressing True 2023-03-07 03:41:44 +0000 UTC 2023-03-07 03:41:42 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-deployment-qd7xp-777898ffcc" has successfully progressed.} +Mar 7 03:41:44.459: INFO: Found Deployment test-deployment-qd7xp in namespace deployment-6174 with labels: map[e2e:testing name:httpd] annotations: map[deployment.kubernetes.io/revision:1] & Conditions: {StatusUpdate True 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC E2E Set from e2e test} +Mar 7 03:41:44.459: INFO: Deployment test-deployment-qd7xp has an updated status +STEP: patching the Statefulset Status 03/07/23 03:41:44.459 +Mar 7 03:41:44.459: INFO: Patch payload: {"status":{"conditions":[{"type":"StatusPatched","status":"True"}]}} +Mar 7 03:41:44.464: INFO: Patched status conditions: []v1.DeploymentCondition{v1.DeploymentCondition{Type:"StatusPatched", Status:"True", LastUpdateTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), LastTransitionTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Reason:"", Message:""}} +STEP: watching for the Deployment status to be patched 03/07/23 03:41:44.464 +Mar 7 03:41:44.465: INFO: Observed &Deployment event: ADDED +Mar 7 03:41:44.465: INFO: Observed deployment test-deployment-qd7xp in namespace deployment-6174 with annotations: map[deployment.kubernetes.io/revision:1] & Conditions: {Progressing 
True 2023-03-07 03:41:42 +0000 UTC 2023-03-07 03:41:42 +0000 UTC NewReplicaSetCreated Created new replica set "test-deployment-qd7xp-777898ffcc"} +Mar 7 03:41:44.465: INFO: Observed &Deployment event: MODIFIED +Mar 7 03:41:44.465: INFO: Observed deployment test-deployment-qd7xp in namespace deployment-6174 with annotations: map[deployment.kubernetes.io/revision:1] & Conditions: {Progressing True 2023-03-07 03:41:42 +0000 UTC 2023-03-07 03:41:42 +0000 UTC NewReplicaSetCreated Created new replica set "test-deployment-qd7xp-777898ffcc"} +Mar 7 03:41:44.465: INFO: Observed deployment test-deployment-qd7xp in namespace deployment-6174 with annotations: map[deployment.kubernetes.io/revision:1] & Conditions: {Available False 2023-03-07 03:41:42 +0000 UTC 2023-03-07 03:41:42 +0000 UTC MinimumReplicasUnavailable Deployment does not have minimum availability.} +Mar 7 03:41:44.465: INFO: Observed &Deployment event: MODIFIED +Mar 7 03:41:44.465: INFO: Observed deployment test-deployment-qd7xp in namespace deployment-6174 with annotations: map[deployment.kubernetes.io/revision:1] & Conditions: {Available False 2023-03-07 03:41:42 +0000 UTC 2023-03-07 03:41:42 +0000 UTC MinimumReplicasUnavailable Deployment does not have minimum availability.} +Mar 7 03:41:44.465: INFO: Observed deployment test-deployment-qd7xp in namespace deployment-6174 with annotations: map[deployment.kubernetes.io/revision:1] & Conditions: {Progressing True 2023-03-07 03:41:42 +0000 UTC 2023-03-07 03:41:42 +0000 UTC ReplicaSetUpdated ReplicaSet "test-deployment-qd7xp-777898ffcc" is progressing.} +Mar 7 03:41:44.465: INFO: Observed &Deployment event: MODIFIED +Mar 7 03:41:44.465: INFO: Observed deployment test-deployment-qd7xp in namespace deployment-6174 with annotations: map[deployment.kubernetes.io/revision:1] & Conditions: {Available True 2023-03-07 03:41:44 +0000 UTC 2023-03-07 03:41:44 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} +Mar 7 03:41:44.465: INFO: Observed deployment test-deployment-qd7xp in namespace deployment-6174 with annotations: map[deployment.kubernetes.io/revision:1] & Conditions: {Progressing True 2023-03-07 03:41:44 +0000 UTC 2023-03-07 03:41:42 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-deployment-qd7xp-777898ffcc" has successfully progressed.} +Mar 7 03:41:44.466: INFO: Observed &Deployment event: MODIFIED +Mar 7 03:41:44.466: INFO: Observed deployment test-deployment-qd7xp in namespace deployment-6174 with annotations: map[deployment.kubernetes.io/revision:1] & Conditions: {Available True 2023-03-07 03:41:44 +0000 UTC 2023-03-07 03:41:44 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} +Mar 7 03:41:44.466: INFO: Observed deployment test-deployment-qd7xp in namespace deployment-6174 with annotations: map[deployment.kubernetes.io/revision:1] & Conditions: {Progressing True 2023-03-07 03:41:44 +0000 UTC 2023-03-07 03:41:42 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-deployment-qd7xp-777898ffcc" has successfully progressed.} +Mar 7 03:41:44.466: INFO: Observed deployment test-deployment-qd7xp in namespace deployment-6174 with annotations: map[deployment.kubernetes.io/revision:1] & Conditions: {StatusUpdate True 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC E2E Set from e2e test} +Mar 7 03:41:44.466: INFO: Observed &Deployment event: MODIFIED +Mar 7 03:41:44.466: INFO: Found deployment test-deployment-qd7xp in namespace deployment-6174 with labels: map[e2e:testing name:httpd] annotations: map[deployment.kubernetes.io/revision:1] & 
Conditions: {StatusPatched True 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC } +Mar 7 03:41:44.466: INFO: Deployment test-deployment-qd7xp has a patched status +[AfterEach] [sig-apps] Deployment + test/e2e/apps/deployment.go:84 +Mar 7 03:41:44.468: INFO: Deployment "test-deployment-qd7xp": +&Deployment{ObjectMeta:{test-deployment-qd7xp deployment-6174 bbd543cd-df8e-4fcb-9874-2505bd0598a4 65169 1 2023-03-07 03:41:42 +0000 UTC map[e2e:testing name:httpd] map[deployment.kubernetes.io/revision:1] [] [] [{e2e.test Update apps/v1 2023-03-07 03:41:42 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:e2e":{},"f:name":{}}},"f:spec":{"f:progressDeadlineSeconds":{},"f:replicas":{},"f:revisionHistoryLimit":{},"f:selector":{},"f:strategy":{"f:rollingUpdate":{".":{},"f:maxSurge":{},"f:maxUnavailable":{}},"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:e2e":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} } {e2e.test Update apps/v1 2023-03-07 03:41:44 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"StatusPatched\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:status":{},"f:type":{}}}}} status} {kube-controller-manager Update apps/v1 2023-03-07 03:41:44 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}}},"f:status":{"f:availableReplicas":{},"f:conditions":{".":{},"k:{\"type\":\"Progressing\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{},"f:updatedReplicas":{}}} status}]},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{e2e: testing,name: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[e2e:testing name:httpd] map[] [] [] []} {[] [] [{httpd registry.k8s.io/e2e-test-images/httpd:2.4.38-2 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc00666aa08 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] nil }},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%!,(MISSING)MaxSurge:25%!,(MISSING)},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:StatusPatched,Status:True,Reason:,Message:,LastUpdateTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:0001-01-01 00:00:00 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:FoundNewReplicaSet,Message:Found new replica set 
"test-deployment-qd7xp-777898ffcc",LastUpdateTime:2023-03-07 03:41:44 +0000 UTC,LastTransitionTime:2023-03-07 03:41:44 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},} + +Mar 7 03:41:44.470: INFO: New ReplicaSet "test-deployment-qd7xp-777898ffcc" of Deployment "test-deployment-qd7xp": +&ReplicaSet{ObjectMeta:{test-deployment-qd7xp-777898ffcc deployment-6174 3ee55b75-53e9-4947-a41e-db2962e35446 65165 1 2023-03-07 03:41:42 +0000 UTC map[e2e:testing name:httpd pod-template-hash:777898ffcc] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-deployment-qd7xp bbd543cd-df8e-4fcb-9874-2505bd0598a4 0xc0058803e0 0xc0058803e1}] [] [{kube-controller-manager Update apps/v1 2023-03-07 03:41:42 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:e2e":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"bbd543cd-df8e-4fcb-9874-2505bd0598a4\"}":{}}},"f:spec":{"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:e2e":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} } {kube-controller-manager Update apps/v1 2023-03-07 03:41:44 +0000 UTC FieldsV1 {"f:status":{"f:availableReplicas":{},"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{}}} status}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{e2e: testing,name: httpd,pod-template-hash: 777898ffcc,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[e2e:testing name:httpd pod-template-hash:777898ffcc] map[] [] [] []} {[] [] [{httpd registry.k8s.io/e2e-test-images/httpd:2.4.38-2 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc005880498 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] nil }},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} +Mar 7 03:41:44.473: INFO: Pod "test-deployment-qd7xp-777898ffcc-4jtvn" is available: +&Pod{ObjectMeta:{test-deployment-qd7xp-777898ffcc-4jtvn test-deployment-qd7xp-777898ffcc- deployment-6174 3e99050d-37dd-45b7-bb2a-9f53e6639fc1 65164 0 2023-03-07 03:41:42 +0000 UTC map[e2e:testing name:httpd pod-template-hash:777898ffcc] map[cni.projectcalico.org/containerID:a7084d29965c2649d548304726a6a249cd0341940b15056fcf8bd91bec920914 cni.projectcalico.org/podIP:10.233.247.10/32 cni.projectcalico.org/podIPs:10.233.247.10/32] 
[{apps/v1 ReplicaSet test-deployment-qd7xp-777898ffcc 3ee55b75-53e9-4947-a41e-db2962e35446 0xc005880850 0xc005880851}] [] [{calico Update v1 2023-03-07 03:41:42 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:cni.projectcalico.org/containerID":{},"f:cni.projectcalico.org/podIP":{},"f:cni.projectcalico.org/podIPs":{}}}} status} {kube-controller-manager Update v1 2023-03-07 03:41:42 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:e2e":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"3ee55b75-53e9-4947-a41e-db2962e35446\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2023-03-07 03:41:44 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.233.247.10\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-qnmwq,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:registry.k8s.io/e2e-test-images/httpd:2.4.38-2,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-qnmwq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,
ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:node-2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,HostUsers:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-03-07 03:41:42 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-03-07 03:41:44 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-03-07 03:41:44 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-03-07 03:41:42 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:192.168.1.102,PodIP:10.233.247.10,StartTime:2023-03-07 03:41:42 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2023-03-07 03:41:43 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:registry.k8s.io/e2e-test-images/httpd:2.4.38-2,ImageID:registry.k8s.io/e2e-test-images/httpd@sha256:1b9d1b2f36cb2dbee1960e82a9344aeb11bd4c4c03abf5e1853e0559c23855e3,ContainerID:containerd://bf5ff657b09079a1b69e6a1e2ee735081a45a43b9037b12db0a3888abcb623b0,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.233.247.10,},},EphemeralContainerStatuses:[]ContainerStatus{},},} +[AfterEach] [sig-apps] Deployment + test/e2e/framework/framework.go:187 +Mar 7 03:41:44.473: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "deployment-6174" for this suite. 
03/07/23 03:41:44.476 +{"msg":"PASSED [sig-apps] Deployment should validate Deployment Status endpoints [Conformance]","completed":233,"skipped":4008,"failed":0} +------------------------------ +• [2.073 seconds] +[sig-apps] Deployment +test/e2e/apps/framework.go:23 + should validate Deployment Status endpoints [Conformance] + test/e2e/apps/deployment.go:479 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-apps] Deployment + test/e2e/framework/framework.go:186 + STEP: Creating a kubernetes client 03/07/23 03:41:42.407 + Mar 7 03:41:42.407: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 + STEP: Building a namespace api object, basename deployment 03/07/23 03:41:42.407 + STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 03:41:42.419 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 03:41:42.422 + [BeforeEach] [sig-apps] Deployment + test/e2e/apps/deployment.go:91 + [It] should validate Deployment Status endpoints [Conformance] + test/e2e/apps/deployment.go:479 + STEP: creating a Deployment 03/07/23 03:41:42.428 + Mar 7 03:41:42.428: INFO: Creating simple deployment test-deployment-qd7xp + Mar 7 03:41:42.438: INFO: deployment "test-deployment-qd7xp" doesn't have the required revision set + STEP: Getting /status 03/07/23 03:41:44.447 + Mar 7 03:41:44.451: INFO: Deployment test-deployment-qd7xp has Conditions: [{Available True 2023-03-07 03:41:44 +0000 UTC 2023-03-07 03:41:44 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} {Progressing True 2023-03-07 03:41:44 +0000 UTC 2023-03-07 03:41:42 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-deployment-qd7xp-777898ffcc" has successfully progressed.}] + STEP: updating Deployment Status 03/07/23 03:41:44.451 + Mar 7 03:41:44.457: INFO: updatedStatus.Conditions: []v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:time.Date(2023, time.March, 7, 3, 41, 44, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 7, 3, 41, 44, 0, time.Local), Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.March, 7, 3, 41, 44, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 7, 3, 41, 42, 0, time.Local), Reason:"NewReplicaSetAvailable", Message:"ReplicaSet \"test-deployment-qd7xp-777898ffcc\" has successfully progressed."}, v1.DeploymentCondition{Type:"StatusUpdate", Status:"True", LastUpdateTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), LastTransitionTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Reason:"E2E", Message:"Set from e2e test"}} + STEP: watching for the Deployment status to be updated 03/07/23 03:41:44.457 + Mar 7 03:41:44.458: INFO: Observed &Deployment event: ADDED + Mar 7 03:41:44.458: INFO: Observed Deployment test-deployment-qd7xp in namespace deployment-6174 with annotations: map[deployment.kubernetes.io/revision:1] & Conditions: {Progressing True 2023-03-07 03:41:42 +0000 UTC 2023-03-07 03:41:42 +0000 UTC NewReplicaSetCreated Created new replica set "test-deployment-qd7xp-777898ffcc"} + Mar 7 03:41:44.458: INFO: Observed &Deployment event: MODIFIED + Mar 7 03:41:44.459: INFO: Observed Deployment test-deployment-qd7xp in namespace deployment-6174 with annotations: map[deployment.kubernetes.io/revision:1] & Conditions: {Progressing True 2023-03-07 03:41:42 +0000 UTC 2023-03-07 03:41:42 +0000 UTC NewReplicaSetCreated Created new replica set 
"test-deployment-qd7xp-777898ffcc"} + Mar 7 03:41:44.459: INFO: Observed Deployment test-deployment-qd7xp in namespace deployment-6174 with annotations: map[deployment.kubernetes.io/revision:1] & Conditions: {Available False 2023-03-07 03:41:42 +0000 UTC 2023-03-07 03:41:42 +0000 UTC MinimumReplicasUnavailable Deployment does not have minimum availability.} + Mar 7 03:41:44.459: INFO: Observed &Deployment event: MODIFIED + Mar 7 03:41:44.459: INFO: Observed Deployment test-deployment-qd7xp in namespace deployment-6174 with annotations: map[deployment.kubernetes.io/revision:1] & Conditions: {Available False 2023-03-07 03:41:42 +0000 UTC 2023-03-07 03:41:42 +0000 UTC MinimumReplicasUnavailable Deployment does not have minimum availability.} + Mar 7 03:41:44.459: INFO: Observed Deployment test-deployment-qd7xp in namespace deployment-6174 with annotations: map[deployment.kubernetes.io/revision:1] & Conditions: {Progressing True 2023-03-07 03:41:42 +0000 UTC 2023-03-07 03:41:42 +0000 UTC ReplicaSetUpdated ReplicaSet "test-deployment-qd7xp-777898ffcc" is progressing.} + Mar 7 03:41:44.459: INFO: Observed &Deployment event: MODIFIED + Mar 7 03:41:44.459: INFO: Observed Deployment test-deployment-qd7xp in namespace deployment-6174 with annotations: map[deployment.kubernetes.io/revision:1] & Conditions: {Available True 2023-03-07 03:41:44 +0000 UTC 2023-03-07 03:41:44 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} + Mar 7 03:41:44.459: INFO: Observed Deployment test-deployment-qd7xp in namespace deployment-6174 with annotations: map[deployment.kubernetes.io/revision:1] & Conditions: {Progressing True 2023-03-07 03:41:44 +0000 UTC 2023-03-07 03:41:42 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-deployment-qd7xp-777898ffcc" has successfully progressed.} + Mar 7 03:41:44.459: INFO: Observed &Deployment event: MODIFIED + Mar 7 03:41:44.459: INFO: Observed Deployment test-deployment-qd7xp in namespace deployment-6174 with annotations: map[deployment.kubernetes.io/revision:1] & Conditions: {Available True 2023-03-07 03:41:44 +0000 UTC 2023-03-07 03:41:44 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} + Mar 7 03:41:44.459: INFO: Observed Deployment test-deployment-qd7xp in namespace deployment-6174 with annotations: map[deployment.kubernetes.io/revision:1] & Conditions: {Progressing True 2023-03-07 03:41:44 +0000 UTC 2023-03-07 03:41:42 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-deployment-qd7xp-777898ffcc" has successfully progressed.} + Mar 7 03:41:44.459: INFO: Found Deployment test-deployment-qd7xp in namespace deployment-6174 with labels: map[e2e:testing name:httpd] annotations: map[deployment.kubernetes.io/revision:1] & Conditions: {StatusUpdate True 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC E2E Set from e2e test} + Mar 7 03:41:44.459: INFO: Deployment test-deployment-qd7xp has an updated status + STEP: patching the Statefulset Status 03/07/23 03:41:44.459 + Mar 7 03:41:44.459: INFO: Patch payload: {"status":{"conditions":[{"type":"StatusPatched","status":"True"}]}} + Mar 7 03:41:44.464: INFO: Patched status conditions: []v1.DeploymentCondition{v1.DeploymentCondition{Type:"StatusPatched", Status:"True", LastUpdateTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), LastTransitionTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Reason:"", Message:""}} + STEP: watching for the Deployment status to be patched 03/07/23 03:41:44.464 + Mar 7 03:41:44.465: INFO: Observed &Deployment event: ADDED + Mar 7 
03:41:44.465: INFO: Observed deployment test-deployment-qd7xp in namespace deployment-6174 with annotations: map[deployment.kubernetes.io/revision:1] & Conditions: {Progressing True 2023-03-07 03:41:42 +0000 UTC 2023-03-07 03:41:42 +0000 UTC NewReplicaSetCreated Created new replica set "test-deployment-qd7xp-777898ffcc"} + Mar 7 03:41:44.465: INFO: Observed &Deployment event: MODIFIED + Mar 7 03:41:44.465: INFO: Observed deployment test-deployment-qd7xp in namespace deployment-6174 with annotations: map[deployment.kubernetes.io/revision:1] & Conditions: {Progressing True 2023-03-07 03:41:42 +0000 UTC 2023-03-07 03:41:42 +0000 UTC NewReplicaSetCreated Created new replica set "test-deployment-qd7xp-777898ffcc"} + Mar 7 03:41:44.465: INFO: Observed deployment test-deployment-qd7xp in namespace deployment-6174 with annotations: map[deployment.kubernetes.io/revision:1] & Conditions: {Available False 2023-03-07 03:41:42 +0000 UTC 2023-03-07 03:41:42 +0000 UTC MinimumReplicasUnavailable Deployment does not have minimum availability.} + Mar 7 03:41:44.465: INFO: Observed &Deployment event: MODIFIED + Mar 7 03:41:44.465: INFO: Observed deployment test-deployment-qd7xp in namespace deployment-6174 with annotations: map[deployment.kubernetes.io/revision:1] & Conditions: {Available False 2023-03-07 03:41:42 +0000 UTC 2023-03-07 03:41:42 +0000 UTC MinimumReplicasUnavailable Deployment does not have minimum availability.} + Mar 7 03:41:44.465: INFO: Observed deployment test-deployment-qd7xp in namespace deployment-6174 with annotations: map[deployment.kubernetes.io/revision:1] & Conditions: {Progressing True 2023-03-07 03:41:42 +0000 UTC 2023-03-07 03:41:42 +0000 UTC ReplicaSetUpdated ReplicaSet "test-deployment-qd7xp-777898ffcc" is progressing.} + Mar 7 03:41:44.465: INFO: Observed &Deployment event: MODIFIED + Mar 7 03:41:44.465: INFO: Observed deployment test-deployment-qd7xp in namespace deployment-6174 with annotations: map[deployment.kubernetes.io/revision:1] & Conditions: {Available True 2023-03-07 03:41:44 +0000 UTC 2023-03-07 03:41:44 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} + Mar 7 03:41:44.465: INFO: Observed deployment test-deployment-qd7xp in namespace deployment-6174 with annotations: map[deployment.kubernetes.io/revision:1] & Conditions: {Progressing True 2023-03-07 03:41:44 +0000 UTC 2023-03-07 03:41:42 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-deployment-qd7xp-777898ffcc" has successfully progressed.} + Mar 7 03:41:44.466: INFO: Observed &Deployment event: MODIFIED + Mar 7 03:41:44.466: INFO: Observed deployment test-deployment-qd7xp in namespace deployment-6174 with annotations: map[deployment.kubernetes.io/revision:1] & Conditions: {Available True 2023-03-07 03:41:44 +0000 UTC 2023-03-07 03:41:44 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} + Mar 7 03:41:44.466: INFO: Observed deployment test-deployment-qd7xp in namespace deployment-6174 with annotations: map[deployment.kubernetes.io/revision:1] & Conditions: {Progressing True 2023-03-07 03:41:44 +0000 UTC 2023-03-07 03:41:42 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-deployment-qd7xp-777898ffcc" has successfully progressed.} + Mar 7 03:41:44.466: INFO: Observed deployment test-deployment-qd7xp in namespace deployment-6174 with annotations: map[deployment.kubernetes.io/revision:1] & Conditions: {StatusUpdate True 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC E2E Set from e2e test} + Mar 7 03:41:44.466: INFO: Observed &Deployment event: MODIFIED + 
Mar 7 03:41:44.466: INFO: Found deployment test-deployment-qd7xp in namespace deployment-6174 with labels: map[e2e:testing name:httpd] annotations: map[deployment.kubernetes.io/revision:1] & Conditions: {StatusPatched True 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC } + Mar 7 03:41:44.466: INFO: Deployment test-deployment-qd7xp has a patched status + [AfterEach] [sig-apps] Deployment + test/e2e/apps/deployment.go:84 + Mar 7 03:41:44.468: INFO: Deployment "test-deployment-qd7xp": + &Deployment{ObjectMeta:{test-deployment-qd7xp deployment-6174 bbd543cd-df8e-4fcb-9874-2505bd0598a4 65169 1 2023-03-07 03:41:42 +0000 UTC map[e2e:testing name:httpd] map[deployment.kubernetes.io/revision:1] [] [] [{e2e.test Update apps/v1 2023-03-07 03:41:42 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:e2e":{},"f:name":{}}},"f:spec":{"f:progressDeadlineSeconds":{},"f:replicas":{},"f:revisionHistoryLimit":{},"f:selector":{},"f:strategy":{"f:rollingUpdate":{".":{},"f:maxSurge":{},"f:maxUnavailable":{}},"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:e2e":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} } {e2e.test Update apps/v1 2023-03-07 03:41:44 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"StatusPatched\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:status":{},"f:type":{}}}}} status} {kube-controller-manager Update apps/v1 2023-03-07 03:41:44 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}}},"f:status":{"f:availableReplicas":{},"f:conditions":{".":{},"k:{\"type\":\"Progressing\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{},"f:updatedReplicas":{}}} status}]},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{e2e: testing,name: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[e2e:testing name:httpd] map[] [] [] []} {[] [] [{httpd registry.k8s.io/e2e-test-images/httpd:2.4.38-2 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc00666aa08 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] nil 
}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%!,(MISSING)MaxSurge:25%!,(MISSING)},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:StatusPatched,Status:True,Reason:,Message:,LastUpdateTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:0001-01-01 00:00:00 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:FoundNewReplicaSet,Message:Found new replica set "test-deployment-qd7xp-777898ffcc",LastUpdateTime:2023-03-07 03:41:44 +0000 UTC,LastTransitionTime:2023-03-07 03:41:44 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},} + + Mar 7 03:41:44.470: INFO: New ReplicaSet "test-deployment-qd7xp-777898ffcc" of Deployment "test-deployment-qd7xp": + &ReplicaSet{ObjectMeta:{test-deployment-qd7xp-777898ffcc deployment-6174 3ee55b75-53e9-4947-a41e-db2962e35446 65165 1 2023-03-07 03:41:42 +0000 UTC map[e2e:testing name:httpd pod-template-hash:777898ffcc] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-deployment-qd7xp bbd543cd-df8e-4fcb-9874-2505bd0598a4 0xc0058803e0 0xc0058803e1}] [] [{kube-controller-manager Update apps/v1 2023-03-07 03:41:42 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:e2e":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"bbd543cd-df8e-4fcb-9874-2505bd0598a4\"}":{}}},"f:spec":{"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:e2e":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} } {kube-controller-manager Update apps/v1 2023-03-07 03:41:44 +0000 UTC FieldsV1 {"f:status":{"f:availableReplicas":{},"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{}}} status}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{e2e: testing,name: httpd,pod-template-hash: 777898ffcc,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[e2e:testing name:httpd pod-template-hash:777898ffcc] map[] [] [] []} {[] [] [{httpd registry.k8s.io/e2e-test-images/httpd:2.4.38-2 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc005880498 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] nil 
}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} + Mar 7 03:41:44.473: INFO: Pod "test-deployment-qd7xp-777898ffcc-4jtvn" is available: + &Pod{ObjectMeta:{test-deployment-qd7xp-777898ffcc-4jtvn test-deployment-qd7xp-777898ffcc- deployment-6174 3e99050d-37dd-45b7-bb2a-9f53e6639fc1 65164 0 2023-03-07 03:41:42 +0000 UTC map[e2e:testing name:httpd pod-template-hash:777898ffcc] map[cni.projectcalico.org/containerID:a7084d29965c2649d548304726a6a249cd0341940b15056fcf8bd91bec920914 cni.projectcalico.org/podIP:10.233.247.10/32 cni.projectcalico.org/podIPs:10.233.247.10/32] [{apps/v1 ReplicaSet test-deployment-qd7xp-777898ffcc 3ee55b75-53e9-4947-a41e-db2962e35446 0xc005880850 0xc005880851}] [] [{calico Update v1 2023-03-07 03:41:42 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:cni.projectcalico.org/containerID":{},"f:cni.projectcalico.org/podIP":{},"f:cni.projectcalico.org/podIPs":{}}}} status} {kube-controller-manager Update v1 2023-03-07 03:41:42 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:e2e":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"3ee55b75-53e9-4947-a41e-db2962e35446\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2023-03-07 03:41:44 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.233.247.10\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} 
status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-qnmwq,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:registry.k8s.io/e2e-test-images/httpd:2.4.38-2,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-qnmwq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:node-2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,HostUsers:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodConditio
n{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-03-07 03:41:42 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-03-07 03:41:44 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-03-07 03:41:44 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-03-07 03:41:42 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:192.168.1.102,PodIP:10.233.247.10,StartTime:2023-03-07 03:41:42 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2023-03-07 03:41:43 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:registry.k8s.io/e2e-test-images/httpd:2.4.38-2,ImageID:registry.k8s.io/e2e-test-images/httpd@sha256:1b9d1b2f36cb2dbee1960e82a9344aeb11bd4c4c03abf5e1853e0559c23855e3,ContainerID:containerd://bf5ff657b09079a1b69e6a1e2ee735081a45a43b9037b12db0a3888abcb623b0,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.233.247.10,},},EphemeralContainerStatuses:[]ContainerStatus{},},} + [AfterEach] [sig-apps] Deployment + test/e2e/framework/framework.go:187 + Mar 7 03:41:44.473: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + STEP: Destroying namespace "deployment-6174" for this suite. 03/07/23 03:41:44.476 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-node] Variable Expansion + should allow substituting values in a container's args [NodeConformance] [Conformance] + test/e2e/common/node/expansion.go:91 +[BeforeEach] [sig-node] Variable Expansion + test/e2e/framework/framework.go:186 +STEP: Creating a kubernetes client 03/07/23 03:41:44.482 +Mar 7 03:41:44.482: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 +STEP: Building a namespace api object, basename var-expansion 03/07/23 03:41:44.483 +STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 03:41:44.495 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 03:41:44.497 +[It] should allow substituting values in a container's args [NodeConformance] [Conformance] + test/e2e/common/node/expansion.go:91 +STEP: Creating a pod to test substitution in container's args 03/07/23 03:41:44.498 +Mar 7 03:41:44.504: INFO: Waiting up to 5m0s for pod "var-expansion-ab231372-0685-43c5-9fe0-8744d01ace95" in namespace "var-expansion-9242" to be "Succeeded or Failed" +Mar 7 03:41:44.506: INFO: Pod "var-expansion-ab231372-0685-43c5-9fe0-8744d01ace95": Phase="Pending", Reason="", readiness=false. Elapsed: 1.839811ms +Mar 7 03:41:46.509: INFO: Pod "var-expansion-ab231372-0685-43c5-9fe0-8744d01ace95": Phase="Pending", Reason="", readiness=false. Elapsed: 2.004766064s +Mar 7 03:41:48.509: INFO: Pod "var-expansion-ab231372-0685-43c5-9fe0-8744d01ace95": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.005027301s +STEP: Saw pod success 03/07/23 03:41:48.509 +Mar 7 03:41:48.509: INFO: Pod "var-expansion-ab231372-0685-43c5-9fe0-8744d01ace95" satisfied condition "Succeeded or Failed" +Mar 7 03:41:48.512: INFO: Trying to get logs from node node-2 pod var-expansion-ab231372-0685-43c5-9fe0-8744d01ace95 container dapi-container: +STEP: delete the pod 03/07/23 03:41:48.516 +Mar 7 03:41:48.523: INFO: Waiting for pod var-expansion-ab231372-0685-43c5-9fe0-8744d01ace95 to disappear +Mar 7 03:41:48.525: INFO: Pod var-expansion-ab231372-0685-43c5-9fe0-8744d01ace95 no longer exists +[AfterEach] [sig-node] Variable Expansion + test/e2e/framework/framework.go:187 +Mar 7 03:41:48.525: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "var-expansion-9242" for this suite. 03/07/23 03:41:48.528 +{"msg":"PASSED [sig-node] Variable Expansion should allow substituting values in a container's args [NodeConformance] [Conformance]","completed":234,"skipped":4066,"failed":0} +------------------------------ +• [4.050 seconds] +[sig-node] Variable Expansion +test/e2e/common/node/framework.go:23 + should allow substituting values in a container's args [NodeConformance] [Conformance] + test/e2e/common/node/expansion.go:91 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-node] Variable Expansion + test/e2e/framework/framework.go:186 + STEP: Creating a kubernetes client 03/07/23 03:41:44.482 + Mar 7 03:41:44.482: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 + STEP: Building a namespace api object, basename var-expansion 03/07/23 03:41:44.483 + STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 03:41:44.495 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 03:41:44.497 + [It] should allow substituting values in a container's args [NodeConformance] [Conformance] + test/e2e/common/node/expansion.go:91 + STEP: Creating a pod to test substitution in container's args 03/07/23 03:41:44.498 + Mar 7 03:41:44.504: INFO: Waiting up to 5m0s for pod "var-expansion-ab231372-0685-43c5-9fe0-8744d01ace95" in namespace "var-expansion-9242" to be "Succeeded or Failed" + Mar 7 03:41:44.506: INFO: Pod "var-expansion-ab231372-0685-43c5-9fe0-8744d01ace95": Phase="Pending", Reason="", readiness=false. Elapsed: 1.839811ms + Mar 7 03:41:46.509: INFO: Pod "var-expansion-ab231372-0685-43c5-9fe0-8744d01ace95": Phase="Pending", Reason="", readiness=false. Elapsed: 2.004766064s + Mar 7 03:41:48.509: INFO: Pod "var-expansion-ab231372-0685-43c5-9fe0-8744d01ace95": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.005027301s + STEP: Saw pod success 03/07/23 03:41:48.509 + Mar 7 03:41:48.509: INFO: Pod "var-expansion-ab231372-0685-43c5-9fe0-8744d01ace95" satisfied condition "Succeeded or Failed" + Mar 7 03:41:48.512: INFO: Trying to get logs from node node-2 pod var-expansion-ab231372-0685-43c5-9fe0-8744d01ace95 container dapi-container: + STEP: delete the pod 03/07/23 03:41:48.516 + Mar 7 03:41:48.523: INFO: Waiting for pod var-expansion-ab231372-0685-43c5-9fe0-8744d01ace95 to disappear + Mar 7 03:41:48.525: INFO: Pod var-expansion-ab231372-0685-43c5-9fe0-8744d01ace95 no longer exists + [AfterEach] [sig-node] Variable Expansion + test/e2e/framework/framework.go:187 + Mar 7 03:41:48.525: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + STEP: Destroying namespace "var-expansion-9242" for this suite. 
03/07/23 03:41:48.528 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-api-machinery] Garbage collector + should orphan pods created by rc if delete options say so [Conformance] + test/e2e/apimachinery/garbage_collector.go:370 +[BeforeEach] [sig-api-machinery] Garbage collector + test/e2e/framework/framework.go:186 +STEP: Creating a kubernetes client 03/07/23 03:41:48.537 +Mar 7 03:41:48.537: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 +STEP: Building a namespace api object, basename gc 03/07/23 03:41:48.538 +STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 03:41:48.55 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 03:41:48.553 +[It] should orphan pods created by rc if delete options say so [Conformance] + test/e2e/apimachinery/garbage_collector.go:370 +STEP: create the rc 03/07/23 03:41:48.558 +STEP: delete the rc 03/07/23 03:41:53.585 +STEP: wait for the rc to be deleted 03/07/23 03:41:53.601 +STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the pods 03/07/23 03:41:58.605 +STEP: Gathering metrics 03/07/23 03:42:28.621 +Mar 7 03:42:28.636: INFO: Waiting up to 5m0s for pod "kube-controller-manager-node-2" in namespace "kube-system" to be "running and ready" +Mar 7 03:42:28.638: INFO: Pod "kube-controller-manager-node-2": Phase="Running", Reason="", readiness=true. Elapsed: 2.469255ms +Mar 7 03:42:28.638: INFO: The phase of Pod kube-controller-manager-node-2 is Running (Ready = true) +Mar 7 03:42:28.638: INFO: Pod "kube-controller-manager-node-2" satisfied condition "running and ready" +E0307 03:42:28.661040 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:42:28.661040 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:42:29.695111 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:42:29.695111 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace 
\"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:42:30.729720 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:42:30.729720 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:42:31.762576 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:42:31.762576 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:42:32.796483 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:42:32.796483 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:42:33.820436 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace 
\"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:42:33.820436 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:42:38.957796 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:42:38.957796 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:42:40.007962 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:42:40.007962 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:42:41.032776 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:42:41.032776 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace 
\"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:42:42.059259 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:42:42.059259 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:42:43.082393 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:42:43.082393 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:42:44.102834 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:42:44.102834 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:42:46.146604 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace 
\"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:42:46.146604 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:42:49.213612 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:42:49.213612 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:42:50.238094 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:42:50.238094 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:42:50.978733 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:42:50.978733 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace 
\"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:42:52.000567 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:42:52.000567 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:42:53.028836 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:42:53.028836 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:42:54.051511 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:42:54.051511 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:42:55.072468 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace 
\"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:42:55.072468 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:42:56.102151 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:42:56.102151 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:42:57.122115 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:42:57.122115 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:42:58.144555 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:42:58.144555 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace 
\"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:42:59.169532 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:42:59.169532 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:43:01.980791 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:43:01.980791 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:43:03.002546 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:43:03.002546 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:43:04.036672 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace 
\"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:43:04.036672 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:43:05.058482 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:43:05.058482 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:43:07.107205 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:43:07.107205 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:43:08.130102 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:43:08.130102 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace 
\"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:43:09.156027 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:43:09.156027 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:43:11.202115 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:43:11.202115 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:43:14.011296 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:43:14.011296 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:43:18.099724 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace 
\"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:43:18.099724 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:43:19.130691 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:43:19.130691 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:43:20.152860 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:43:20.152860 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:43:21.176110 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:43:21.176110 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace 
\"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:43:22.199732 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:43:22.199732 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:43:23.223424 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:43:23.223424 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:43:23.980062 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:43:23.980062 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:43:25.000523 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace 
\"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:43:25.000523 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:43:26.021651 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:43:26.021651 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:43:28.074923 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:43:28.074923 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:43:29.098932 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:43:29.098932 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace 
\"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:43:30.120291 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:43:30.120291 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:43:32.164823 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:43:32.164823 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:43:34.209248 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:43:34.209248 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:43:34.983221 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace 
\"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:43:34.983221 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:43:39.074472 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:43:39.074472 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:43:40.096021 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:43:40.096021 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:43:41.122873 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:43:41.122873 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace 
\"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:43:42.146230 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:43:42.146230 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:43:43.167935 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:43:43.167935 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:43:45.218365 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:43:45.218365 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:43:46.260563 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace 
\"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:43:46.260563 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:43:48.308543 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:43:48.308543 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:43:49.357738 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:43:49.357738 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:43:50.388872 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:43:50.388872 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace 
\"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:43:51.415059 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:43:51.415059 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:43:52.444049 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:43:52.444049 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:43:53.465794 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:43:53.465794 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:43:55.519026 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace 
\"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:43:55.519026 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +Mar 7 03:43:55.519: INFO: MetricsGrabber failed grab metrics. Skipping metrics gathering. +Mar 7 03:43:55.519: INFO: Deleting pod "simpletest.rc-26tlv" in namespace "gc-3397" +Mar 7 03:43:55.533: INFO: Deleting pod "simpletest.rc-27lql" in namespace "gc-3397" +Mar 7 03:43:55.557: INFO: Deleting pod "simpletest.rc-47qgz" in namespace "gc-3397" +Mar 7 03:43:55.589: INFO: Deleting pod "simpletest.rc-4c9vs" in namespace "gc-3397" +Mar 7 03:43:55.615: INFO: Deleting pod "simpletest.rc-4sfhz" in namespace "gc-3397" +Mar 7 03:43:55.652: INFO: Deleting pod "simpletest.rc-4vkjw" in namespace "gc-3397" +Mar 7 03:43:55.678: INFO: Deleting pod "simpletest.rc-4wncp" in namespace "gc-3397" +Mar 7 03:43:55.724: INFO: Deleting pod "simpletest.rc-5dfbl" in namespace "gc-3397" +Mar 7 03:43:55.745: INFO: Deleting pod "simpletest.rc-62cfh" in namespace "gc-3397" +Mar 7 03:43:55.783: INFO: Deleting pod "simpletest.rc-65j8f" in namespace "gc-3397" +Mar 7 03:43:55.803: INFO: Deleting pod "simpletest.rc-6lvl5" in namespace "gc-3397" +Mar 7 03:43:55.845: INFO: Deleting pod "simpletest.rc-6v8sk" in namespace "gc-3397" +Mar 7 03:43:55.900: INFO: Deleting pod "simpletest.rc-75mf6" in namespace "gc-3397" +Mar 7 03:43:55.924: INFO: Deleting pod "simpletest.rc-78jkl" in namespace "gc-3397" +Mar 7 03:43:55.955: INFO: Deleting pod "simpletest.rc-7h7w7" in namespace "gc-3397" +Mar 7 03:43:55.990: INFO: Deleting pod "simpletest.rc-7q7d9" in namespace "gc-3397" +Mar 7 03:43:56.029: INFO: Deleting pod "simpletest.rc-85827" in namespace "gc-3397" +Mar 7 03:43:56.059: INFO: Deleting pod "simpletest.rc-8d9lr" in namespace "gc-3397" +Mar 7 03:43:56.096: INFO: Deleting pod "simpletest.rc-8ktxx" in namespace "gc-3397" +Mar 7 03:43:56.152: INFO: Deleting pod "simpletest.rc-8nkgw" in namespace "gc-3397" +Mar 7 03:43:56.181: INFO: Deleting pod "simpletest.rc-8z7gc" in namespace "gc-3397" +Mar 7 03:43:56.212: INFO: Deleting pod "simpletest.rc-95vrt" in namespace "gc-3397" +Mar 7 03:43:56.264: INFO: Deleting pod "simpletest.rc-9972z" in namespace "gc-3397" +Mar 7 03:43:56.303: INFO: Deleting pod "simpletest.rc-9jgcw" in namespace "gc-3397" +Mar 7 03:43:56.344: INFO: Deleting pod "simpletest.rc-9jklx" in namespace "gc-3397" +Mar 7 03:43:56.371: INFO: Deleting pod "simpletest.rc-9vrj7" in namespace "gc-3397" +Mar 7 03:43:56.388: INFO: Deleting pod "simpletest.rc-9wq66" in namespace "gc-3397" +Mar 7 03:43:56.407: INFO: Deleting pod "simpletest.rc-b6s2x" in namespace "gc-3397" +Mar 7 03:43:56.433: INFO: Deleting pod "simpletest.rc-b8fx6" in namespace "gc-3397" +Mar 7 03:43:56.509: INFO: Deleting pod "simpletest.rc-b9bdw" in namespace "gc-3397" +Mar 7 03:43:56.572: INFO: Deleting pod "simpletest.rc-bvgg7" in namespace "gc-3397" +Mar 7 03:43:56.596: INFO: Deleting pod "simpletest.rc-bzgfv" in namespace "gc-3397" +Mar 7 03:43:56.630: INFO: Deleting pod 
"simpletest.rc-c5dhv" in namespace "gc-3397" +Mar 7 03:43:56.700: INFO: Deleting pod "simpletest.rc-c6rlg" in namespace "gc-3397" +Mar 7 03:43:56.828: INFO: Deleting pod "simpletest.rc-c8mcl" in namespace "gc-3397" +Mar 7 03:43:56.866: INFO: Deleting pod "simpletest.rc-cb55d" in namespace "gc-3397" +Mar 7 03:43:56.882: INFO: Deleting pod "simpletest.rc-cbv6x" in namespace "gc-3397" +Mar 7 03:43:56.908: INFO: Deleting pod "simpletest.rc-d4ct5" in namespace "gc-3397" +Mar 7 03:43:56.943: INFO: Deleting pod "simpletest.rc-dftqq" in namespace "gc-3397" +Mar 7 03:43:57.010: INFO: Deleting pod "simpletest.rc-dxfsw" in namespace "gc-3397" +Mar 7 03:43:57.041: INFO: Deleting pod "simpletest.rc-fk6c6" in namespace "gc-3397" +Mar 7 03:43:57.072: INFO: Deleting pod "simpletest.rc-gcgrk" in namespace "gc-3397" +Mar 7 03:43:57.115: INFO: Deleting pod "simpletest.rc-gglv4" in namespace "gc-3397" +Mar 7 03:43:57.142: INFO: Deleting pod "simpletest.rc-grqzm" in namespace "gc-3397" +Mar 7 03:43:57.180: INFO: Deleting pod "simpletest.rc-h5pb7" in namespace "gc-3397" +Mar 7 03:43:57.234: INFO: Deleting pod "simpletest.rc-hthfs" in namespace "gc-3397" +Mar 7 03:43:57.284: INFO: Deleting pod "simpletest.rc-j4xms" in namespace "gc-3397" +Mar 7 03:43:57.350: INFO: Deleting pod "simpletest.rc-j9zwl" in namespace "gc-3397" +Mar 7 03:43:57.389: INFO: Deleting pod "simpletest.rc-jggmm" in namespace "gc-3397" +Mar 7 03:43:57.426: INFO: Deleting pod "simpletest.rc-jrmfc" in namespace "gc-3397" +Mar 7 03:43:57.465: INFO: Deleting pod "simpletest.rc-jzbll" in namespace "gc-3397" +Mar 7 03:43:57.502: INFO: Deleting pod "simpletest.rc-k2hx7" in namespace "gc-3397" +Mar 7 03:43:57.577: INFO: Deleting pod "simpletest.rc-k569z" in namespace "gc-3397" +Mar 7 03:43:57.646: INFO: Deleting pod "simpletest.rc-k9pcf" in namespace "gc-3397" +Mar 7 03:43:57.731: INFO: Deleting pod "simpletest.rc-kg9z9" in namespace "gc-3397" +Mar 7 03:43:57.763: INFO: Deleting pod "simpletest.rc-ktjm2" in namespace "gc-3397" +Mar 7 03:43:57.783: INFO: Deleting pod "simpletest.rc-l4cd2" in namespace "gc-3397" +Mar 7 03:43:57.817: INFO: Deleting pod "simpletest.rc-l55n2" in namespace "gc-3397" +Mar 7 03:43:57.854: INFO: Deleting pod "simpletest.rc-ljqm6" in namespace "gc-3397" +Mar 7 03:43:57.886: INFO: Deleting pod "simpletest.rc-lx8gn" in namespace "gc-3397" +Mar 7 03:43:57.922: INFO: Deleting pod "simpletest.rc-m5gvn" in namespace "gc-3397" +Mar 7 03:43:57.960: INFO: Deleting pod "simpletest.rc-m8rtd" in namespace "gc-3397" +Mar 7 03:43:57.994: INFO: Deleting pod "simpletest.rc-mb67f" in namespace "gc-3397" +Mar 7 03:43:58.030: INFO: Deleting pod "simpletest.rc-mhhtx" in namespace "gc-3397" +Mar 7 03:43:58.052: INFO: Deleting pod "simpletest.rc-mlvqv" in namespace "gc-3397" +Mar 7 03:43:58.073: INFO: Deleting pod "simpletest.rc-mzcf9" in namespace "gc-3397" +Mar 7 03:43:58.104: INFO: Deleting pod "simpletest.rc-nbql4" in namespace "gc-3397" +Mar 7 03:43:58.135: INFO: Deleting pod "simpletest.rc-nlwfh" in namespace "gc-3397" +Mar 7 03:43:58.181: INFO: Deleting pod "simpletest.rc-nmftv" in namespace "gc-3397" +Mar 7 03:43:58.224: INFO: Deleting pod "simpletest.rc-nnm25" in namespace "gc-3397" +Mar 7 03:43:58.272: INFO: Deleting pod "simpletest.rc-nr86b" in namespace "gc-3397" +Mar 7 03:43:58.310: INFO: Deleting pod "simpletest.rc-p6gb4" in namespace "gc-3397" +Mar 7 03:43:58.379: INFO: Deleting pod "simpletest.rc-pk5tt" in namespace "gc-3397" +Mar 7 03:43:58.423: INFO: Deleting pod "simpletest.rc-qlslg" in namespace "gc-3397" +Mar 7 03:43:58.450: 
INFO: Deleting pod "simpletest.rc-qmjcz" in namespace "gc-3397" +Mar 7 03:43:58.482: INFO: Deleting pod "simpletest.rc-qnvnx" in namespace "gc-3397" +Mar 7 03:43:58.510: INFO: Deleting pod "simpletest.rc-rbv7w" in namespace "gc-3397" +Mar 7 03:43:58.548: INFO: Deleting pod "simpletest.rc-rdvtn" in namespace "gc-3397" +Mar 7 03:43:58.570: INFO: Deleting pod "simpletest.rc-rwxjk" in namespace "gc-3397" +Mar 7 03:43:58.587: INFO: Deleting pod "simpletest.rc-s2pw7" in namespace "gc-3397" +Mar 7 03:43:58.612: INFO: Deleting pod "simpletest.rc-twpgs" in namespace "gc-3397" +Mar 7 03:43:58.646: INFO: Deleting pod "simpletest.rc-v9t5q" in namespace "gc-3397" +Mar 7 03:43:58.692: INFO: Deleting pod "simpletest.rc-vhgnp" in namespace "gc-3397" +Mar 7 03:43:58.716: INFO: Deleting pod "simpletest.rc-vhvqq" in namespace "gc-3397" +Mar 7 03:43:58.761: INFO: Deleting pod "simpletest.rc-vkqxk" in namespace "gc-3397" +Mar 7 03:43:58.819: INFO: Deleting pod "simpletest.rc-vntdm" in namespace "gc-3397" +Mar 7 03:43:58.861: INFO: Deleting pod "simpletest.rc-x2nb5" in namespace "gc-3397" +Mar 7 03:43:58.886: INFO: Deleting pod "simpletest.rc-x9ndm" in namespace "gc-3397" +Mar 7 03:43:58.936: INFO: Deleting pod "simpletest.rc-xhq9j" in namespace "gc-3397" +Mar 7 03:43:59.003: INFO: Deleting pod "simpletest.rc-xjffw" in namespace "gc-3397" +Mar 7 03:43:59.045: INFO: Deleting pod "simpletest.rc-xkvgn" in namespace "gc-3397" +Mar 7 03:43:59.081: INFO: Deleting pod "simpletest.rc-xqv58" in namespace "gc-3397" +Mar 7 03:43:59.104: INFO: Deleting pod "simpletest.rc-xw7t9" in namespace "gc-3397" +Mar 7 03:43:59.163: INFO: Deleting pod "simpletest.rc-xx7h7" in namespace "gc-3397" +Mar 7 03:43:59.192: INFO: Deleting pod "simpletest.rc-z4ztb" in namespace "gc-3397" +Mar 7 03:43:59.251: INFO: Deleting pod "simpletest.rc-z96tk" in namespace "gc-3397" +Mar 7 03:43:59.289: INFO: Deleting pod "simpletest.rc-zdw7g" in namespace "gc-3397" +Mar 7 03:43:59.336: INFO: Deleting pod "simpletest.rc-zfhsk" in namespace "gc-3397" +Mar 7 03:43:59.399: INFO: Deleting pod "simpletest.rc-zjkm7" in namespace "gc-3397" +Mar 7 03:43:59.442: INFO: Deleting pod "simpletest.rc-zqfrr" in namespace "gc-3397" +[AfterEach] [sig-api-machinery] Garbage collector + test/e2e/framework/framework.go:187 +Mar 7 03:43:59.486: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "gc-3397" for this suite. 
03/07/23 03:43:59.503 +{"msg":"PASSED [sig-api-machinery] Garbage collector should orphan pods created by rc if delete options say so [Conformance]","completed":235,"skipped":4085,"failed":0} +------------------------------ +• [SLOW TEST] [130.981 seconds] +[sig-api-machinery] Garbage collector +test/e2e/apimachinery/framework.go:23 + should orphan pods created by rc if delete options say so [Conformance] + test/e2e/apimachinery/garbage_collector.go:370 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-api-machinery] Garbage collector + test/e2e/framework/framework.go:186 + STEP: Creating a kubernetes client 03/07/23 03:41:48.537 + Mar 7 03:41:48.537: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 + STEP: Building a namespace api object, basename gc 03/07/23 03:41:48.538 + STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 03:41:48.55 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 03:41:48.553 + [It] should orphan pods created by rc if delete options say so [Conformance] + test/e2e/apimachinery/garbage_collector.go:370 + STEP: create the rc 03/07/23 03:41:48.558 + STEP: delete the rc 03/07/23 03:41:53.585 + STEP: wait for the rc to be deleted 03/07/23 03:41:53.601 + STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the pods 03/07/23 03:41:58.605 + STEP: Gathering metrics 03/07/23 03:42:28.621 + Mar 7 03:42:28.636: INFO: Waiting up to 5m0s for pod "kube-controller-manager-node-2" in namespace "kube-system" to be "running and ready" + Mar 7 03:42:28.638: INFO: Pod "kube-controller-manager-node-2": Phase="Running", Reason="", readiness=true. Elapsed: 2.469255ms + Mar 7 03:42:28.638: INFO: The phase of Pod kube-controller-manager-node-2 is Running (Ready = true) + Mar 7 03:42:28.638: INFO: Pod "kube-controller-manager-node-2" satisfied condition "running and ready" + E0307 03:42:28.661040 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " + E0307 03:42:29.695111 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " + E0307 03:42:30.729720 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " + E0307 03:42:31.762576 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 
3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " + E0307 03:42:32.796483 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " + E0307 03:42:33.820436 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " + E0307 03:42:38.957796 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " + E0307 03:42:40.007962 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " + E0307 03:42:41.032776 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " + E0307 03:42:42.059259 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " + E0307 03:42:43.082393 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 
3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " + E0307 03:42:44.102834 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " + E0307 03:42:46.146604 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " + E0307 03:42:49.213612 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " + E0307 03:42:50.238094 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " + E0307 03:42:50.978733 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " + E0307 03:42:52.000567 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " + E0307 03:42:53.028836 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 
3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " + E0307 03:42:54.051511 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " + E0307 03:42:55.072468 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " + E0307 03:42:56.102151 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " + E0307 03:42:57.122115 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " + E0307 03:42:58.144555 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " + E0307 03:42:59.169532 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " + E0307 03:43:01.980791 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 
3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " + E0307 03:43:03.002546 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " + E0307 03:43:04.036672 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " + E0307 03:43:05.058482 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " + E0307 03:43:07.107205 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " + E0307 03:43:08.130102 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " + E0307 03:43:09.156027 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " + E0307 03:43:11.202115 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 
3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " + E0307 03:43:14.011296 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " + E0307 03:43:18.099724 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " + E0307 03:43:19.130691 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " + E0307 03:43:20.152860 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " + E0307 03:43:21.176110 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " + E0307 03:43:22.199732 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " + E0307 03:43:23.223424 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 
3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " + E0307 03:43:23.980062 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " + E0307 03:43:25.000523 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " + E0307 03:43:26.021651 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " + E0307 03:43:28.074923 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " + E0307 03:43:29.098932 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " + E0307 03:43:30.120291 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " + E0307 03:43:32.164823 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 
3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " + E0307 03:43:34.209248 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " + E0307 03:43:34.983221 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " + E0307 03:43:39.074472 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " + E0307 03:43:40.096021 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " + E0307 03:43:41.122873 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " + E0307 03:43:42.146230 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " + E0307 03:43:43.167935 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 
3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " + E0307 03:43:45.218365 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " + E0307 03:43:46.260563 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " + E0307 03:43:48.308543 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " + E0307 03:43:49.357738 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " + E0307 03:43:50.388872 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " + E0307 03:43:51.415059 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " + E0307 03:43:52.444049 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 
3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " + E0307 03:43:53.465794 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " + E0307 03:43:55.519026 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " + Mar 7 03:43:55.519: INFO: MetricsGrabber failed grab metrics. Skipping metrics gathering. + Mar 7 03:43:55.519: INFO: Deleting pod "simpletest.rc-26tlv" in namespace "gc-3397" + Mar 7 03:43:55.533: INFO: Deleting pod "simpletest.rc-27lql" in namespace "gc-3397" + Mar 7 03:43:55.557: INFO: Deleting pod "simpletest.rc-47qgz" in namespace "gc-3397" + Mar 7 03:43:55.589: INFO: Deleting pod "simpletest.rc-4c9vs" in namespace "gc-3397" + Mar 7 03:43:55.615: INFO: Deleting pod "simpletest.rc-4sfhz" in namespace "gc-3397" + Mar 7 03:43:55.652: INFO: Deleting pod "simpletest.rc-4vkjw" in namespace "gc-3397" + Mar 7 03:43:55.678: INFO: Deleting pod "simpletest.rc-4wncp" in namespace "gc-3397" + Mar 7 03:43:55.724: INFO: Deleting pod "simpletest.rc-5dfbl" in namespace "gc-3397" + Mar 7 03:43:55.745: INFO: Deleting pod "simpletest.rc-62cfh" in namespace "gc-3397" + Mar 7 03:43:55.783: INFO: Deleting pod "simpletest.rc-65j8f" in namespace "gc-3397" + Mar 7 03:43:55.803: INFO: Deleting pod "simpletest.rc-6lvl5" in namespace "gc-3397" + Mar 7 03:43:55.845: INFO: Deleting pod "simpletest.rc-6v8sk" in namespace "gc-3397" + Mar 7 03:43:55.900: INFO: Deleting pod "simpletest.rc-75mf6" in namespace "gc-3397" + Mar 7 03:43:55.924: INFO: Deleting pod "simpletest.rc-78jkl" in namespace "gc-3397" + Mar 7 03:43:55.955: INFO: Deleting pod "simpletest.rc-7h7w7" in namespace "gc-3397" + Mar 7 03:43:55.990: INFO: Deleting pod "simpletest.rc-7q7d9" in namespace "gc-3397" + Mar 7 03:43:56.029: INFO: Deleting pod "simpletest.rc-85827" in namespace "gc-3397" + Mar 7 03:43:56.059: INFO: Deleting pod "simpletest.rc-8d9lr" in namespace "gc-3397" + Mar 7 03:43:56.096: INFO: Deleting pod "simpletest.rc-8ktxx" in namespace "gc-3397" + Mar 7 03:43:56.152: INFO: Deleting pod "simpletest.rc-8nkgw" in namespace "gc-3397" + Mar 7 03:43:56.181: INFO: Deleting pod "simpletest.rc-8z7gc" in namespace "gc-3397" + Mar 7 03:43:56.212: INFO: Deleting pod "simpletest.rc-95vrt" in namespace "gc-3397" + Mar 7 03:43:56.264: INFO: Deleting pod "simpletest.rc-9972z" in namespace "gc-3397" + Mar 7 03:43:56.303: INFO: Deleting pod "simpletest.rc-9jgcw" in namespace "gc-3397" + Mar 7 
03:43:56.344: INFO: Deleting pod "simpletest.rc-9jklx" in namespace "gc-3397" + Mar 7 03:43:56.371: INFO: Deleting pod "simpletest.rc-9vrj7" in namespace "gc-3397" + Mar 7 03:43:56.388: INFO: Deleting pod "simpletest.rc-9wq66" in namespace "gc-3397" + Mar 7 03:43:56.407: INFO: Deleting pod "simpletest.rc-b6s2x" in namespace "gc-3397" + Mar 7 03:43:56.433: INFO: Deleting pod "simpletest.rc-b8fx6" in namespace "gc-3397" + Mar 7 03:43:56.509: INFO: Deleting pod "simpletest.rc-b9bdw" in namespace "gc-3397" + Mar 7 03:43:56.572: INFO: Deleting pod "simpletest.rc-bvgg7" in namespace "gc-3397" + Mar 7 03:43:56.596: INFO: Deleting pod "simpletest.rc-bzgfv" in namespace "gc-3397" + Mar 7 03:43:56.630: INFO: Deleting pod "simpletest.rc-c5dhv" in namespace "gc-3397" + Mar 7 03:43:56.700: INFO: Deleting pod "simpletest.rc-c6rlg" in namespace "gc-3397" + Mar 7 03:43:56.828: INFO: Deleting pod "simpletest.rc-c8mcl" in namespace "gc-3397" + Mar 7 03:43:56.866: INFO: Deleting pod "simpletest.rc-cb55d" in namespace "gc-3397" + Mar 7 03:43:56.882: INFO: Deleting pod "simpletest.rc-cbv6x" in namespace "gc-3397" + Mar 7 03:43:56.908: INFO: Deleting pod "simpletest.rc-d4ct5" in namespace "gc-3397" + Mar 7 03:43:56.943: INFO: Deleting pod "simpletest.rc-dftqq" in namespace "gc-3397" + Mar 7 03:43:57.010: INFO: Deleting pod "simpletest.rc-dxfsw" in namespace "gc-3397" + Mar 7 03:43:57.041: INFO: Deleting pod "simpletest.rc-fk6c6" in namespace "gc-3397" + Mar 7 03:43:57.072: INFO: Deleting pod "simpletest.rc-gcgrk" in namespace "gc-3397" + Mar 7 03:43:57.115: INFO: Deleting pod "simpletest.rc-gglv4" in namespace "gc-3397" + Mar 7 03:43:57.142: INFO: Deleting pod "simpletest.rc-grqzm" in namespace "gc-3397" + Mar 7 03:43:57.180: INFO: Deleting pod "simpletest.rc-h5pb7" in namespace "gc-3397" + Mar 7 03:43:57.234: INFO: Deleting pod "simpletest.rc-hthfs" in namespace "gc-3397" + Mar 7 03:43:57.284: INFO: Deleting pod "simpletest.rc-j4xms" in namespace "gc-3397" + Mar 7 03:43:57.350: INFO: Deleting pod "simpletest.rc-j9zwl" in namespace "gc-3397" + Mar 7 03:43:57.389: INFO: Deleting pod "simpletest.rc-jggmm" in namespace "gc-3397" + Mar 7 03:43:57.426: INFO: Deleting pod "simpletest.rc-jrmfc" in namespace "gc-3397" + Mar 7 03:43:57.465: INFO: Deleting pod "simpletest.rc-jzbll" in namespace "gc-3397" + Mar 7 03:43:57.502: INFO: Deleting pod "simpletest.rc-k2hx7" in namespace "gc-3397" + Mar 7 03:43:57.577: INFO: Deleting pod "simpletest.rc-k569z" in namespace "gc-3397" + Mar 7 03:43:57.646: INFO: Deleting pod "simpletest.rc-k9pcf" in namespace "gc-3397" + Mar 7 03:43:57.731: INFO: Deleting pod "simpletest.rc-kg9z9" in namespace "gc-3397" + Mar 7 03:43:57.763: INFO: Deleting pod "simpletest.rc-ktjm2" in namespace "gc-3397" + Mar 7 03:43:57.783: INFO: Deleting pod "simpletest.rc-l4cd2" in namespace "gc-3397" + Mar 7 03:43:57.817: INFO: Deleting pod "simpletest.rc-l55n2" in namespace "gc-3397" + Mar 7 03:43:57.854: INFO: Deleting pod "simpletest.rc-ljqm6" in namespace "gc-3397" + Mar 7 03:43:57.886: INFO: Deleting pod "simpletest.rc-lx8gn" in namespace "gc-3397" + Mar 7 03:43:57.922: INFO: Deleting pod "simpletest.rc-m5gvn" in namespace "gc-3397" + Mar 7 03:43:57.960: INFO: Deleting pod "simpletest.rc-m8rtd" in namespace "gc-3397" + Mar 7 03:43:57.994: INFO: Deleting pod "simpletest.rc-mb67f" in namespace "gc-3397" + Mar 7 03:43:58.030: INFO: Deleting pod "simpletest.rc-mhhtx" in namespace "gc-3397" + Mar 7 03:43:58.052: INFO: Deleting pod "simpletest.rc-mlvqv" in namespace "gc-3397" + Mar 7 03:43:58.073: INFO: Deleting 
pod "simpletest.rc-mzcf9" in namespace "gc-3397" + Mar 7 03:43:58.104: INFO: Deleting pod "simpletest.rc-nbql4" in namespace "gc-3397" + Mar 7 03:43:58.135: INFO: Deleting pod "simpletest.rc-nlwfh" in namespace "gc-3397" + Mar 7 03:43:58.181: INFO: Deleting pod "simpletest.rc-nmftv" in namespace "gc-3397" + Mar 7 03:43:58.224: INFO: Deleting pod "simpletest.rc-nnm25" in namespace "gc-3397" + Mar 7 03:43:58.272: INFO: Deleting pod "simpletest.rc-nr86b" in namespace "gc-3397" + Mar 7 03:43:58.310: INFO: Deleting pod "simpletest.rc-p6gb4" in namespace "gc-3397" + Mar 7 03:43:58.379: INFO: Deleting pod "simpletest.rc-pk5tt" in namespace "gc-3397" + Mar 7 03:43:58.423: INFO: Deleting pod "simpletest.rc-qlslg" in namespace "gc-3397" + Mar 7 03:43:58.450: INFO: Deleting pod "simpletest.rc-qmjcz" in namespace "gc-3397" + Mar 7 03:43:58.482: INFO: Deleting pod "simpletest.rc-qnvnx" in namespace "gc-3397" + Mar 7 03:43:58.510: INFO: Deleting pod "simpletest.rc-rbv7w" in namespace "gc-3397" + Mar 7 03:43:58.548: INFO: Deleting pod "simpletest.rc-rdvtn" in namespace "gc-3397" + Mar 7 03:43:58.570: INFO: Deleting pod "simpletest.rc-rwxjk" in namespace "gc-3397" + Mar 7 03:43:58.587: INFO: Deleting pod "simpletest.rc-s2pw7" in namespace "gc-3397" + Mar 7 03:43:58.612: INFO: Deleting pod "simpletest.rc-twpgs" in namespace "gc-3397" + Mar 7 03:43:58.646: INFO: Deleting pod "simpletest.rc-v9t5q" in namespace "gc-3397" + Mar 7 03:43:58.692: INFO: Deleting pod "simpletest.rc-vhgnp" in namespace "gc-3397" + Mar 7 03:43:58.716: INFO: Deleting pod "simpletest.rc-vhvqq" in namespace "gc-3397" + Mar 7 03:43:58.761: INFO: Deleting pod "simpletest.rc-vkqxk" in namespace "gc-3397" + Mar 7 03:43:58.819: INFO: Deleting pod "simpletest.rc-vntdm" in namespace "gc-3397" + Mar 7 03:43:58.861: INFO: Deleting pod "simpletest.rc-x2nb5" in namespace "gc-3397" + Mar 7 03:43:58.886: INFO: Deleting pod "simpletest.rc-x9ndm" in namespace "gc-3397" + Mar 7 03:43:58.936: INFO: Deleting pod "simpletest.rc-xhq9j" in namespace "gc-3397" + Mar 7 03:43:59.003: INFO: Deleting pod "simpletest.rc-xjffw" in namespace "gc-3397" + Mar 7 03:43:59.045: INFO: Deleting pod "simpletest.rc-xkvgn" in namespace "gc-3397" + Mar 7 03:43:59.081: INFO: Deleting pod "simpletest.rc-xqv58" in namespace "gc-3397" + Mar 7 03:43:59.104: INFO: Deleting pod "simpletest.rc-xw7t9" in namespace "gc-3397" + Mar 7 03:43:59.163: INFO: Deleting pod "simpletest.rc-xx7h7" in namespace "gc-3397" + Mar 7 03:43:59.192: INFO: Deleting pod "simpletest.rc-z4ztb" in namespace "gc-3397" + Mar 7 03:43:59.251: INFO: Deleting pod "simpletest.rc-z96tk" in namespace "gc-3397" + Mar 7 03:43:59.289: INFO: Deleting pod "simpletest.rc-zdw7g" in namespace "gc-3397" + Mar 7 03:43:59.336: INFO: Deleting pod "simpletest.rc-zfhsk" in namespace "gc-3397" + Mar 7 03:43:59.399: INFO: Deleting pod "simpletest.rc-zjkm7" in namespace "gc-3397" + Mar 7 03:43:59.442: INFO: Deleting pod "simpletest.rc-zqfrr" in namespace "gc-3397" + [AfterEach] [sig-api-machinery] Garbage collector + test/e2e/framework/framework.go:187 + Mar 7 03:43:59.486: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + STEP: Destroying namespace "gc-3397" for this suite. 
03/07/23 03:43:59.503 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-storage] Downward API volume + should provide container's cpu limit [NodeConformance] [Conformance] + test/e2e/common/storage/downwardapi_volume.go:192 +[BeforeEach] [sig-storage] Downward API volume + test/e2e/framework/framework.go:186 +STEP: Creating a kubernetes client 03/07/23 03:43:59.519 +Mar 7 03:43:59.521: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 +STEP: Building a namespace api object, basename downward-api 03/07/23 03:43:59.522 +STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 03:43:59.562 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 03:43:59.565 +[BeforeEach] [sig-storage] Downward API volume + test/e2e/common/storage/downwardapi_volume.go:43 +[It] should provide container's cpu limit [NodeConformance] [Conformance] + test/e2e/common/storage/downwardapi_volume.go:192 +STEP: Creating a pod to test downward API volume plugin 03/07/23 03:43:59.576 +Mar 7 03:43:59.595: INFO: Waiting up to 5m0s for pod "downwardapi-volume-aee14f92-4c78-4029-a4f3-220884a3cf4d" in namespace "downward-api-7086" to be "Succeeded or Failed" +Mar 7 03:43:59.601: INFO: Pod "downwardapi-volume-aee14f92-4c78-4029-a4f3-220884a3cf4d": Phase="Pending", Reason="", readiness=false. Elapsed: 6.077776ms +Mar 7 03:44:01.609: INFO: Pod "downwardapi-volume-aee14f92-4c78-4029-a4f3-220884a3cf4d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014092421s +Mar 7 03:44:03.607: INFO: Pod "downwardapi-volume-aee14f92-4c78-4029-a4f3-220884a3cf4d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.012328283s +Mar 7 03:44:05.605: INFO: Pod "downwardapi-volume-aee14f92-4c78-4029-a4f3-220884a3cf4d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.010358293s +STEP: Saw pod success 03/07/23 03:44:05.605 +Mar 7 03:44:05.605: INFO: Pod "downwardapi-volume-aee14f92-4c78-4029-a4f3-220884a3cf4d" satisfied condition "Succeeded or Failed" +Mar 7 03:44:05.608: INFO: Trying to get logs from node node-2 pod downwardapi-volume-aee14f92-4c78-4029-a4f3-220884a3cf4d container client-container: +STEP: delete the pod 03/07/23 03:44:05.618 +Mar 7 03:44:05.629: INFO: Waiting for pod downwardapi-volume-aee14f92-4c78-4029-a4f3-220884a3cf4d to disappear +Mar 7 03:44:05.631: INFO: Pod downwardapi-volume-aee14f92-4c78-4029-a4f3-220884a3cf4d no longer exists +[AfterEach] [sig-storage] Downward API volume + test/e2e/framework/framework.go:187 +Mar 7 03:44:05.631: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "downward-api-7086" for this suite. 
03/07/23 03:44:05.634 +{"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu limit [NodeConformance] [Conformance]","completed":236,"skipped":4109,"failed":0} +------------------------------ +• [SLOW TEST] [6.119 seconds] +[sig-storage] Downward API volume +test/e2e/common/storage/framework.go:23 + should provide container's cpu limit [NodeConformance] [Conformance] + test/e2e/common/storage/downwardapi_volume.go:192 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-storage] Downward API volume + test/e2e/framework/framework.go:186 + STEP: Creating a kubernetes client 03/07/23 03:43:59.519 + Mar 7 03:43:59.521: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 + STEP: Building a namespace api object, basename downward-api 03/07/23 03:43:59.522 + STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 03:43:59.562 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 03:43:59.565 + [BeforeEach] [sig-storage] Downward API volume + test/e2e/common/storage/downwardapi_volume.go:43 + [It] should provide container's cpu limit [NodeConformance] [Conformance] + test/e2e/common/storage/downwardapi_volume.go:192 + STEP: Creating a pod to test downward API volume plugin 03/07/23 03:43:59.576 + Mar 7 03:43:59.595: INFO: Waiting up to 5m0s for pod "downwardapi-volume-aee14f92-4c78-4029-a4f3-220884a3cf4d" in namespace "downward-api-7086" to be "Succeeded or Failed" + Mar 7 03:43:59.601: INFO: Pod "downwardapi-volume-aee14f92-4c78-4029-a4f3-220884a3cf4d": Phase="Pending", Reason="", readiness=false. Elapsed: 6.077776ms + Mar 7 03:44:01.609: INFO: Pod "downwardapi-volume-aee14f92-4c78-4029-a4f3-220884a3cf4d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014092421s + Mar 7 03:44:03.607: INFO: Pod "downwardapi-volume-aee14f92-4c78-4029-a4f3-220884a3cf4d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.012328283s + Mar 7 03:44:05.605: INFO: Pod "downwardapi-volume-aee14f92-4c78-4029-a4f3-220884a3cf4d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.010358293s + STEP: Saw pod success 03/07/23 03:44:05.605 + Mar 7 03:44:05.605: INFO: Pod "downwardapi-volume-aee14f92-4c78-4029-a4f3-220884a3cf4d" satisfied condition "Succeeded or Failed" + Mar 7 03:44:05.608: INFO: Trying to get logs from node node-2 pod downwardapi-volume-aee14f92-4c78-4029-a4f3-220884a3cf4d container client-container: + STEP: delete the pod 03/07/23 03:44:05.618 + Mar 7 03:44:05.629: INFO: Waiting for pod downwardapi-volume-aee14f92-4c78-4029-a4f3-220884a3cf4d to disappear + Mar 7 03:44:05.631: INFO: Pod downwardapi-volume-aee14f92-4c78-4029-a4f3-220884a3cf4d no longer exists + [AfterEach] [sig-storage] Downward API volume + test/e2e/framework/framework.go:187 + Mar 7 03:44:05.631: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + STEP: Destroying namespace "downward-api-7086" for this suite. 
03/07/23 03:44:05.634 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-auth] ServiceAccounts + should allow opting out of API token automount [Conformance] + test/e2e/auth/service_accounts.go:158 +[BeforeEach] [sig-auth] ServiceAccounts + test/e2e/framework/framework.go:186 +STEP: Creating a kubernetes client 03/07/23 03:44:05.64 +Mar 7 03:44:05.640: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 +STEP: Building a namespace api object, basename svcaccounts 03/07/23 03:44:05.641 +STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 03:44:05.66 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 03:44:05.663 +[It] should allow opting out of API token automount [Conformance] + test/e2e/auth/service_accounts.go:158 +Mar 7 03:44:05.678: INFO: created pod pod-service-account-defaultsa +Mar 7 03:44:05.678: INFO: pod pod-service-account-defaultsa service account token volume mount: true +Mar 7 03:44:05.681: INFO: created pod pod-service-account-mountsa +Mar 7 03:44:05.681: INFO: pod pod-service-account-mountsa service account token volume mount: true +Mar 7 03:44:05.687: INFO: created pod pod-service-account-nomountsa +Mar 7 03:44:05.687: INFO: pod pod-service-account-nomountsa service account token volume mount: false +Mar 7 03:44:05.692: INFO: created pod pod-service-account-defaultsa-mountspec +Mar 7 03:44:05.692: INFO: pod pod-service-account-defaultsa-mountspec service account token volume mount: true +Mar 7 03:44:05.698: INFO: created pod pod-service-account-mountsa-mountspec +Mar 7 03:44:05.698: INFO: pod pod-service-account-mountsa-mountspec service account token volume mount: true +Mar 7 03:44:05.703: INFO: created pod pod-service-account-nomountsa-mountspec +Mar 7 03:44:05.703: INFO: pod pod-service-account-nomountsa-mountspec service account token volume mount: true +Mar 7 03:44:05.707: INFO: created pod pod-service-account-defaultsa-nomountspec +Mar 7 03:44:05.707: INFO: pod pod-service-account-defaultsa-nomountspec service account token volume mount: false +Mar 7 03:44:05.713: INFO: created pod pod-service-account-mountsa-nomountspec +Mar 7 03:44:05.713: INFO: pod pod-service-account-mountsa-nomountspec service account token volume mount: false +Mar 7 03:44:05.719: INFO: created pod pod-service-account-nomountsa-nomountspec +Mar 7 03:44:05.719: INFO: pod pod-service-account-nomountsa-nomountspec service account token volume mount: false +[AfterEach] [sig-auth] ServiceAccounts + test/e2e/framework/framework.go:187 +Mar 7 03:44:05.720: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "svcaccounts-4203" for this suite. 
03/07/23 03:44:05.725 +{"msg":"PASSED [sig-auth] ServiceAccounts should allow opting out of API token automount [Conformance]","completed":237,"skipped":4158,"failed":0} +------------------------------ +• [0.092 seconds] +[sig-auth] ServiceAccounts +test/e2e/auth/framework.go:23 + should allow opting out of API token automount [Conformance] + test/e2e/auth/service_accounts.go:158 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-auth] ServiceAccounts + test/e2e/framework/framework.go:186 + STEP: Creating a kubernetes client 03/07/23 03:44:05.64 + Mar 7 03:44:05.640: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 + STEP: Building a namespace api object, basename svcaccounts 03/07/23 03:44:05.641 + STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 03:44:05.66 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 03:44:05.663 + [It] should allow opting out of API token automount [Conformance] + test/e2e/auth/service_accounts.go:158 + Mar 7 03:44:05.678: INFO: created pod pod-service-account-defaultsa + Mar 7 03:44:05.678: INFO: pod pod-service-account-defaultsa service account token volume mount: true + Mar 7 03:44:05.681: INFO: created pod pod-service-account-mountsa + Mar 7 03:44:05.681: INFO: pod pod-service-account-mountsa service account token volume mount: true + Mar 7 03:44:05.687: INFO: created pod pod-service-account-nomountsa + Mar 7 03:44:05.687: INFO: pod pod-service-account-nomountsa service account token volume mount: false + Mar 7 03:44:05.692: INFO: created pod pod-service-account-defaultsa-mountspec + Mar 7 03:44:05.692: INFO: pod pod-service-account-defaultsa-mountspec service account token volume mount: true + Mar 7 03:44:05.698: INFO: created pod pod-service-account-mountsa-mountspec + Mar 7 03:44:05.698: INFO: pod pod-service-account-mountsa-mountspec service account token volume mount: true + Mar 7 03:44:05.703: INFO: created pod pod-service-account-nomountsa-mountspec + Mar 7 03:44:05.703: INFO: pod pod-service-account-nomountsa-mountspec service account token volume mount: true + Mar 7 03:44:05.707: INFO: created pod pod-service-account-defaultsa-nomountspec + Mar 7 03:44:05.707: INFO: pod pod-service-account-defaultsa-nomountspec service account token volume mount: false + Mar 7 03:44:05.713: INFO: created pod pod-service-account-mountsa-nomountspec + Mar 7 03:44:05.713: INFO: pod pod-service-account-mountsa-nomountspec service account token volume mount: false + Mar 7 03:44:05.719: INFO: created pod pod-service-account-nomountsa-nomountspec + Mar 7 03:44:05.719: INFO: pod pod-service-account-nomountsa-nomountspec service account token volume mount: false + [AfterEach] [sig-auth] ServiceAccounts + test/e2e/framework/framework.go:187 + Mar 7 03:44:05.720: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + STEP: Destroying namespace "svcaccounts-4203" for this suite. 
03/07/23 03:44:05.725 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSSS +------------------------------ +[sig-apps] Deployment + should run the lifecycle of a Deployment [Conformance] + test/e2e/apps/deployment.go:185 +[BeforeEach] [sig-apps] Deployment + test/e2e/framework/framework.go:186 +STEP: Creating a kubernetes client 03/07/23 03:44:05.732 +Mar 7 03:44:05.732: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 +STEP: Building a namespace api object, basename deployment 03/07/23 03:44:05.733 +STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 03:44:05.75 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 03:44:05.753 +[BeforeEach] [sig-apps] Deployment + test/e2e/apps/deployment.go:91 +[It] should run the lifecycle of a Deployment [Conformance] + test/e2e/apps/deployment.go:185 +STEP: creating a Deployment 03/07/23 03:44:05.759 +STEP: waiting for Deployment to be created 03/07/23 03:44:05.763 +STEP: waiting for all Replicas to be Ready 03/07/23 03:44:05.765 +Mar 7 03:44:05.766: INFO: observed Deployment test-deployment in namespace deployment-1905 with ReadyReplicas 0 and labels map[test-deployment-static:true] +Mar 7 03:44:05.766: INFO: observed Deployment test-deployment in namespace deployment-1905 with ReadyReplicas 0 and labels map[test-deployment-static:true] +Mar 7 03:44:05.773: INFO: observed Deployment test-deployment in namespace deployment-1905 with ReadyReplicas 0 and labels map[test-deployment-static:true] +Mar 7 03:44:05.773: INFO: observed Deployment test-deployment in namespace deployment-1905 with ReadyReplicas 0 and labels map[test-deployment-static:true] +Mar 7 03:44:05.783: INFO: observed Deployment test-deployment in namespace deployment-1905 with ReadyReplicas 0 and labels map[test-deployment-static:true] +Mar 7 03:44:05.783: INFO: observed Deployment test-deployment in namespace deployment-1905 with ReadyReplicas 0 and labels map[test-deployment-static:true] +Mar 7 03:44:05.804: INFO: observed Deployment test-deployment in namespace deployment-1905 with ReadyReplicas 0 and labels map[test-deployment-static:true] +Mar 7 03:44:05.804: INFO: observed Deployment test-deployment in namespace deployment-1905 with ReadyReplicas 0 and labels map[test-deployment-static:true] +Mar 7 03:44:07.612: INFO: observed Deployment test-deployment in namespace deployment-1905 with ReadyReplicas 1 and labels map[test-deployment-static:true] +Mar 7 03:44:07.612: INFO: observed Deployment test-deployment in namespace deployment-1905 with ReadyReplicas 1 and labels map[test-deployment-static:true] +Mar 7 03:44:15.173: INFO: observed Deployment test-deployment in namespace deployment-1905 with ReadyReplicas 2 and labels map[test-deployment-static:true] +STEP: patching the Deployment 03/07/23 03:44:15.173 +W0307 03:44:15.183508 22 warnings.go:70] unknown field "spec.template.spec.TerminationGracePeriodSeconds" +Mar 7 03:44:15.184: INFO: observed event type ADDED +STEP: waiting for Replicas to scale 03/07/23 03:44:15.184 +Mar 7 03:44:15.185: INFO: observed Deployment test-deployment in namespace deployment-1905 with ReadyReplicas 0 +Mar 7 03:44:15.185: INFO: observed Deployment test-deployment in namespace deployment-1905 with ReadyReplicas 0 +Mar 7 03:44:15.185: INFO: observed Deployment test-deployment in namespace deployment-1905 with ReadyReplicas 0 +Mar 7 03:44:15.185: INFO: observed Deployment test-deployment in namespace deployment-1905 with ReadyReplicas 0 +Mar 7 03:44:15.185: INFO: 
observed Deployment test-deployment in namespace deployment-1905 with ReadyReplicas 0 +Mar 7 03:44:15.185: INFO: observed Deployment test-deployment in namespace deployment-1905 with ReadyReplicas 0 +Mar 7 03:44:15.185: INFO: observed Deployment test-deployment in namespace deployment-1905 with ReadyReplicas 0 +Mar 7 03:44:15.185: INFO: observed Deployment test-deployment in namespace deployment-1905 with ReadyReplicas 0 +Mar 7 03:44:15.185: INFO: observed Deployment test-deployment in namespace deployment-1905 with ReadyReplicas 1 +Mar 7 03:44:15.185: INFO: observed Deployment test-deployment in namespace deployment-1905 with ReadyReplicas 1 +Mar 7 03:44:15.185: INFO: observed Deployment test-deployment in namespace deployment-1905 with ReadyReplicas 2 +Mar 7 03:44:15.185: INFO: observed Deployment test-deployment in namespace deployment-1905 with ReadyReplicas 2 +Mar 7 03:44:15.185: INFO: observed Deployment test-deployment in namespace deployment-1905 with ReadyReplicas 2 +Mar 7 03:44:15.185: INFO: observed Deployment test-deployment in namespace deployment-1905 with ReadyReplicas 2 +Mar 7 03:44:15.192: INFO: observed Deployment test-deployment in namespace deployment-1905 with ReadyReplicas 2 +Mar 7 03:44:15.192: INFO: observed Deployment test-deployment in namespace deployment-1905 with ReadyReplicas 2 +Mar 7 03:44:15.224: INFO: observed Deployment test-deployment in namespace deployment-1905 with ReadyReplicas 2 +Mar 7 03:44:15.224: INFO: observed Deployment test-deployment in namespace deployment-1905 with ReadyReplicas 2 +Mar 7 03:44:15.250: INFO: observed Deployment test-deployment in namespace deployment-1905 with ReadyReplicas 1 +Mar 7 03:44:15.250: INFO: observed Deployment test-deployment in namespace deployment-1905 with ReadyReplicas 1 +Mar 7 03:44:15.255: INFO: observed Deployment test-deployment in namespace deployment-1905 with ReadyReplicas 1 +Mar 7 03:44:15.255: INFO: observed Deployment test-deployment in namespace deployment-1905 with ReadyReplicas 1 +Mar 7 03:44:19.431: INFO: observed Deployment test-deployment in namespace deployment-1905 with ReadyReplicas 2 +Mar 7 03:44:19.431: INFO: observed Deployment test-deployment in namespace deployment-1905 with ReadyReplicas 2 +Mar 7 03:44:19.449: INFO: observed Deployment test-deployment in namespace deployment-1905 with ReadyReplicas 1 +STEP: listing Deployments 03/07/23 03:44:19.449 +Mar 7 03:44:19.456: INFO: Found test-deployment with labels: map[test-deployment:patched test-deployment-static:true] +STEP: updating the Deployment 03/07/23 03:44:19.456 +Mar 7 03:44:19.465: INFO: observed Deployment test-deployment in namespace deployment-1905 with ReadyReplicas 1 +STEP: fetching the DeploymentStatus 03/07/23 03:44:19.465 +Mar 7 03:44:19.474: INFO: observed Deployment test-deployment in namespace deployment-1905 with ReadyReplicas 1 and labels map[test-deployment:updated test-deployment-static:true] +Mar 7 03:44:19.475: INFO: observed Deployment test-deployment in namespace deployment-1905 with ReadyReplicas 1 and labels map[test-deployment:updated test-deployment-static:true] +Mar 7 03:44:19.495: INFO: observed Deployment test-deployment in namespace deployment-1905 with ReadyReplicas 1 and labels map[test-deployment:updated test-deployment-static:true] +Mar 7 03:44:19.506: INFO: observed Deployment test-deployment in namespace deployment-1905 with ReadyReplicas 1 and labels map[test-deployment:updated test-deployment-static:true] +Mar 7 03:44:19.512: INFO: observed Deployment test-deployment in namespace deployment-1905 
with ReadyReplicas 1 and labels map[test-deployment:updated test-deployment-static:true] +Mar 7 03:44:20.700: INFO: observed Deployment test-deployment in namespace deployment-1905 with ReadyReplicas 2 and labels map[test-deployment:updated test-deployment-static:true] +Mar 7 03:44:20.784: INFO: observed Deployment test-deployment in namespace deployment-1905 with ReadyReplicas 3 and labels map[test-deployment:updated test-deployment-static:true] +Mar 7 03:44:20.828: INFO: observed Deployment test-deployment in namespace deployment-1905 with ReadyReplicas 2 and labels map[test-deployment:updated test-deployment-static:true] +Mar 7 03:44:20.836: INFO: observed Deployment test-deployment in namespace deployment-1905 with ReadyReplicas 2 and labels map[test-deployment:updated test-deployment-static:true] +Mar 7 03:44:21.688: INFO: observed Deployment test-deployment in namespace deployment-1905 with ReadyReplicas 3 and labels map[test-deployment:updated test-deployment-static:true] +STEP: patching the DeploymentStatus 03/07/23 03:44:21.702 +STEP: fetching the DeploymentStatus 03/07/23 03:44:21.708 +Mar 7 03:44:21.711: INFO: observed Deployment test-deployment in namespace deployment-1905 with ReadyReplicas 1 +Mar 7 03:44:21.711: INFO: observed Deployment test-deployment in namespace deployment-1905 with ReadyReplicas 1 +Mar 7 03:44:21.711: INFO: observed Deployment test-deployment in namespace deployment-1905 with ReadyReplicas 1 +Mar 7 03:44:21.711: INFO: observed Deployment test-deployment in namespace deployment-1905 with ReadyReplicas 1 +Mar 7 03:44:21.711: INFO: observed Deployment test-deployment in namespace deployment-1905 with ReadyReplicas 1 +Mar 7 03:44:21.711: INFO: observed Deployment test-deployment in namespace deployment-1905 with ReadyReplicas 2 +Mar 7 03:44:21.712: INFO: observed Deployment test-deployment in namespace deployment-1905 with ReadyReplicas 3 +Mar 7 03:44:21.712: INFO: observed Deployment test-deployment in namespace deployment-1905 with ReadyReplicas 2 +Mar 7 03:44:21.712: INFO: observed Deployment test-deployment in namespace deployment-1905 with ReadyReplicas 2 +Mar 7 03:44:21.712: INFO: observed Deployment test-deployment in namespace deployment-1905 with ReadyReplicas 3 +STEP: deleting the Deployment 03/07/23 03:44:21.712 +Mar 7 03:44:21.719: INFO: observed event type MODIFIED +Mar 7 03:44:21.719: INFO: observed event type MODIFIED +Mar 7 03:44:21.719: INFO: observed event type MODIFIED +Mar 7 03:44:21.719: INFO: observed event type MODIFIED +Mar 7 03:44:21.719: INFO: observed event type MODIFIED +Mar 7 03:44:21.720: INFO: observed event type MODIFIED +Mar 7 03:44:21.720: INFO: observed event type MODIFIED +Mar 7 03:44:21.720: INFO: observed event type MODIFIED +Mar 7 03:44:21.720: INFO: observed event type MODIFIED +Mar 7 03:44:21.720: INFO: observed event type MODIFIED +Mar 7 03:44:21.720: INFO: observed event type MODIFIED +[AfterEach] [sig-apps] Deployment + test/e2e/apps/deployment.go:84 +Mar 7 03:44:21.723: INFO: Log out all the ReplicaSets if there is no deployment created +Mar 7 03:44:21.726: INFO: ReplicaSet "test-deployment-54cc775c4b": +&ReplicaSet{ObjectMeta:{test-deployment-54cc775c4b deployment-1905 1fb73d23-f39c-408a-9615-3d9187cb70d1 68419 4 2023-03-07 03:44:15 +0000 UTC map[pod-template-hash:54cc775c4b test-deployment-static:true] map[deployment.kubernetes.io/desired-replicas:2 deployment.kubernetes.io/max-replicas:3 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment test-deployment 1587a7b1-b96b-4c7d-8ae7-54cf6554a5bf 0xc005547c17 
0xc005547c18}] [] [{kube-controller-manager Update apps/v1 2023-03-07 03:44:21 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:pod-template-hash":{},"f:test-deployment-static":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"1587a7b1-b96b-4c7d-8ae7-54cf6554a5bf\"}":{}}},"f:spec":{"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:pod-template-hash":{},"f:test-deployment-static":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"test-deployment\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} } {kube-controller-manager Update apps/v1 2023-03-07 03:44:21 +0000 UTC FieldsV1 {"f:status":{"f:observedGeneration":{},"f:replicas":{}}} status}]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{pod-template-hash: 54cc775c4b,test-deployment-static: true,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[pod-template-hash:54cc775c4b test-deployment-static:true] map[] [] [] []} {[] [] [{test-deployment registry.k8s.io/pause:3.8 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc005547ca0 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] nil }},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:4,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} + +Mar 7 03:44:21.731: INFO: pod: "test-deployment-54cc775c4b-4nbjs": +&Pod{ObjectMeta:{test-deployment-54cc775c4b-4nbjs test-deployment-54cc775c4b- deployment-1905 5a83dc25-44a1-4deb-affd-b8a55eafff72 68400 0 2023-03-07 03:44:15 +0000 UTC 2023-03-07 03:44:21 +0000 UTC 0xc00221da28 map[pod-template-hash:54cc775c4b test-deployment-static:true] map[cni.projectcalico.org/containerID:3187535d9a34629070b325762a657b86e028e7e4da8b26462517804b226818d9 cni.projectcalico.org/podIP: cni.projectcalico.org/podIPs:] [{apps/v1 ReplicaSet test-deployment-54cc775c4b 1fb73d23-f39c-408a-9615-3d9187cb70d1 0xc00221da57 0xc00221da58}] [] [{kube-controller-manager Update v1 2023-03-07 03:44:15 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:pod-template-hash":{},"f:test-deployment-static":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"1fb73d23-f39c-408a-9615-3d9187cb70d1\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"test-deployment\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 
2023-03-07 03:44:19 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.233.247.56\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} status} {calico Update v1 2023-03-07 03:44:20 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:cni.projectcalico.org/containerID":{},"f:cni.projectcalico.org/podIP":{},"f:cni.projectcalico.org/podIPs":{}}}} status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-cl7rv,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:test-deployment,Image:registry.k8s.io/pause:3.8,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-cl7rv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*1,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:node-2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-sche
duler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,HostUsers:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-03-07 03:44:15 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-03-07 03:44:16 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-03-07 03:44:16 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-03-07 03:44:15 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:192.168.1.102,PodIP:10.233.247.56,StartTime:2023-03-07 03:44:15 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:test-deployment,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2023-03-07 03:44:16 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:registry.k8s.io/pause:3.8,ImageID:registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d,ContainerID:containerd://53b04e6ee15220cf6fa207292302ae77aa48abced405ea6b99db5e6086dafd13,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.233.247.56,},},EphemeralContainerStatuses:[]ContainerStatus{},},} + +Mar 7 03:44:21.732: INFO: pod: "test-deployment-54cc775c4b-tk9sb": +&Pod{ObjectMeta:{test-deployment-54cc775c4b-tk9sb test-deployment-54cc775c4b- deployment-1905 0bf95d04-b688-4cad-a96f-d41cec53d082 68416 0 2023-03-07 03:44:19 +0000 UTC 2023-03-07 03:44:22 +0000 UTC 0xc00221dc40 map[pod-template-hash:54cc775c4b test-deployment-static:true] map[cni.projectcalico.org/containerID:1da80af16c3848f7a19024b54481f24ff90233d6e4ec170f04b79da1d18abebc cni.projectcalico.org/podIP:10.233.84.144/32 cni.projectcalico.org/podIPs:10.233.84.144/32] [{apps/v1 ReplicaSet test-deployment-54cc775c4b 1fb73d23-f39c-408a-9615-3d9187cb70d1 0xc00221dc97 0xc00221dc98}] [] [{calico Update v1 2023-03-07 03:44:19 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:cni.projectcalico.org/containerID":{},"f:cni.projectcalico.org/podIP":{},"f:cni.projectcalico.org/podIPs":{}}}} status} {kube-controller-manager Update v1 2023-03-07 03:44:19 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:pod-template-hash":{},"f:test-deployment-static":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"1fb73d23-f39c-408a-9615-3d9187cb70d1\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"test-deployment\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2023-03-07 03:44:20 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.233.84.144\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-jf4j9,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:test-deployment,Image:registry.k8s.io/pause:3.8,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-jf4j9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*1,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:node-
1,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,HostUsers:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-03-07 03:44:19 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-03-07 03:44:20 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-03-07 03:44:20 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-03-07 03:44:19 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:192.168.1.101,PodIP:10.233.84.144,StartTime:2023-03-07 03:44:19 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:test-deployment,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2023-03-07 03:44:20 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:registry.k8s.io/pause:3.8,ImageID:registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d,ContainerID:containerd://0fa93e9806622191ba145c3b7a227eef6ff947d9cf4c0416fba73dc10f08bf0e,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.233.84.144,},},EphemeralContainerStatuses:[]ContainerStatus{},},} + +Mar 7 03:44:21.732: INFO: ReplicaSet "test-deployment-7c7d8d58c8": +&ReplicaSet{ObjectMeta:{test-deployment-7c7d8d58c8 deployment-1905 b8352d7a-36ec-44ca-a65d-2526333af30a 68412 2 2023-03-07 03:44:19 +0000 UTC map[pod-template-hash:7c7d8d58c8 test-deployment-static:true] map[deployment.kubernetes.io/desired-replicas:2 deployment.kubernetes.io/max-replicas:3 deployment.kubernetes.io/revision:3] [{apps/v1 Deployment test-deployment 1587a7b1-b96b-4c7d-8ae7-54cf6554a5bf 0xc005547d07 0xc005547d08}] [] [{kube-controller-manager Update apps/v1 2023-03-07 03:44:20 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:pod-template-hash":{},"f:test-deployment-static":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"1587a7b1-b96b-4c7d-8ae7-54cf6554a5bf\"}":{}}},"f:spec":{"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:pod-template-hash":{},"f:test-deployment-static":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"test-deployment\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} } {kube-controller-manager Update apps/v1 2023-03-07 03:44:21 +0000 UTC FieldsV1 {"f:status":{"f:availableReplicas":{},"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{}}} status}]},Spec:ReplicaSetSpec{Replicas:*2,Selector:&v1.LabelSelector{MatchLabels:map[string]string{pod-template-hash: 7c7d8d58c8,test-deployment-static: true,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[pod-template-hash:7c7d8d58c8 test-deployment-static:true] map[] [] [] []} {[] [] [{test-deployment registry.k8s.io/e2e-test-images/httpd:2.4.38-2 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc005547da0 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] nil }},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:2,FullyLabeledReplicas:2,ObservedGeneration:2,ReadyReplicas:2,AvailableReplicas:2,Conditions:[]ReplicaSetCondition{},},} + +Mar 7 03:44:21.738: INFO: pod: "test-deployment-7c7d8d58c8-lvcqn": +&Pod{ObjectMeta:{test-deployment-7c7d8d58c8-lvcqn test-deployment-7c7d8d58c8- deployment-1905 373853d1-f303-432d-b994-a0e6f276a640 68411 0 2023-03-07 03:44:20 +0000 UTC map[pod-template-hash:7c7d8d58c8 test-deployment-static:true] map[cni.projectcalico.org/containerID:338aaf22cfcd71c9bfc2ef2bc60718bda7fb2fc9a85b8d670b75ec8663a89b2d cni.projectcalico.org/podIP:10.233.84.143/32 cni.projectcalico.org/podIPs:10.233.84.143/32] [{apps/v1 ReplicaSet test-deployment-7c7d8d58c8 b8352d7a-36ec-44ca-a65d-2526333af30a 0xc003e65387 0xc003e65388}] [] [{kube-controller-manager Update v1 2023-03-07 03:44:20 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:pod-template-hash":{},"f:test-deployment-static":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b8352d7a-36ec-44ca-a65d-2526333af30a\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"test-deployment\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {calico Update v1 2023-03-07 
03:44:21 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:cni.projectcalico.org/containerID":{},"f:cni.projectcalico.org/podIP":{},"f:cni.projectcalico.org/podIPs":{}}}} status} {kubelet Update v1 2023-03-07 03:44:21 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.233.84.143\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-9pd22,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:test-deployment,Image:registry.k8s.io/e2e-test-images/httpd:2.4.38-2,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-9pd22,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*1,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:node-1,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:d
efault-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,HostUsers:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-03-07 03:44:20 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-03-07 03:44:21 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-03-07 03:44:21 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-03-07 03:44:20 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:192.168.1.101,PodIP:10.233.84.143,StartTime:2023-03-07 03:44:20 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:test-deployment,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2023-03-07 03:44:21 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:registry.k8s.io/e2e-test-images/httpd:2.4.38-2,ImageID:registry.k8s.io/e2e-test-images/httpd@sha256:1b9d1b2f36cb2dbee1960e82a9344aeb11bd4c4c03abf5e1853e0559c23855e3,ContainerID:containerd://61e7cc6bca6ffc15f48ae4f345ecab8303965fd8bae07608eb74b47fad8243d0,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.233.84.143,},},EphemeralContainerStatuses:[]ContainerStatus{},},} + +Mar 7 03:44:21.738: INFO: pod: "test-deployment-7c7d8d58c8-zz42p": +&Pod{ObjectMeta:{test-deployment-7c7d8d58c8-zz42p test-deployment-7c7d8d58c8- deployment-1905 b14f1a08-dfe0-449f-b246-f54ba3557c49 68376 0 2023-03-07 03:44:19 +0000 UTC map[pod-template-hash:7c7d8d58c8 test-deployment-static:true] map[cni.projectcalico.org/containerID:8f51048b693ca4c1a10ae52aeb117232232409c36285a3121eac081543c1203e cni.projectcalico.org/podIP:10.233.247.23/32 cni.projectcalico.org/podIPs:10.233.247.23/32] [{apps/v1 ReplicaSet test-deployment-7c7d8d58c8 b8352d7a-36ec-44ca-a65d-2526333af30a 0xc003e655b7 0xc003e655b8}] [] [{kube-controller-manager Update v1 2023-03-07 03:44:19 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:pod-template-hash":{},"f:test-deployment-static":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b8352d7a-36ec-44ca-a65d-2526333af30a\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"test-deployment\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {calico Update v1 2023-03-07 03:44:20 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:cni.projectcalico.org/containerID":{},"f:cni.projectcalico.org/podIP":{},"f:cni.projectcalico.org/podIPs":{}}}} status} {kubelet Update v1 2023-03-07 03:44:20 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.233.247.23\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-bvf5s,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:test-deployment,Image:registry.k8s.io/e2e-test-images/httpd:2.4.38-2,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-bvf5s,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*1,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:node-2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContain
ers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,HostUsers:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-03-07 03:44:19 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-03-07 03:44:20 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-03-07 03:44:20 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-03-07 03:44:19 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:192.168.1.102,PodIP:10.233.247.23,StartTime:2023-03-07 03:44:19 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:test-deployment,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2023-03-07 03:44:20 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:registry.k8s.io/e2e-test-images/httpd:2.4.38-2,ImageID:registry.k8s.io/e2e-test-images/httpd@sha256:1b9d1b2f36cb2dbee1960e82a9344aeb11bd4c4c03abf5e1853e0559c23855e3,ContainerID:containerd://e2fc2178e0d4583924ff0311a77ca5da11e64d53e28c67d7af3af5bf3b2a9eec,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.233.247.23,},},EphemeralContainerStatuses:[]ContainerStatus{},},} + +Mar 7 03:44:21.738: INFO: ReplicaSet "test-deployment-8594bb6fdd": +&ReplicaSet{ObjectMeta:{test-deployment-8594bb6fdd deployment-1905 cf9d739b-e0e7-4c74-b4ae-a39ab8a31d64 68324 3 2023-03-07 03:44:05 +0000 UTC map[pod-template-hash:8594bb6fdd test-deployment-static:true] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-deployment 1587a7b1-b96b-4c7d-8ae7-54cf6554a5bf 0xc005547e07 0xc005547e08}] [] [{kube-controller-manager Update apps/v1 2023-03-07 03:44:19 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:pod-template-hash":{},"f:test-deployment-static":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"1587a7b1-b96b-4c7d-8ae7-54cf6554a5bf\"}":{}}},"f:spec":{"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:pod-template-hash":{},"f:test-deployment-static":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"test-deployment\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} } {kube-controller-manager Update apps/v1 2023-03-07 03:44:19 +0000 UTC FieldsV1 {"f:status":{"f:observedGeneration":{},"f:replicas":{}}} status}]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{pod-template-hash: 8594bb6fdd,test-deployment-static: true,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[pod-template-hash:8594bb6fdd test-deployment-static:true] map[] [] [] []} {[] [] [{test-deployment registry.k8s.io/e2e-test-images/agnhost:2.40 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc005547e90 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] nil }},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:3,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} + +[AfterEach] [sig-apps] Deployment + test/e2e/framework/framework.go:187 +Mar 7 03:44:21.741: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "deployment-1905" for this suite. 
03/07/23 03:44:21.745 +{"msg":"PASSED [sig-apps] Deployment should run the lifecycle of a Deployment [Conformance]","completed":238,"skipped":4170,"failed":0} +------------------------------ +• [SLOW TEST] [16.020 seconds] +[sig-apps] Deployment +test/e2e/apps/framework.go:23 + should run the lifecycle of a Deployment [Conformance] + test/e2e/apps/deployment.go:185 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-apps] Deployment + test/e2e/framework/framework.go:186 + STEP: Creating a kubernetes client 03/07/23 03:44:05.732 + Mar 7 03:44:05.732: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 + STEP: Building a namespace api object, basename deployment 03/07/23 03:44:05.733 + STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 03:44:05.75 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 03:44:05.753 + [BeforeEach] [sig-apps] Deployment + test/e2e/apps/deployment.go:91 + [It] should run the lifecycle of a Deployment [Conformance] + test/e2e/apps/deployment.go:185 + STEP: creating a Deployment 03/07/23 03:44:05.759 + STEP: waiting for Deployment to be created 03/07/23 03:44:05.763 + STEP: waiting for all Replicas to be Ready 03/07/23 03:44:05.765 + Mar 7 03:44:05.766: INFO: observed Deployment test-deployment in namespace deployment-1905 with ReadyReplicas 0 and labels map[test-deployment-static:true] + Mar 7 03:44:05.766: INFO: observed Deployment test-deployment in namespace deployment-1905 with ReadyReplicas 0 and labels map[test-deployment-static:true] + Mar 7 03:44:05.773: INFO: observed Deployment test-deployment in namespace deployment-1905 with ReadyReplicas 0 and labels map[test-deployment-static:true] + Mar 7 03:44:05.773: INFO: observed Deployment test-deployment in namespace deployment-1905 with ReadyReplicas 0 and labels map[test-deployment-static:true] + Mar 7 03:44:05.783: INFO: observed Deployment test-deployment in namespace deployment-1905 with ReadyReplicas 0 and labels map[test-deployment-static:true] + Mar 7 03:44:05.783: INFO: observed Deployment test-deployment in namespace deployment-1905 with ReadyReplicas 0 and labels map[test-deployment-static:true] + Mar 7 03:44:05.804: INFO: observed Deployment test-deployment in namespace deployment-1905 with ReadyReplicas 0 and labels map[test-deployment-static:true] + Mar 7 03:44:05.804: INFO: observed Deployment test-deployment in namespace deployment-1905 with ReadyReplicas 0 and labels map[test-deployment-static:true] + Mar 7 03:44:07.612: INFO: observed Deployment test-deployment in namespace deployment-1905 with ReadyReplicas 1 and labels map[test-deployment-static:true] + Mar 7 03:44:07.612: INFO: observed Deployment test-deployment in namespace deployment-1905 with ReadyReplicas 1 and labels map[test-deployment-static:true] + Mar 7 03:44:15.173: INFO: observed Deployment test-deployment in namespace deployment-1905 with ReadyReplicas 2 and labels map[test-deployment-static:true] + STEP: patching the Deployment 03/07/23 03:44:15.173 + W0307 03:44:15.183508 22 warnings.go:70] unknown field "spec.template.spec.TerminationGracePeriodSeconds" + Mar 7 03:44:15.184: INFO: observed event type ADDED + STEP: waiting for Replicas to scale 03/07/23 03:44:15.184 + Mar 7 03:44:15.185: INFO: observed Deployment test-deployment in namespace deployment-1905 with ReadyReplicas 0 + Mar 7 03:44:15.185: INFO: observed Deployment test-deployment in namespace deployment-1905 with ReadyReplicas 0 + Mar 7 03:44:15.185: INFO: observed Deployment test-deployment in 
namespace deployment-1905 with ReadyReplicas 0 + Mar 7 03:44:15.185: INFO: observed Deployment test-deployment in namespace deployment-1905 with ReadyReplicas 0 + Mar 7 03:44:15.185: INFO: observed Deployment test-deployment in namespace deployment-1905 with ReadyReplicas 0 + Mar 7 03:44:15.185: INFO: observed Deployment test-deployment in namespace deployment-1905 with ReadyReplicas 0 + Mar 7 03:44:15.185: INFO: observed Deployment test-deployment in namespace deployment-1905 with ReadyReplicas 0 + Mar 7 03:44:15.185: INFO: observed Deployment test-deployment in namespace deployment-1905 with ReadyReplicas 0 + Mar 7 03:44:15.185: INFO: observed Deployment test-deployment in namespace deployment-1905 with ReadyReplicas 1 + Mar 7 03:44:15.185: INFO: observed Deployment test-deployment in namespace deployment-1905 with ReadyReplicas 1 + Mar 7 03:44:15.185: INFO: observed Deployment test-deployment in namespace deployment-1905 with ReadyReplicas 2 + Mar 7 03:44:15.185: INFO: observed Deployment test-deployment in namespace deployment-1905 with ReadyReplicas 2 + Mar 7 03:44:15.185: INFO: observed Deployment test-deployment in namespace deployment-1905 with ReadyReplicas 2 + Mar 7 03:44:15.185: INFO: observed Deployment test-deployment in namespace deployment-1905 with ReadyReplicas 2 + Mar 7 03:44:15.192: INFO: observed Deployment test-deployment in namespace deployment-1905 with ReadyReplicas 2 + Mar 7 03:44:15.192: INFO: observed Deployment test-deployment in namespace deployment-1905 with ReadyReplicas 2 + Mar 7 03:44:15.224: INFO: observed Deployment test-deployment in namespace deployment-1905 with ReadyReplicas 2 + Mar 7 03:44:15.224: INFO: observed Deployment test-deployment in namespace deployment-1905 with ReadyReplicas 2 + Mar 7 03:44:15.250: INFO: observed Deployment test-deployment in namespace deployment-1905 with ReadyReplicas 1 + Mar 7 03:44:15.250: INFO: observed Deployment test-deployment in namespace deployment-1905 with ReadyReplicas 1 + Mar 7 03:44:15.255: INFO: observed Deployment test-deployment in namespace deployment-1905 with ReadyReplicas 1 + Mar 7 03:44:15.255: INFO: observed Deployment test-deployment in namespace deployment-1905 with ReadyReplicas 1 + Mar 7 03:44:19.431: INFO: observed Deployment test-deployment in namespace deployment-1905 with ReadyReplicas 2 + Mar 7 03:44:19.431: INFO: observed Deployment test-deployment in namespace deployment-1905 with ReadyReplicas 2 + Mar 7 03:44:19.449: INFO: observed Deployment test-deployment in namespace deployment-1905 with ReadyReplicas 1 + STEP: listing Deployments 03/07/23 03:44:19.449 + Mar 7 03:44:19.456: INFO: Found test-deployment with labels: map[test-deployment:patched test-deployment-static:true] + STEP: updating the Deployment 03/07/23 03:44:19.456 + Mar 7 03:44:19.465: INFO: observed Deployment test-deployment in namespace deployment-1905 with ReadyReplicas 1 + STEP: fetching the DeploymentStatus 03/07/23 03:44:19.465 + Mar 7 03:44:19.474: INFO: observed Deployment test-deployment in namespace deployment-1905 with ReadyReplicas 1 and labels map[test-deployment:updated test-deployment-static:true] + Mar 7 03:44:19.475: INFO: observed Deployment test-deployment in namespace deployment-1905 with ReadyReplicas 1 and labels map[test-deployment:updated test-deployment-static:true] + Mar 7 03:44:19.495: INFO: observed Deployment test-deployment in namespace deployment-1905 with ReadyReplicas 1 and labels map[test-deployment:updated test-deployment-static:true] + Mar 7 03:44:19.506: INFO: observed Deployment 
test-deployment in namespace deployment-1905 with ReadyReplicas 1 and labels map[test-deployment:updated test-deployment-static:true] + Mar 7 03:44:19.512: INFO: observed Deployment test-deployment in namespace deployment-1905 with ReadyReplicas 1 and labels map[test-deployment:updated test-deployment-static:true] + Mar 7 03:44:20.700: INFO: observed Deployment test-deployment in namespace deployment-1905 with ReadyReplicas 2 and labels map[test-deployment:updated test-deployment-static:true] + Mar 7 03:44:20.784: INFO: observed Deployment test-deployment in namespace deployment-1905 with ReadyReplicas 3 and labels map[test-deployment:updated test-deployment-static:true] + Mar 7 03:44:20.828: INFO: observed Deployment test-deployment in namespace deployment-1905 with ReadyReplicas 2 and labels map[test-deployment:updated test-deployment-static:true] + Mar 7 03:44:20.836: INFO: observed Deployment test-deployment in namespace deployment-1905 with ReadyReplicas 2 and labels map[test-deployment:updated test-deployment-static:true] + Mar 7 03:44:21.688: INFO: observed Deployment test-deployment in namespace deployment-1905 with ReadyReplicas 3 and labels map[test-deployment:updated test-deployment-static:true] + STEP: patching the DeploymentStatus 03/07/23 03:44:21.702 + STEP: fetching the DeploymentStatus 03/07/23 03:44:21.708 + Mar 7 03:44:21.711: INFO: observed Deployment test-deployment in namespace deployment-1905 with ReadyReplicas 1 + Mar 7 03:44:21.711: INFO: observed Deployment test-deployment in namespace deployment-1905 with ReadyReplicas 1 + Mar 7 03:44:21.711: INFO: observed Deployment test-deployment in namespace deployment-1905 with ReadyReplicas 1 + Mar 7 03:44:21.711: INFO: observed Deployment test-deployment in namespace deployment-1905 with ReadyReplicas 1 + Mar 7 03:44:21.711: INFO: observed Deployment test-deployment in namespace deployment-1905 with ReadyReplicas 1 + Mar 7 03:44:21.711: INFO: observed Deployment test-deployment in namespace deployment-1905 with ReadyReplicas 2 + Mar 7 03:44:21.712: INFO: observed Deployment test-deployment in namespace deployment-1905 with ReadyReplicas 3 + Mar 7 03:44:21.712: INFO: observed Deployment test-deployment in namespace deployment-1905 with ReadyReplicas 2 + Mar 7 03:44:21.712: INFO: observed Deployment test-deployment in namespace deployment-1905 with ReadyReplicas 2 + Mar 7 03:44:21.712: INFO: observed Deployment test-deployment in namespace deployment-1905 with ReadyReplicas 3 + STEP: deleting the Deployment 03/07/23 03:44:21.712 + Mar 7 03:44:21.719: INFO: observed event type MODIFIED + Mar 7 03:44:21.719: INFO: observed event type MODIFIED + Mar 7 03:44:21.719: INFO: observed event type MODIFIED + Mar 7 03:44:21.719: INFO: observed event type MODIFIED + Mar 7 03:44:21.719: INFO: observed event type MODIFIED + Mar 7 03:44:21.720: INFO: observed event type MODIFIED + Mar 7 03:44:21.720: INFO: observed event type MODIFIED + Mar 7 03:44:21.720: INFO: observed event type MODIFIED + Mar 7 03:44:21.720: INFO: observed event type MODIFIED + Mar 7 03:44:21.720: INFO: observed event type MODIFIED + Mar 7 03:44:21.720: INFO: observed event type MODIFIED + [AfterEach] [sig-apps] Deployment + test/e2e/apps/deployment.go:84 + Mar 7 03:44:21.723: INFO: Log out all the ReplicaSets if there is no deployment created + Mar 7 03:44:21.726: INFO: ReplicaSet "test-deployment-54cc775c4b": + &ReplicaSet{ObjectMeta:{test-deployment-54cc775c4b deployment-1905 1fb73d23-f39c-408a-9615-3d9187cb70d1 68419 4 2023-03-07 03:44:15 +0000 UTC 
map[pod-template-hash:54cc775c4b test-deployment-static:true] map[deployment.kubernetes.io/desired-replicas:2 deployment.kubernetes.io/max-replicas:3 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment test-deployment 1587a7b1-b96b-4c7d-8ae7-54cf6554a5bf 0xc005547c17 0xc005547c18}] [] [{kube-controller-manager Update apps/v1 2023-03-07 03:44:21 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:pod-template-hash":{},"f:test-deployment-static":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"1587a7b1-b96b-4c7d-8ae7-54cf6554a5bf\"}":{}}},"f:spec":{"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:pod-template-hash":{},"f:test-deployment-static":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"test-deployment\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} } {kube-controller-manager Update apps/v1 2023-03-07 03:44:21 +0000 UTC FieldsV1 {"f:status":{"f:observedGeneration":{},"f:replicas":{}}} status}]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{pod-template-hash: 54cc775c4b,test-deployment-static: true,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[pod-template-hash:54cc775c4b test-deployment-static:true] map[] [] [] []} {[] [] [{test-deployment registry.k8s.io/pause:3.8 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc005547ca0 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] nil }},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:4,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} + + Mar 7 03:44:21.731: INFO: pod: "test-deployment-54cc775c4b-4nbjs": + &Pod{ObjectMeta:{test-deployment-54cc775c4b-4nbjs test-deployment-54cc775c4b- deployment-1905 5a83dc25-44a1-4deb-affd-b8a55eafff72 68400 0 2023-03-07 03:44:15 +0000 UTC 2023-03-07 03:44:21 +0000 UTC 0xc00221da28 map[pod-template-hash:54cc775c4b test-deployment-static:true] map[cni.projectcalico.org/containerID:3187535d9a34629070b325762a657b86e028e7e4da8b26462517804b226818d9 cni.projectcalico.org/podIP: cni.projectcalico.org/podIPs:] [{apps/v1 ReplicaSet test-deployment-54cc775c4b 1fb73d23-f39c-408a-9615-3d9187cb70d1 0xc00221da57 0xc00221da58}] [] [{kube-controller-manager Update v1 2023-03-07 03:44:15 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:pod-template-hash":{},"f:test-deployment-static":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"1fb73d23-f39c-408a-9615-3d9187cb70d1\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"test-deployment\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2023-03-07 03:44:19 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.233.247.56\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} status} {calico Update v1 2023-03-07 03:44:20 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:cni.projectcalico.org/containerID":{},"f:cni.projectcalico.org/podIP":{},"f:cni.projectcalico.org/podIPs":{}}}} status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-cl7rv,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:test-deployment,Image:registry.k8s.io/pause:3.8,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-cl7rv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe
:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*1,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:node-2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,HostUsers:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-03-07 03:44:15 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-03-07 03:44:16 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-03-07 03:44:16 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-03-07 03:44:15 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:192.168.1.102,PodIP:10.233.247.56,StartTime:2023-03-07 03:44:15 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:test-deployment,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2023-03-07 03:44:16 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:registry.k8s.io/pause:3.8,ImageID:registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d,ContainerID:containerd://53b04e6ee15220cf6fa207292302ae77aa48abced405ea6b99db5e6086dafd13,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.233.247.56,},},EphemeralContainerStatuses:[]ContainerStatus{},},} + + Mar 7 03:44:21.732: INFO: pod: "test-deployment-54cc775c4b-tk9sb": + &Pod{ObjectMeta:{test-deployment-54cc775c4b-tk9sb test-deployment-54cc775c4b- deployment-1905 0bf95d04-b688-4cad-a96f-d41cec53d082 68416 0 2023-03-07 03:44:19 +0000 UTC 2023-03-07 03:44:22 +0000 UTC 0xc00221dc40 map[pod-template-hash:54cc775c4b test-deployment-static:true] map[cni.projectcalico.org/containerID:1da80af16c3848f7a19024b54481f24ff90233d6e4ec170f04b79da1d18abebc cni.projectcalico.org/podIP:10.233.84.144/32 cni.projectcalico.org/podIPs:10.233.84.144/32] [{apps/v1 ReplicaSet test-deployment-54cc775c4b 1fb73d23-f39c-408a-9615-3d9187cb70d1 0xc00221dc97 0xc00221dc98}] [] [{calico Update v1 2023-03-07 03:44:19 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:cni.projectcalico.org/containerID":{},"f:cni.projectcalico.org/podIP":{},"f:cni.projectcalico.org/podIPs":{}}}} status} {kube-controller-manager Update v1 2023-03-07 03:44:19 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:pod-template-hash":{},"f:test-deployment-static":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"1fb73d23-f39c-408a-9615-3d9187cb70d1\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"test-deployment\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2023-03-07 03:44:20 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.233.84.144\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-jf4j9,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:test-deployment,Image:registry.k8s.io/pause:3.8,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-jf4j9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevi
ce{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*1,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:node-1,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,HostUsers:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-03-07 03:44:19 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-03-07 03:44:20 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-03-07 03:44:20 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-03-07 03:44:19 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:192.168.1.101,PodIP:10.233.84.144,StartTime:2023-03-07 03:44:19 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:test-deployment,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2023-03-07 03:44:20 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:registry.k8s.io/pause:3.8,ImageID:registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d,ContainerID:containerd://0fa93e9806622191ba145c3b7a227eef6ff947d9cf4c0416fba73dc10f08bf0e,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.233.84.144,},},EphemeralContainerStatuses:[]ContainerStatus{},},} + + Mar 7 03:44:21.732: INFO: ReplicaSet "test-deployment-7c7d8d58c8": + &ReplicaSet{ObjectMeta:{test-deployment-7c7d8d58c8 deployment-1905 b8352d7a-36ec-44ca-a65d-2526333af30a 68412 2 2023-03-07 03:44:19 +0000 UTC map[pod-template-hash:7c7d8d58c8 test-deployment-static:true] map[deployment.kubernetes.io/desired-replicas:2 deployment.kubernetes.io/max-replicas:3 deployment.kubernetes.io/revision:3] [{apps/v1 Deployment test-deployment 1587a7b1-b96b-4c7d-8ae7-54cf6554a5bf 0xc005547d07 0xc005547d08}] [] [{kube-controller-manager Update apps/v1 2023-03-07 03:44:20 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:pod-template-hash":{},"f:test-deployment-static":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"1587a7b1-b96b-4c7d-8ae7-54cf6554a5bf\"}":{}}},"f:spec":{"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:pod-template-hash":{},"f:test-deployment-static":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"test-deployment\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} } {kube-controller-manager Update apps/v1 2023-03-07 03:44:21 +0000 UTC FieldsV1 {"f:status":{"f:availableReplicas":{},"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{}}} status}]},Spec:ReplicaSetSpec{Replicas:*2,Selector:&v1.LabelSelector{MatchLabels:map[string]string{pod-template-hash: 7c7d8d58c8,test-deployment-static: true,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[pod-template-hash:7c7d8d58c8 test-deployment-static:true] map[] [] [] []} {[] [] [{test-deployment registry.k8s.io/e2e-test-images/httpd:2.4.38-2 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc005547da0 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] nil }},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:2,FullyLabeledReplicas:2,ObservedGeneration:2,ReadyReplicas:2,AvailableReplicas:2,Conditions:[]ReplicaSetCondition{},},} + + Mar 7 03:44:21.738: INFO: pod: "test-deployment-7c7d8d58c8-lvcqn": + &Pod{ObjectMeta:{test-deployment-7c7d8d58c8-lvcqn test-deployment-7c7d8d58c8- deployment-1905 373853d1-f303-432d-b994-a0e6f276a640 68411 0 2023-03-07 03:44:20 +0000 UTC map[pod-template-hash:7c7d8d58c8 test-deployment-static:true] map[cni.projectcalico.org/containerID:338aaf22cfcd71c9bfc2ef2bc60718bda7fb2fc9a85b8d670b75ec8663a89b2d cni.projectcalico.org/podIP:10.233.84.143/32 cni.projectcalico.org/podIPs:10.233.84.143/32] [{apps/v1 ReplicaSet test-deployment-7c7d8d58c8 b8352d7a-36ec-44ca-a65d-2526333af30a 0xc003e65387 0xc003e65388}] [] [{kube-controller-manager Update v1 2023-03-07 03:44:20 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:pod-template-hash":{},"f:test-deployment-static":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b8352d7a-36ec-44ca-a65d-2526333af30a\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"test-deployment\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {calico Update v1 2023-03-07 
03:44:21 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:cni.projectcalico.org/containerID":{},"f:cni.projectcalico.org/podIP":{},"f:cni.projectcalico.org/podIPs":{}}}} status} {kubelet Update v1 2023-03-07 03:44:21 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.233.84.143\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-9pd22,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:test-deployment,Image:registry.k8s.io/e2e-test-images/httpd:2.4.38-2,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-9pd22,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*1,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:node-1,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:d
efault-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,HostUsers:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-03-07 03:44:20 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-03-07 03:44:21 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-03-07 03:44:21 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-03-07 03:44:20 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:192.168.1.101,PodIP:10.233.84.143,StartTime:2023-03-07 03:44:20 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:test-deployment,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2023-03-07 03:44:21 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:registry.k8s.io/e2e-test-images/httpd:2.4.38-2,ImageID:registry.k8s.io/e2e-test-images/httpd@sha256:1b9d1b2f36cb2dbee1960e82a9344aeb11bd4c4c03abf5e1853e0559c23855e3,ContainerID:containerd://61e7cc6bca6ffc15f48ae4f345ecab8303965fd8bae07608eb74b47fad8243d0,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.233.84.143,},},EphemeralContainerStatuses:[]ContainerStatus{},},} + + Mar 7 03:44:21.738: INFO: pod: "test-deployment-7c7d8d58c8-zz42p": + &Pod{ObjectMeta:{test-deployment-7c7d8d58c8-zz42p test-deployment-7c7d8d58c8- deployment-1905 b14f1a08-dfe0-449f-b246-f54ba3557c49 68376 0 2023-03-07 03:44:19 +0000 UTC map[pod-template-hash:7c7d8d58c8 test-deployment-static:true] map[cni.projectcalico.org/containerID:8f51048b693ca4c1a10ae52aeb117232232409c36285a3121eac081543c1203e cni.projectcalico.org/podIP:10.233.247.23/32 cni.projectcalico.org/podIPs:10.233.247.23/32] [{apps/v1 ReplicaSet test-deployment-7c7d8d58c8 b8352d7a-36ec-44ca-a65d-2526333af30a 0xc003e655b7 0xc003e655b8}] [] [{kube-controller-manager Update v1 2023-03-07 03:44:19 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:pod-template-hash":{},"f:test-deployment-static":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b8352d7a-36ec-44ca-a65d-2526333af30a\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"test-deployment\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {calico Update v1 2023-03-07 03:44:20 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:cni.projectcalico.org/containerID":{},"f:cni.projectcalico.org/podIP":{},"f:cni.projectcalico.org/podIPs":{}}}} status} {kubelet Update v1 2023-03-07 03:44:20 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.233.247.23\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-bvf5s,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:test-deployment,Image:registry.k8s.io/e2e-test-images/httpd:2.4.38-2,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-bvf5s,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*1,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:node-2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContain
ers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,HostUsers:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-03-07 03:44:19 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-03-07 03:44:20 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-03-07 03:44:20 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-03-07 03:44:19 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:192.168.1.102,PodIP:10.233.247.23,StartTime:2023-03-07 03:44:19 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:test-deployment,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2023-03-07 03:44:20 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:registry.k8s.io/e2e-test-images/httpd:2.4.38-2,ImageID:registry.k8s.io/e2e-test-images/httpd@sha256:1b9d1b2f36cb2dbee1960e82a9344aeb11bd4c4c03abf5e1853e0559c23855e3,ContainerID:containerd://e2fc2178e0d4583924ff0311a77ca5da11e64d53e28c67d7af3af5bf3b2a9eec,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.233.247.23,},},EphemeralContainerStatuses:[]ContainerStatus{},},} + + Mar 7 03:44:21.738: INFO: ReplicaSet "test-deployment-8594bb6fdd": + &ReplicaSet{ObjectMeta:{test-deployment-8594bb6fdd deployment-1905 cf9d739b-e0e7-4c74-b4ae-a39ab8a31d64 68324 3 2023-03-07 03:44:05 +0000 UTC map[pod-template-hash:8594bb6fdd test-deployment-static:true] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-deployment 1587a7b1-b96b-4c7d-8ae7-54cf6554a5bf 0xc005547e07 0xc005547e08}] [] [{kube-controller-manager Update apps/v1 2023-03-07 03:44:19 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:pod-template-hash":{},"f:test-deployment-static":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"1587a7b1-b96b-4c7d-8ae7-54cf6554a5bf\"}":{}}},"f:spec":{"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:pod-template-hash":{},"f:test-deployment-static":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"test-deployment\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} } {kube-controller-manager Update apps/v1 2023-03-07 03:44:19 +0000 UTC FieldsV1 {"f:status":{"f:observedGeneration":{},"f:replicas":{}}} status}]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{pod-template-hash: 8594bb6fdd,test-deployment-static: true,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[pod-template-hash:8594bb6fdd test-deployment-static:true] map[] [] [] []} {[] [] [{test-deployment registry.k8s.io/e2e-test-images/agnhost:2.40 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc005547e90 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] nil }},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:3,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} + + [AfterEach] [sig-apps] Deployment + test/e2e/framework/framework.go:187 + Mar 7 03:44:21.741: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + STEP: Destroying namespace "deployment-1905" for this suite. 
03/07/23 03:44:21.745 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-node] Pods + should contain environment variables for services [NodeConformance] [Conformance] + test/e2e/common/node/pods.go:443 +[BeforeEach] [sig-node] Pods + test/e2e/framework/framework.go:186 +STEP: Creating a kubernetes client 03/07/23 03:44:21.754 +Mar 7 03:44:21.754: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 +STEP: Building a namespace api object, basename pods 03/07/23 03:44:21.754 +STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 03:44:21.772 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 03:44:21.775 +[BeforeEach] [sig-node] Pods + test/e2e/common/node/pods.go:193 +[It] should contain environment variables for services [NodeConformance] [Conformance] + test/e2e/common/node/pods.go:443 +Mar 7 03:44:21.789: INFO: Waiting up to 5m0s for pod "server-envvars-b287f49e-546a-446f-b529-0b236076380a" in namespace "pods-5104" to be "running and ready" +Mar 7 03:44:21.797: INFO: Pod "server-envvars-b287f49e-546a-446f-b529-0b236076380a": Phase="Pending", Reason="", readiness=false. Elapsed: 7.990904ms +Mar 7 03:44:21.797: INFO: The phase of Pod server-envvars-b287f49e-546a-446f-b529-0b236076380a is Pending, waiting for it to be Running (with Ready = true) +Mar 7 03:44:23.804: INFO: Pod "server-envvars-b287f49e-546a-446f-b529-0b236076380a": Phase="Running", Reason="", readiness=true. Elapsed: 2.014350166s +Mar 7 03:44:23.804: INFO: The phase of Pod server-envvars-b287f49e-546a-446f-b529-0b236076380a is Running (Ready = true) +Mar 7 03:44:23.804: INFO: Pod "server-envvars-b287f49e-546a-446f-b529-0b236076380a" satisfied condition "running and ready" +Mar 7 03:44:23.827: INFO: Waiting up to 5m0s for pod "client-envvars-b35d46d2-80ad-4381-8684-b0ee34a30329" in namespace "pods-5104" to be "Succeeded or Failed" +Mar 7 03:44:23.835: INFO: Pod "client-envvars-b35d46d2-80ad-4381-8684-b0ee34a30329": Phase="Pending", Reason="", readiness=false. Elapsed: 7.845928ms +Mar 7 03:44:25.838: INFO: Pod "client-envvars-b35d46d2-80ad-4381-8684-b0ee34a30329": Phase="Pending", Reason="", readiness=false. Elapsed: 2.010745803s +Mar 7 03:44:27.841: INFO: Pod "client-envvars-b35d46d2-80ad-4381-8684-b0ee34a30329": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.013829037s +STEP: Saw pod success 03/07/23 03:44:27.841 +Mar 7 03:44:27.841: INFO: Pod "client-envvars-b35d46d2-80ad-4381-8684-b0ee34a30329" satisfied condition "Succeeded or Failed" +Mar 7 03:44:27.843: INFO: Trying to get logs from node node-2 pod client-envvars-b35d46d2-80ad-4381-8684-b0ee34a30329 container env3cont: +STEP: delete the pod 03/07/23 03:44:27.848 +Mar 7 03:44:27.857: INFO: Waiting for pod client-envvars-b35d46d2-80ad-4381-8684-b0ee34a30329 to disappear +Mar 7 03:44:27.860: INFO: Pod client-envvars-b35d46d2-80ad-4381-8684-b0ee34a30329 no longer exists +[AfterEach] [sig-node] Pods + test/e2e/framework/framework.go:187 +Mar 7 03:44:27.860: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "pods-5104" for this suite. 
03/07/23 03:44:27.863 +{"msg":"PASSED [sig-node] Pods should contain environment variables for services [NodeConformance] [Conformance]","completed":239,"skipped":4198,"failed":0} +------------------------------ +• [SLOW TEST] [6.115 seconds] +[sig-node] Pods +test/e2e/common/node/framework.go:23 + should contain environment variables for services [NodeConformance] [Conformance] + test/e2e/common/node/pods.go:443 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-node] Pods + test/e2e/framework/framework.go:186 + STEP: Creating a kubernetes client 03/07/23 03:44:21.754 + Mar 7 03:44:21.754: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 + STEP: Building a namespace api object, basename pods 03/07/23 03:44:21.754 + STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 03:44:21.772 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 03:44:21.775 + [BeforeEach] [sig-node] Pods + test/e2e/common/node/pods.go:193 + [It] should contain environment variables for services [NodeConformance] [Conformance] + test/e2e/common/node/pods.go:443 + Mar 7 03:44:21.789: INFO: Waiting up to 5m0s for pod "server-envvars-b287f49e-546a-446f-b529-0b236076380a" in namespace "pods-5104" to be "running and ready" + Mar 7 03:44:21.797: INFO: Pod "server-envvars-b287f49e-546a-446f-b529-0b236076380a": Phase="Pending", Reason="", readiness=false. Elapsed: 7.990904ms + Mar 7 03:44:21.797: INFO: The phase of Pod server-envvars-b287f49e-546a-446f-b529-0b236076380a is Pending, waiting for it to be Running (with Ready = true) + Mar 7 03:44:23.804: INFO: Pod "server-envvars-b287f49e-546a-446f-b529-0b236076380a": Phase="Running", Reason="", readiness=true. Elapsed: 2.014350166s + Mar 7 03:44:23.804: INFO: The phase of Pod server-envvars-b287f49e-546a-446f-b529-0b236076380a is Running (Ready = true) + Mar 7 03:44:23.804: INFO: Pod "server-envvars-b287f49e-546a-446f-b529-0b236076380a" satisfied condition "running and ready" + Mar 7 03:44:23.827: INFO: Waiting up to 5m0s for pod "client-envvars-b35d46d2-80ad-4381-8684-b0ee34a30329" in namespace "pods-5104" to be "Succeeded or Failed" + Mar 7 03:44:23.835: INFO: Pod "client-envvars-b35d46d2-80ad-4381-8684-b0ee34a30329": Phase="Pending", Reason="", readiness=false. Elapsed: 7.845928ms + Mar 7 03:44:25.838: INFO: Pod "client-envvars-b35d46d2-80ad-4381-8684-b0ee34a30329": Phase="Pending", Reason="", readiness=false. Elapsed: 2.010745803s + Mar 7 03:44:27.841: INFO: Pod "client-envvars-b35d46d2-80ad-4381-8684-b0ee34a30329": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.013829037s + STEP: Saw pod success 03/07/23 03:44:27.841 + Mar 7 03:44:27.841: INFO: Pod "client-envvars-b35d46d2-80ad-4381-8684-b0ee34a30329" satisfied condition "Succeeded or Failed" + Mar 7 03:44:27.843: INFO: Trying to get logs from node node-2 pod client-envvars-b35d46d2-80ad-4381-8684-b0ee34a30329 container env3cont: + STEP: delete the pod 03/07/23 03:44:27.848 + Mar 7 03:44:27.857: INFO: Waiting for pod client-envvars-b35d46d2-80ad-4381-8684-b0ee34a30329 to disappear + Mar 7 03:44:27.860: INFO: Pod client-envvars-b35d46d2-80ad-4381-8684-b0ee34a30329 no longer exists + [AfterEach] [sig-node] Pods + test/e2e/framework/framework.go:187 + Mar 7 03:44:27.860: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + STEP: Destroying namespace "pods-5104" for this suite. 
03/07/23 03:44:27.863 + << End Captured GinkgoWriter Output +------------------------------ +[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + should mutate custom resource with pruning [Conformance] + test/e2e/apimachinery/webhook.go:340 +[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/framework/framework.go:186 +STEP: Creating a kubernetes client 03/07/23 03:44:27.869 +Mar 7 03:44:27.869: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 +STEP: Building a namespace api object, basename webhook 03/07/23 03:44:27.87 +STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 03:44:27.883 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 03:44:27.886 +[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/apimachinery/webhook.go:89 +STEP: Setting up server cert 03/07/23 03:44:27.899 +STEP: Create role binding to let webhook read extension-apiserver-authentication 03/07/23 03:44:28.519 +STEP: Deploying the webhook pod 03/07/23 03:44:28.525 +STEP: Wait for the deployment to be ready 03/07/23 03:44:28.54 +Mar 7 03:44:28.545: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set +STEP: Deploying the webhook service 03/07/23 03:44:30.554 +STEP: Verifying the service has paired with the endpoint 03/07/23 03:44:30.566 +Mar 7 03:44:31.567: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 +[It] should mutate custom resource with pruning [Conformance] + test/e2e/apimachinery/webhook.go:340 +Mar 7 03:44:31.570: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 +STEP: Registering the mutating webhook for custom resource e2e-test-webhook-1585-crds.webhook.example.com via the AdmissionRegistration API 03/07/23 03:44:32.079 +STEP: Creating a custom resource that should be mutated by the webhook 03/07/23 03:44:32.103 +[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/framework/framework.go:187 +Mar 7 03:44:34.670: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "webhook-2911" for this suite. 03/07/23 03:44:34.68 +STEP: Destroying namespace "webhook-2911-markers" for this suite. 
03/07/23 03:44:34.686 +[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/apimachinery/webhook.go:104 +{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]","completed":240,"skipped":4198,"failed":0} +------------------------------ +• [SLOW TEST] [6.874 seconds] +[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] +test/e2e/apimachinery/framework.go:23 + should mutate custom resource with pruning [Conformance] + test/e2e/apimachinery/webhook.go:340 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/framework/framework.go:186 + STEP: Creating a kubernetes client 03/07/23 03:44:27.869 + Mar 7 03:44:27.869: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 + STEP: Building a namespace api object, basename webhook 03/07/23 03:44:27.87 + STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 03:44:27.883 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 03:44:27.886 + [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/apimachinery/webhook.go:89 + STEP: Setting up server cert 03/07/23 03:44:27.899 + STEP: Create role binding to let webhook read extension-apiserver-authentication 03/07/23 03:44:28.519 + STEP: Deploying the webhook pod 03/07/23 03:44:28.525 + STEP: Wait for the deployment to be ready 03/07/23 03:44:28.54 + Mar 7 03:44:28.545: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set + STEP: Deploying the webhook service 03/07/23 03:44:30.554 + STEP: Verifying the service has paired with the endpoint 03/07/23 03:44:30.566 + Mar 7 03:44:31.567: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 + [It] should mutate custom resource with pruning [Conformance] + test/e2e/apimachinery/webhook.go:340 + Mar 7 03:44:31.570: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 + STEP: Registering the mutating webhook for custom resource e2e-test-webhook-1585-crds.webhook.example.com via the AdmissionRegistration API 03/07/23 03:44:32.079 + STEP: Creating a custom resource that should be mutated by the webhook 03/07/23 03:44:32.103 + [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/framework/framework.go:187 + Mar 7 03:44:34.670: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + STEP: Destroying namespace "webhook-2911" for this suite. 03/07/23 03:44:34.68 + STEP: Destroying namespace "webhook-2911-markers" for this suite. 
03/07/23 03:44:34.686 + [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/apimachinery/webhook.go:104 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-cli] Kubectl client Kubectl cluster-info + should check if Kubernetes control plane services is included in cluster-info [Conformance] + test/e2e/kubectl/kubectl.go:1248 +[BeforeEach] [sig-cli] Kubectl client + test/e2e/framework/framework.go:186 +STEP: Creating a kubernetes client 03/07/23 03:44:34.745 +Mar 7 03:44:34.745: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 +STEP: Building a namespace api object, basename kubectl 03/07/23 03:44:34.747 +STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 03:44:34.786 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 03:44:34.796 +[BeforeEach] [sig-cli] Kubectl client + test/e2e/kubectl/kubectl.go:272 +[It] should check if Kubernetes control plane services is included in cluster-info [Conformance] + test/e2e/kubectl/kubectl.go:1248 +STEP: validating cluster-info 03/07/23 03:44:34.798 +Mar 7 03:44:34.798: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1093879902 --namespace=kubectl-5866 cluster-info' +Mar 7 03:44:35.106: INFO: stderr: "" +Mar 7 03:44:35.106: INFO: stdout: "\x1b[0;32mKubernetes control plane\x1b[0m is running at \x1b[0;33mhttps://10.96.0.1:443\x1b[0m\n\nTo further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.\n" +[AfterEach] [sig-cli] Kubectl client + test/e2e/framework/framework.go:187 +Mar 7 03:44:35.106: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "kubectl-5866" for this suite. 
03/07/23 03:44:35.12 +{"msg":"PASSED [sig-cli] Kubectl client Kubectl cluster-info should check if Kubernetes control plane services is included in cluster-info [Conformance]","completed":241,"skipped":4237,"failed":0} +------------------------------ +• [0.384 seconds] +[sig-cli] Kubectl client +test/e2e/kubectl/framework.go:23 + Kubectl cluster-info + test/e2e/kubectl/kubectl.go:1242 + should check if Kubernetes control plane services is included in cluster-info [Conformance] + test/e2e/kubectl/kubectl.go:1248 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-cli] Kubectl client + test/e2e/framework/framework.go:186 + STEP: Creating a kubernetes client 03/07/23 03:44:34.745 + Mar 7 03:44:34.745: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 + STEP: Building a namespace api object, basename kubectl 03/07/23 03:44:34.747 + STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 03:44:34.786 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 03:44:34.796 + [BeforeEach] [sig-cli] Kubectl client + test/e2e/kubectl/kubectl.go:272 + [It] should check if Kubernetes control plane services is included in cluster-info [Conformance] + test/e2e/kubectl/kubectl.go:1248 + STEP: validating cluster-info 03/07/23 03:44:34.798 + Mar 7 03:44:34.798: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1093879902 --namespace=kubectl-5866 cluster-info' + Mar 7 03:44:35.106: INFO: stderr: "" + Mar 7 03:44:35.106: INFO: stdout: "\x1b[0;32mKubernetes control plane\x1b[0m is running at \x1b[0;33mhttps://10.96.0.1:443\x1b[0m\n\nTo further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.\n" + [AfterEach] [sig-cli] Kubectl client + test/e2e/framework/framework.go:187 + Mar 7 03:44:35.106: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + STEP: Destroying namespace "kubectl-5866" for this suite. 03/07/23 03:44:35.12 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-node] Pods + should be submitted and removed [NodeConformance] [Conformance] + test/e2e/common/node/pods.go:225 +[BeforeEach] [sig-node] Pods + test/e2e/framework/framework.go:186 +STEP: Creating a kubernetes client 03/07/23 03:44:35.13 +Mar 7 03:44:35.130: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 +STEP: Building a namespace api object, basename pods 03/07/23 03:44:35.13 +STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 03:44:35.145 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 03:44:35.148 +[BeforeEach] [sig-node] Pods + test/e2e/common/node/pods.go:193 +[It] should be submitted and removed [NodeConformance] [Conformance] + test/e2e/common/node/pods.go:225 +STEP: creating the pod 03/07/23 03:44:35.151 +STEP: setting up watch 03/07/23 03:44:35.151 +STEP: submitting the pod to kubernetes 03/07/23 03:44:35.255 +STEP: verifying the pod is in kubernetes 03/07/23 03:44:35.261 +STEP: verifying pod creation was observed 03/07/23 03:44:35.264 +Mar 7 03:44:35.264: INFO: Waiting up to 5m0s for pod "pod-submit-remove-322bc002-527c-46e6-91fb-5de77a0ab5b9" in namespace "pods-9628" to be "running" +Mar 7 03:44:35.267: INFO: Pod "pod-submit-remove-322bc002-527c-46e6-91fb-5de77a0ab5b9": Phase="Pending", Reason="", readiness=false. Elapsed: 3.142524ms +Mar 7 03:44:37.271: INFO: Pod "pod-submit-remove-322bc002-527c-46e6-91fb-5de77a0ab5b9": Phase="Running", Reason="", readiness=true. 
Elapsed: 2.007080661s +Mar 7 03:44:37.271: INFO: Pod "pod-submit-remove-322bc002-527c-46e6-91fb-5de77a0ab5b9" satisfied condition "running" +STEP: deleting the pod gracefully 03/07/23 03:44:37.273 +STEP: verifying pod deletion was observed 03/07/23 03:44:37.28 +[AfterEach] [sig-node] Pods + test/e2e/framework/framework.go:187 +Mar 7 03:44:39.871: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "pods-9628" for this suite. 03/07/23 03:44:39.875 +{"msg":"PASSED [sig-node] Pods should be submitted and removed [NodeConformance] [Conformance]","completed":242,"skipped":4264,"failed":0} +------------------------------ +• [4.751 seconds] +[sig-node] Pods +test/e2e/common/node/framework.go:23 + should be submitted and removed [NodeConformance] [Conformance] + test/e2e/common/node/pods.go:225 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-node] Pods + test/e2e/framework/framework.go:186 + STEP: Creating a kubernetes client 03/07/23 03:44:35.13 + Mar 7 03:44:35.130: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 + STEP: Building a namespace api object, basename pods 03/07/23 03:44:35.13 + STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 03:44:35.145 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 03:44:35.148 + [BeforeEach] [sig-node] Pods + test/e2e/common/node/pods.go:193 + [It] should be submitted and removed [NodeConformance] [Conformance] + test/e2e/common/node/pods.go:225 + STEP: creating the pod 03/07/23 03:44:35.151 + STEP: setting up watch 03/07/23 03:44:35.151 + STEP: submitting the pod to kubernetes 03/07/23 03:44:35.255 + STEP: verifying the pod is in kubernetes 03/07/23 03:44:35.261 + STEP: verifying pod creation was observed 03/07/23 03:44:35.264 + Mar 7 03:44:35.264: INFO: Waiting up to 5m0s for pod "pod-submit-remove-322bc002-527c-46e6-91fb-5de77a0ab5b9" in namespace "pods-9628" to be "running" + Mar 7 03:44:35.267: INFO: Pod "pod-submit-remove-322bc002-527c-46e6-91fb-5de77a0ab5b9": Phase="Pending", Reason="", readiness=false. Elapsed: 3.142524ms + Mar 7 03:44:37.271: INFO: Pod "pod-submit-remove-322bc002-527c-46e6-91fb-5de77a0ab5b9": Phase="Running", Reason="", readiness=true. Elapsed: 2.007080661s + Mar 7 03:44:37.271: INFO: Pod "pod-submit-remove-322bc002-527c-46e6-91fb-5de77a0ab5b9" satisfied condition "running" + STEP: deleting the pod gracefully 03/07/23 03:44:37.273 + STEP: verifying pod deletion was observed 03/07/23 03:44:37.28 + [AfterEach] [sig-node] Pods + test/e2e/framework/framework.go:187 + Mar 7 03:44:39.871: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + STEP: Destroying namespace "pods-9628" for this suite. 
03/07/23 03:44:39.875 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-storage] Secrets + should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/common/storage/secrets_volume.go:56 +[BeforeEach] [sig-storage] Secrets + test/e2e/framework/framework.go:186 +STEP: Creating a kubernetes client 03/07/23 03:44:39.886 +Mar 7 03:44:39.886: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 +STEP: Building a namespace api object, basename secrets 03/07/23 03:44:39.887 +STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 03:44:39.92 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 03:44:39.923 +[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/common/storage/secrets_volume.go:56 +STEP: Creating secret with name secret-test-57677f1f-0f85-4d8f-a6d7-654240f3b73f 03/07/23 03:44:39.925 +STEP: Creating a pod to test consume secrets 03/07/23 03:44:39.929 +Mar 7 03:44:39.939: INFO: Waiting up to 5m0s for pod "pod-secrets-4b3dea75-1668-493a-b1a3-6a1b3a336eb7" in namespace "secrets-5962" to be "Succeeded or Failed" +Mar 7 03:44:39.947: INFO: Pod "pod-secrets-4b3dea75-1668-493a-b1a3-6a1b3a336eb7": Phase="Pending", Reason="", readiness=false. Elapsed: 7.54547ms +Mar 7 03:44:41.950: INFO: Pod "pod-secrets-4b3dea75-1668-493a-b1a3-6a1b3a336eb7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.011234782s +Mar 7 03:44:43.950: INFO: Pod "pod-secrets-4b3dea75-1668-493a-b1a3-6a1b3a336eb7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.010564849s +STEP: Saw pod success 03/07/23 03:44:43.95 +Mar 7 03:44:43.950: INFO: Pod "pod-secrets-4b3dea75-1668-493a-b1a3-6a1b3a336eb7" satisfied condition "Succeeded or Failed" +Mar 7 03:44:43.953: INFO: Trying to get logs from node node-2 pod pod-secrets-4b3dea75-1668-493a-b1a3-6a1b3a336eb7 container secret-volume-test: +STEP: delete the pod 03/07/23 03:44:43.96 +Mar 7 03:44:43.983: INFO: Waiting for pod pod-secrets-4b3dea75-1668-493a-b1a3-6a1b3a336eb7 to disappear +Mar 7 03:44:43.986: INFO: Pod pod-secrets-4b3dea75-1668-493a-b1a3-6a1b3a336eb7 no longer exists +[AfterEach] [sig-storage] Secrets + test/e2e/framework/framework.go:187 +Mar 7 03:44:43.986: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "secrets-5962" for this suite. 
03/07/23 03:44:43.989 +{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","completed":243,"skipped":4304,"failed":0} +------------------------------ +• [4.108 seconds] +[sig-storage] Secrets +test/e2e/common/storage/framework.go:23 + should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/common/storage/secrets_volume.go:56 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-storage] Secrets + test/e2e/framework/framework.go:186 + STEP: Creating a kubernetes client 03/07/23 03:44:39.886 + Mar 7 03:44:39.886: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 + STEP: Building a namespace api object, basename secrets 03/07/23 03:44:39.887 + STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 03:44:39.92 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 03:44:39.923 + [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/common/storage/secrets_volume.go:56 + STEP: Creating secret with name secret-test-57677f1f-0f85-4d8f-a6d7-654240f3b73f 03/07/23 03:44:39.925 + STEP: Creating a pod to test consume secrets 03/07/23 03:44:39.929 + Mar 7 03:44:39.939: INFO: Waiting up to 5m0s for pod "pod-secrets-4b3dea75-1668-493a-b1a3-6a1b3a336eb7" in namespace "secrets-5962" to be "Succeeded or Failed" + Mar 7 03:44:39.947: INFO: Pod "pod-secrets-4b3dea75-1668-493a-b1a3-6a1b3a336eb7": Phase="Pending", Reason="", readiness=false. Elapsed: 7.54547ms + Mar 7 03:44:41.950: INFO: Pod "pod-secrets-4b3dea75-1668-493a-b1a3-6a1b3a336eb7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.011234782s + Mar 7 03:44:43.950: INFO: Pod "pod-secrets-4b3dea75-1668-493a-b1a3-6a1b3a336eb7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.010564849s + STEP: Saw pod success 03/07/23 03:44:43.95 + Mar 7 03:44:43.950: INFO: Pod "pod-secrets-4b3dea75-1668-493a-b1a3-6a1b3a336eb7" satisfied condition "Succeeded or Failed" + Mar 7 03:44:43.953: INFO: Trying to get logs from node node-2 pod pod-secrets-4b3dea75-1668-493a-b1a3-6a1b3a336eb7 container secret-volume-test: + STEP: delete the pod 03/07/23 03:44:43.96 + Mar 7 03:44:43.983: INFO: Waiting for pod pod-secrets-4b3dea75-1668-493a-b1a3-6a1b3a336eb7 to disappear + Mar 7 03:44:43.986: INFO: Pod pod-secrets-4b3dea75-1668-493a-b1a3-6a1b3a336eb7 no longer exists + [AfterEach] [sig-storage] Secrets + test/e2e/framework/framework.go:187 + Mar 7 03:44:43.986: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + STEP: Destroying namespace "secrets-5962" for this suite. 
03/07/23 03:44:43.989 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-network] Services + should find a service from listing all namespaces [Conformance] + test/e2e/network/service.go:3206 +[BeforeEach] [sig-network] Services + test/e2e/framework/framework.go:186 +STEP: Creating a kubernetes client 03/07/23 03:44:43.995 +Mar 7 03:44:43.995: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 +STEP: Building a namespace api object, basename services 03/07/23 03:44:43.996 +STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 03:44:44.008 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 03:44:44.01 +[BeforeEach] [sig-network] Services + test/e2e/network/service.go:758 +[It] should find a service from listing all namespaces [Conformance] + test/e2e/network/service.go:3206 +STEP: fetching services 03/07/23 03:44:44.012 +[AfterEach] [sig-network] Services + test/e2e/framework/framework.go:187 +Mar 7 03:44:44.016: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "services-5595" for this suite. 03/07/23 03:44:44.021 +[AfterEach] [sig-network] Services + test/e2e/network/service.go:762 +{"msg":"PASSED [sig-network] Services should find a service from listing all namespaces [Conformance]","completed":244,"skipped":4331,"failed":0} +------------------------------ +• [0.037 seconds] +[sig-network] Services +test/e2e/network/common/framework.go:23 + should find a service from listing all namespaces [Conformance] + test/e2e/network/service.go:3206 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-network] Services + test/e2e/framework/framework.go:186 + STEP: Creating a kubernetes client 03/07/23 03:44:43.995 + Mar 7 03:44:43.995: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 + STEP: Building a namespace api object, basename services 03/07/23 03:44:43.996 + STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 03:44:44.008 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 03:44:44.01 + [BeforeEach] [sig-network] Services + test/e2e/network/service.go:758 + [It] should find a service from listing all namespaces [Conformance] + test/e2e/network/service.go:3206 + STEP: fetching services 03/07/23 03:44:44.012 + [AfterEach] [sig-network] Services + test/e2e/framework/framework.go:187 + Mar 7 03:44:44.016: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + STEP: Destroying namespace "services-5595" for this suite. 
03/07/23 03:44:44.021 + [AfterEach] [sig-network] Services + test/e2e/network/service.go:762 + << End Captured GinkgoWriter Output +------------------------------ +S +------------------------------ +[sig-architecture] Conformance Tests + should have at least two untainted nodes [Conformance] + test/e2e/architecture/conformance.go:38 +[BeforeEach] [sig-architecture] Conformance Tests + test/e2e/framework/framework.go:186 +STEP: Creating a kubernetes client 03/07/23 03:44:44.032 +Mar 7 03:44:44.033: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 +STEP: Building a namespace api object, basename conformance-tests 03/07/23 03:44:44.033 +STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 03:44:44.045 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 03:44:44.047 +[It] should have at least two untainted nodes [Conformance] + test/e2e/architecture/conformance.go:38 +STEP: Getting node addresses 03/07/23 03:44:44.049 +Mar 7 03:44:44.049: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable +[AfterEach] [sig-architecture] Conformance Tests + test/e2e/framework/framework.go:187 +Mar 7 03:44:44.053: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "conformance-tests-8247" for this suite. 03/07/23 03:44:44.056 +{"msg":"PASSED [sig-architecture] Conformance Tests should have at least two untainted nodes [Conformance]","completed":245,"skipped":4332,"failed":0} +------------------------------ +• [0.028 seconds] +[sig-architecture] Conformance Tests +test/e2e/architecture/framework.go:23 + should have at least two untainted nodes [Conformance] + test/e2e/architecture/conformance.go:38 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-architecture] Conformance Tests + test/e2e/framework/framework.go:186 + STEP: Creating a kubernetes client 03/07/23 03:44:44.032 + Mar 7 03:44:44.033: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 + STEP: Building a namespace api object, basename conformance-tests 03/07/23 03:44:44.033 + STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 03:44:44.045 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 03:44:44.047 + [It] should have at least two untainted nodes [Conformance] + test/e2e/architecture/conformance.go:38 + STEP: Getting node addresses 03/07/23 03:44:44.049 + Mar 7 03:44:44.049: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable + [AfterEach] [sig-architecture] Conformance Tests + test/e2e/framework/framework.go:187 + Mar 7 03:44:44.053: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + STEP: Destroying namespace "conformance-tests-8247" for this suite. 
03/07/23 03:44:44.056 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSSSSSSSS +------------------------------ +[sig-node] ConfigMap + should fail to create ConfigMap with empty key [Conformance] + test/e2e/common/node/configmap.go:137 +[BeforeEach] [sig-node] ConfigMap + test/e2e/framework/framework.go:186 +STEP: Creating a kubernetes client 03/07/23 03:44:44.061 +Mar 7 03:44:44.061: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 +STEP: Building a namespace api object, basename configmap 03/07/23 03:44:44.062 +STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 03:44:44.075 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 03:44:44.077 +[It] should fail to create ConfigMap with empty key [Conformance] + test/e2e/common/node/configmap.go:137 +STEP: Creating configMap that has name configmap-test-emptyKey-3304513c-8eb8-495e-b864-d97f070ebfec 03/07/23 03:44:44.079 +[AfterEach] [sig-node] ConfigMap + test/e2e/framework/framework.go:187 +Mar 7 03:44:44.080: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "configmap-9723" for this suite. 03/07/23 03:44:44.083 +{"msg":"PASSED [sig-node] ConfigMap should fail to create ConfigMap with empty key [Conformance]","completed":246,"skipped":4349,"failed":0} +------------------------------ +• [0.027 seconds] +[sig-node] ConfigMap +test/e2e/common/node/framework.go:23 + should fail to create ConfigMap with empty key [Conformance] + test/e2e/common/node/configmap.go:137 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-node] ConfigMap + test/e2e/framework/framework.go:186 + STEP: Creating a kubernetes client 03/07/23 03:44:44.061 + Mar 7 03:44:44.061: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 + STEP: Building a namespace api object, basename configmap 03/07/23 03:44:44.062 + STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 03:44:44.075 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 03:44:44.077 + [It] should fail to create ConfigMap with empty key [Conformance] + test/e2e/common/node/configmap.go:137 + STEP: Creating configMap that has name configmap-test-emptyKey-3304513c-8eb8-495e-b864-d97f070ebfec 03/07/23 03:44:44.079 + [AfterEach] [sig-node] ConfigMap + test/e2e/framework/framework.go:187 + Mar 7 03:44:44.080: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + STEP: Destroying namespace "configmap-9723" for this suite. 03/07/23 03:44:44.083 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSS +------------------------------ +[sig-apps] Daemon set [Serial] + should run and stop simple daemon [Conformance] + test/e2e/apps/daemon_set.go:165 +[BeforeEach] [sig-apps] Daemon set [Serial] + test/e2e/framework/framework.go:186 +STEP: Creating a kubernetes client 03/07/23 03:44:44.089 +Mar 7 03:44:44.089: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 +STEP: Building a namespace api object, basename daemonsets 03/07/23 03:44:44.09 +STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 03:44:44.104 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 03:44:44.106 +[BeforeEach] [sig-apps] Daemon set [Serial] + test/e2e/apps/daemon_set.go:145 +[It] should run and stop simple daemon [Conformance] + test/e2e/apps/daemon_set.go:165 +STEP: Creating simple DaemonSet "daemon-set" 03/07/23 03:44:44.121 +STEP: Check that daemon pods launch on every node of the cluster. 
03/07/23 03:44:44.125 +Mar 7 03:44:44.131: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 +Mar 7 03:44:44.131: INFO: Node bootstrap is running 0 daemon pod, expected 1 +Mar 7 03:44:45.138: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 +Mar 7 03:44:45.138: INFO: Node bootstrap is running 0 daemon pod, expected 1 +Mar 7 03:44:46.137: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 3 +Mar 7 03:44:46.137: INFO: Number of running nodes: 3, number of available pods: 3 in daemonset daemon-set +STEP: Stop a daemon pod, check that the daemon pod is revived. 03/07/23 03:44:46.139 +Mar 7 03:44:46.158: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 2 +Mar 7 03:44:46.158: INFO: Node node-1 is running 0 daemon pod, expected 1 +Mar 7 03:44:47.164: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 2 +Mar 7 03:44:47.164: INFO: Node node-1 is running 0 daemon pod, expected 1 +Mar 7 03:44:48.165: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 2 +Mar 7 03:44:48.165: INFO: Node node-1 is running 0 daemon pod, expected 1 +Mar 7 03:44:49.171: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 2 +Mar 7 03:44:49.171: INFO: Node node-1 is running 0 daemon pod, expected 1 +Mar 7 03:44:50.164: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 3 +Mar 7 03:44:50.164: INFO: Number of running nodes: 3, number of available pods: 3 in daemonset daemon-set +[AfterEach] [sig-apps] Daemon set [Serial] + test/e2e/apps/daemon_set.go:110 +STEP: Deleting DaemonSet "daemon-set" 03/07/23 03:44:50.167 +STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-4135, will wait for the garbage collector to delete the pods 03/07/23 03:44:50.167 +Mar 7 03:44:50.228: INFO: Deleting DaemonSet.extensions daemon-set took: 8.355627ms +Mar 7 03:44:50.328: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.228653ms +Mar 7 03:44:52.830: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 +Mar 7 03:44:52.830: INFO: Number of running nodes: 0, number of available pods: 0 in daemonset daemon-set +Mar 7 03:44:52.832: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"resourceVersion":"68957"},"items":null} + +Mar 7 03:44:52.834: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"68957"},"items":null} + +[AfterEach] [sig-apps] Daemon set [Serial] + test/e2e/framework/framework.go:187 +Mar 7 03:44:52.844: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "daemonsets-4135" for this suite. 
03/07/23 03:44:52.846 +{"msg":"PASSED [sig-apps] Daemon set [Serial] should run and stop simple daemon [Conformance]","completed":247,"skipped":4357,"failed":0} +------------------------------ +• [SLOW TEST] [8.761 seconds] +[sig-apps] Daemon set [Serial] +test/e2e/apps/framework.go:23 + should run and stop simple daemon [Conformance] + test/e2e/apps/daemon_set.go:165 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-apps] Daemon set [Serial] + test/e2e/framework/framework.go:186 + STEP: Creating a kubernetes client 03/07/23 03:44:44.089 + Mar 7 03:44:44.089: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 + STEP: Building a namespace api object, basename daemonsets 03/07/23 03:44:44.09 + STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 03:44:44.104 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 03:44:44.106 + [BeforeEach] [sig-apps] Daemon set [Serial] + test/e2e/apps/daemon_set.go:145 + [It] should run and stop simple daemon [Conformance] + test/e2e/apps/daemon_set.go:165 + STEP: Creating simple DaemonSet "daemon-set" 03/07/23 03:44:44.121 + STEP: Check that daemon pods launch on every node of the cluster. 03/07/23 03:44:44.125 + Mar 7 03:44:44.131: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 + Mar 7 03:44:44.131: INFO: Node bootstrap is running 0 daemon pod, expected 1 + Mar 7 03:44:45.138: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 + Mar 7 03:44:45.138: INFO: Node bootstrap is running 0 daemon pod, expected 1 + Mar 7 03:44:46.137: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 3 + Mar 7 03:44:46.137: INFO: Number of running nodes: 3, number of available pods: 3 in daemonset daemon-set + STEP: Stop a daemon pod, check that the daemon pod is revived. 
03/07/23 03:44:46.139 + Mar 7 03:44:46.158: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 2 + Mar 7 03:44:46.158: INFO: Node node-1 is running 0 daemon pod, expected 1 + Mar 7 03:44:47.164: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 2 + Mar 7 03:44:47.164: INFO: Node node-1 is running 0 daemon pod, expected 1 + Mar 7 03:44:48.165: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 2 + Mar 7 03:44:48.165: INFO: Node node-1 is running 0 daemon pod, expected 1 + Mar 7 03:44:49.171: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 2 + Mar 7 03:44:49.171: INFO: Node node-1 is running 0 daemon pod, expected 1 + Mar 7 03:44:50.164: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 3 + Mar 7 03:44:50.164: INFO: Number of running nodes: 3, number of available pods: 3 in daemonset daemon-set + [AfterEach] [sig-apps] Daemon set [Serial] + test/e2e/apps/daemon_set.go:110 + STEP: Deleting DaemonSet "daemon-set" 03/07/23 03:44:50.167 + STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-4135, will wait for the garbage collector to delete the pods 03/07/23 03:44:50.167 + Mar 7 03:44:50.228: INFO: Deleting DaemonSet.extensions daemon-set took: 8.355627ms + Mar 7 03:44:50.328: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.228653ms + Mar 7 03:44:52.830: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 + Mar 7 03:44:52.830: INFO: Number of running nodes: 0, number of available pods: 0 in daemonset daemon-set + Mar 7 03:44:52.832: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"resourceVersion":"68957"},"items":null} + + Mar 7 03:44:52.834: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"68957"},"items":null} + + [AfterEach] [sig-apps] Daemon set [Serial] + test/e2e/framework/framework.go:187 + Mar 7 03:44:52.844: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + STEP: Destroying namespace "daemonsets-4135" for this suite. 
03/07/23 03:44:52.846 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-network] IngressClass API + should support creating IngressClass API operations [Conformance] + test/e2e/network/ingressclass.go:223 +[BeforeEach] [sig-network] IngressClass API + test/e2e/framework/framework.go:186 +STEP: Creating a kubernetes client 03/07/23 03:44:52.851 +Mar 7 03:44:52.851: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 +STEP: Building a namespace api object, basename ingressclass 03/07/23 03:44:52.852 +STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 03:44:52.864 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 03:44:52.866 +[BeforeEach] [sig-network] IngressClass API + test/e2e/network/ingressclass.go:211 +[It] should support creating IngressClass API operations [Conformance] + test/e2e/network/ingressclass.go:223 +STEP: getting /apis 03/07/23 03:44:52.868 +STEP: getting /apis/networking.k8s.io 03/07/23 03:44:52.869 +STEP: getting /apis/networking.k8s.iov1 03/07/23 03:44:52.869 +STEP: creating 03/07/23 03:44:52.87 +STEP: getting 03/07/23 03:44:52.88 +STEP: listing 03/07/23 03:44:52.882 +STEP: watching 03/07/23 03:44:52.884 +Mar 7 03:44:52.884: INFO: starting watch +STEP: patching 03/07/23 03:44:52.884 +STEP: updating 03/07/23 03:44:52.888 +Mar 7 03:44:52.891: INFO: waiting for watch events with expected annotations +Mar 7 03:44:52.891: INFO: saw patched and updated annotations +STEP: deleting 03/07/23 03:44:52.891 +STEP: deleting a collection 03/07/23 03:44:52.898 +[AfterEach] [sig-network] IngressClass API + test/e2e/framework/framework.go:187 +Mar 7 03:44:52.907: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "ingressclass-4765" for this suite. 
03/07/23 03:44:52.911 +{"msg":"PASSED [sig-network] IngressClass API should support creating IngressClass API operations [Conformance]","completed":248,"skipped":4386,"failed":0} +------------------------------ +• [0.063 seconds] +[sig-network] IngressClass API +test/e2e/network/common/framework.go:23 + should support creating IngressClass API operations [Conformance] + test/e2e/network/ingressclass.go:223 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-network] IngressClass API + test/e2e/framework/framework.go:186 + STEP: Creating a kubernetes client 03/07/23 03:44:52.851 + Mar 7 03:44:52.851: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 + STEP: Building a namespace api object, basename ingressclass 03/07/23 03:44:52.852 + STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 03:44:52.864 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 03:44:52.866 + [BeforeEach] [sig-network] IngressClass API + test/e2e/network/ingressclass.go:211 + [It] should support creating IngressClass API operations [Conformance] + test/e2e/network/ingressclass.go:223 + STEP: getting /apis 03/07/23 03:44:52.868 + STEP: getting /apis/networking.k8s.io 03/07/23 03:44:52.869 + STEP: getting /apis/networking.k8s.iov1 03/07/23 03:44:52.869 + STEP: creating 03/07/23 03:44:52.87 + STEP: getting 03/07/23 03:44:52.88 + STEP: listing 03/07/23 03:44:52.882 + STEP: watching 03/07/23 03:44:52.884 + Mar 7 03:44:52.884: INFO: starting watch + STEP: patching 03/07/23 03:44:52.884 + STEP: updating 03/07/23 03:44:52.888 + Mar 7 03:44:52.891: INFO: waiting for watch events with expected annotations + Mar 7 03:44:52.891: INFO: saw patched and updated annotations + STEP: deleting 03/07/23 03:44:52.891 + STEP: deleting a collection 03/07/23 03:44:52.898 + [AfterEach] [sig-network] IngressClass API + test/e2e/framework/framework.go:187 + Mar 7 03:44:52.907: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + STEP: Destroying namespace "ingressclass-4765" for this suite. 
03/07/23 03:44:52.911 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSSSSSSSS +------------------------------ +[sig-apps] Deployment + RecreateDeployment should delete old pods and create new ones [Conformance] + test/e2e/apps/deployment.go:113 +[BeforeEach] [sig-apps] Deployment + test/e2e/framework/framework.go:186 +STEP: Creating a kubernetes client 03/07/23 03:44:52.915 +Mar 7 03:44:52.915: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 +STEP: Building a namespace api object, basename deployment 03/07/23 03:44:52.915 +STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 03:44:52.927 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 03:44:52.93 +[BeforeEach] [sig-apps] Deployment + test/e2e/apps/deployment.go:91 +[It] RecreateDeployment should delete old pods and create new ones [Conformance] + test/e2e/apps/deployment.go:113 +Mar 7 03:44:52.932: INFO: Creating deployment "test-recreate-deployment" +Mar 7 03:44:52.935: INFO: Waiting deployment "test-recreate-deployment" to be updated to revision 1 +Mar 7 03:44:52.940: INFO: deployment "test-recreate-deployment" doesn't have the required revision set +Mar 7 03:44:54.947: INFO: Waiting deployment "test-recreate-deployment" to complete +Mar 7 03:44:54.950: INFO: Triggering a new rollout for deployment "test-recreate-deployment" +Mar 7 03:44:54.975: INFO: Updating deployment test-recreate-deployment +Mar 7 03:44:54.975: INFO: Watching deployment "test-recreate-deployment" to verify that new pods will not run with olds pods +[AfterEach] [sig-apps] Deployment + test/e2e/apps/deployment.go:84 +Mar 7 03:44:55.047: INFO: Deployment "test-recreate-deployment": +&Deployment{ObjectMeta:{test-recreate-deployment deployment-5932 47e8e4d2-f314-4db6-b8a9-3b87573591be 69023 2 2023-03-07 03:44:52 +0000 UTC map[name:sample-pod-3] map[deployment.kubernetes.io/revision:2] [] [] [{e2e.test Update apps/v1 2023-03-07 03:44:54 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:progressDeadlineSeconds":{},"f:replicas":{},"f:revisionHistoryLimit":{},"f:selector":{},"f:strategy":{"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} } {kube-controller-manager Update apps/v1 2023-03-07 03:44:55 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}}},"f:status":{"f:conditions":{".":{},"k:{\"type\":\"Available\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Progressing\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:observedGeneration":{},"f:replicas":{},"f:unavailableReplicas":{},"f:updatedReplicas":{}}} status}]},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod-3] map[] [] [] []} {[] [] [{httpd registry.k8s.io/e2e-test-images/httpd:2.4.38-2 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent 
SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc008066f38 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] nil }},Strategy:DeploymentStrategy{Type:Recreate,RollingUpdate:nil,},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:0,UnavailableReplicas:1,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:False,Reason:MinimumReplicasUnavailable,Message:Deployment does not have minimum availability.,LastUpdateTime:2023-03-07 03:44:55 +0000 UTC,LastTransitionTime:2023-03-07 03:44:55 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:ReplicaSetUpdated,Message:ReplicaSet "test-recreate-deployment-9d58999df" is progressing.,LastUpdateTime:2023-03-07 03:44:55 +0000 UTC,LastTransitionTime:2023-03-07 03:44:52 +0000 UTC,},},ReadyReplicas:0,CollisionCount:nil,},} + +Mar 7 03:44:55.049: INFO: New ReplicaSet "test-recreate-deployment-9d58999df" of Deployment "test-recreate-deployment": +&ReplicaSet{ObjectMeta:{test-recreate-deployment-9d58999df deployment-5932 8315b63c-fa93-40b4-9913-e75e1b273508 69019 1 2023-03-07 03:44:55 +0000 UTC map[name:sample-pod-3 pod-template-hash:9d58999df] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:1 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment test-recreate-deployment 47e8e4d2-f314-4db6-b8a9-3b87573591be 0xc004000cd0 0xc004000cd1}] [] [{kube-controller-manager Update apps/v1 2023-03-07 03:44:55 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"47e8e4d2-f314-4db6-b8a9-3b87573591be\"}":{}}},"f:spec":{"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} } {kube-controller-manager Update apps/v1 2023-03-07 03:44:55 +0000 UTC FieldsV1 {"f:status":{"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:replicas":{}}} status}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 9d58999df,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod-3 pod-template-hash:9d58999df] map[] [] [] []} {[] [] [{httpd registry.k8s.io/e2e-test-images/httpd:2.4.38-2 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent 
SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc004000e28 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] nil }},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} +Mar 7 03:44:55.049: INFO: All old ReplicaSets of Deployment "test-recreate-deployment": +Mar 7 03:44:55.049: INFO: &ReplicaSet{ObjectMeta:{test-recreate-deployment-7d8b6f647f deployment-5932 b3a1424c-bb3e-4284-847a-37a42dd9dd1d 69010 2 2023-03-07 03:44:52 +0000 UTC map[name:sample-pod-3 pod-template-hash:7d8b6f647f] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:1 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-recreate-deployment 47e8e4d2-f314-4db6-b8a9-3b87573591be 0xc000ee9f67 0xc000ee9f68}] [] [{kube-controller-manager Update apps/v1 2023-03-07 03:44:54 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"47e8e4d2-f314-4db6-b8a9-3b87573591be\"}":{}}},"f:spec":{"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} } {kube-controller-manager Update apps/v1 2023-03-07 03:44:55 +0000 UTC FieldsV1 {"f:status":{"f:observedGeneration":{},"f:replicas":{}}} status}]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 7d8b6f647f,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod-3 pod-template-hash:7d8b6f647f] map[] [] [] []} {[] [] [{agnhost registry.k8s.io/e2e-test-images/agnhost:2.40 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc004000018 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] nil }},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} +Mar 7 03:44:55.051: INFO: Pod 
"test-recreate-deployment-9d58999df-f9rb6" is not available: +&Pod{ObjectMeta:{test-recreate-deployment-9d58999df-f9rb6 test-recreate-deployment-9d58999df- deployment-5932 549c0085-c393-4f11-b9a6-e7f7914586ae 69021 0 2023-03-07 03:44:55 +0000 UTC map[name:sample-pod-3 pod-template-hash:9d58999df] map[] [{apps/v1 ReplicaSet test-recreate-deployment-9d58999df 8315b63c-fa93-40b4-9913-e75e1b273508 0xc0080672e0 0xc0080672e1}] [] [{kube-controller-manager Update v1 2023-03-07 03:44:55 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8315b63c-fa93-40b4-9913-e75e1b273508\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2023-03-07 03:44:55 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}} status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-xx9wj,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:registry.k8s.io/e2e-test-images/httpd:2.4.38-2,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-xx9wj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowP
rivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:node-2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,HostUsers:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-03-07 03:44:55 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-03-07 03:44:55 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-03-07 03:44:55 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-03-07 03:44:55 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:192.168.1.102,PodIP:,StartTime:2023-03-07 03:44:55 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:registry.k8s.io/e2e-test-images/httpd:2.4.38-2,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} +[AfterEach] [sig-apps] Deployment + test/e2e/framework/framework.go:187 +Mar 7 03:44:55.051: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "deployment-5932" for this suite. 
03/07/23 03:44:55.055 +{"msg":"PASSED [sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance]","completed":249,"skipped":4403,"failed":0} +------------------------------ +• [2.145 seconds] +[sig-apps] Deployment +test/e2e/apps/framework.go:23 + RecreateDeployment should delete old pods and create new ones [Conformance] + test/e2e/apps/deployment.go:113 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-apps] Deployment + test/e2e/framework/framework.go:186 + STEP: Creating a kubernetes client 03/07/23 03:44:52.915 + Mar 7 03:44:52.915: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 + STEP: Building a namespace api object, basename deployment 03/07/23 03:44:52.915 + STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 03:44:52.927 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 03:44:52.93 + [BeforeEach] [sig-apps] Deployment + test/e2e/apps/deployment.go:91 + [It] RecreateDeployment should delete old pods and create new ones [Conformance] + test/e2e/apps/deployment.go:113 + Mar 7 03:44:52.932: INFO: Creating deployment "test-recreate-deployment" + Mar 7 03:44:52.935: INFO: Waiting deployment "test-recreate-deployment" to be updated to revision 1 + Mar 7 03:44:52.940: INFO: deployment "test-recreate-deployment" doesn't have the required revision set + Mar 7 03:44:54.947: INFO: Waiting deployment "test-recreate-deployment" to complete + Mar 7 03:44:54.950: INFO: Triggering a new rollout for deployment "test-recreate-deployment" + Mar 7 03:44:54.975: INFO: Updating deployment test-recreate-deployment + Mar 7 03:44:54.975: INFO: Watching deployment "test-recreate-deployment" to verify that new pods will not run with olds pods + [AfterEach] [sig-apps] Deployment + test/e2e/apps/deployment.go:84 + Mar 7 03:44:55.047: INFO: Deployment "test-recreate-deployment": + &Deployment{ObjectMeta:{test-recreate-deployment deployment-5932 47e8e4d2-f314-4db6-b8a9-3b87573591be 69023 2 2023-03-07 03:44:52 +0000 UTC map[name:sample-pod-3] map[deployment.kubernetes.io/revision:2] [] [] [{e2e.test Update apps/v1 2023-03-07 03:44:54 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:progressDeadlineSeconds":{},"f:replicas":{},"f:revisionHistoryLimit":{},"f:selector":{},"f:strategy":{"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} } {kube-controller-manager Update apps/v1 2023-03-07 03:44:55 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}}},"f:status":{"f:conditions":{".":{},"k:{\"type\":\"Available\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Progressing\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:observedGeneration":{},"f:replicas":{},"f:unavailableReplicas":{},"f:updatedReplicas":{}}} status}]},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC 
map[name:sample-pod-3] map[] [] [] []} {[] [] [{httpd registry.k8s.io/e2e-test-images/httpd:2.4.38-2 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc008066f38 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] nil }},Strategy:DeploymentStrategy{Type:Recreate,RollingUpdate:nil,},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:0,UnavailableReplicas:1,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:False,Reason:MinimumReplicasUnavailable,Message:Deployment does not have minimum availability.,LastUpdateTime:2023-03-07 03:44:55 +0000 UTC,LastTransitionTime:2023-03-07 03:44:55 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:ReplicaSetUpdated,Message:ReplicaSet "test-recreate-deployment-9d58999df" is progressing.,LastUpdateTime:2023-03-07 03:44:55 +0000 UTC,LastTransitionTime:2023-03-07 03:44:52 +0000 UTC,},},ReadyReplicas:0,CollisionCount:nil,},} + + Mar 7 03:44:55.049: INFO: New ReplicaSet "test-recreate-deployment-9d58999df" of Deployment "test-recreate-deployment": + &ReplicaSet{ObjectMeta:{test-recreate-deployment-9d58999df deployment-5932 8315b63c-fa93-40b4-9913-e75e1b273508 69019 1 2023-03-07 03:44:55 +0000 UTC map[name:sample-pod-3 pod-template-hash:9d58999df] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:1 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment test-recreate-deployment 47e8e4d2-f314-4db6-b8a9-3b87573591be 0xc004000cd0 0xc004000cd1}] [] [{kube-controller-manager Update apps/v1 2023-03-07 03:44:55 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"47e8e4d2-f314-4db6-b8a9-3b87573591be\"}":{}}},"f:spec":{"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} } {kube-controller-manager Update apps/v1 2023-03-07 03:44:55 +0000 UTC FieldsV1 {"f:status":{"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:replicas":{}}} status}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 9d58999df,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod-3 pod-template-hash:9d58999df] map[] [] [] []} {[] [] [{httpd registry.k8s.io/e2e-test-images/httpd:2.4.38-2 [] [] [] [] [] {map[] 
map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc004000e28 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] nil }},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} + Mar 7 03:44:55.049: INFO: All old ReplicaSets of Deployment "test-recreate-deployment": + Mar 7 03:44:55.049: INFO: &ReplicaSet{ObjectMeta:{test-recreate-deployment-7d8b6f647f deployment-5932 b3a1424c-bb3e-4284-847a-37a42dd9dd1d 69010 2 2023-03-07 03:44:52 +0000 UTC map[name:sample-pod-3 pod-template-hash:7d8b6f647f] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:1 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-recreate-deployment 47e8e4d2-f314-4db6-b8a9-3b87573591be 0xc000ee9f67 0xc000ee9f68}] [] [{kube-controller-manager Update apps/v1 2023-03-07 03:44:54 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"47e8e4d2-f314-4db6-b8a9-3b87573591be\"}":{}}},"f:spec":{"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} } {kube-controller-manager Update apps/v1 2023-03-07 03:44:55 +0000 UTC FieldsV1 {"f:status":{"f:observedGeneration":{},"f:replicas":{}}} status}]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 7d8b6f647f,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod-3 pod-template-hash:7d8b6f647f] map[] [] [] []} {[] [] [{agnhost registry.k8s.io/e2e-test-images/agnhost:2.40 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc004000018 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] nil 
}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} + Mar 7 03:44:55.051: INFO: Pod "test-recreate-deployment-9d58999df-f9rb6" is not available: + &Pod{ObjectMeta:{test-recreate-deployment-9d58999df-f9rb6 test-recreate-deployment-9d58999df- deployment-5932 549c0085-c393-4f11-b9a6-e7f7914586ae 69021 0 2023-03-07 03:44:55 +0000 UTC map[name:sample-pod-3 pod-template-hash:9d58999df] map[] [{apps/v1 ReplicaSet test-recreate-deployment-9d58999df 8315b63c-fa93-40b4-9913-e75e1b273508 0xc0080672e0 0xc0080672e1}] [] [{kube-controller-manager Update v1 2023-03-07 03:44:55 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8315b63c-fa93-40b4-9913-e75e1b273508\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2023-03-07 03:44:55 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}} 
status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-xx9wj,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:registry.k8s.io/e2e-test-images/httpd:2.4.38-2,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-xx9wj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:node-2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,HostUsers:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodConditio
n{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-03-07 03:44:55 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-03-07 03:44:55 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-03-07 03:44:55 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-03-07 03:44:55 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:192.168.1.102,PodIP:,StartTime:2023-03-07 03:44:55 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:registry.k8s.io/e2e-test-images/httpd:2.4.38-2,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} + [AfterEach] [sig-apps] Deployment + test/e2e/framework/framework.go:187 + Mar 7 03:44:55.051: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + STEP: Destroying namespace "deployment-5932" for this suite. 03/07/23 03:44:55.055 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-node] InitContainer [NodeConformance] + should not start app containers if init containers fail on a RestartAlways pod [Conformance] + test/e2e/common/node/init_container.go:333 +[BeforeEach] [sig-node] InitContainer [NodeConformance] + test/e2e/framework/framework.go:186 +STEP: Creating a kubernetes client 03/07/23 03:44:55.06 +Mar 7 03:44:55.060: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 +STEP: Building a namespace api object, basename init-container 03/07/23 03:44:55.061 +STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 03:44:55.075 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 03:44:55.078 +[BeforeEach] [sig-node] InitContainer [NodeConformance] + test/e2e/common/node/init_container.go:164 +[It] should not start app containers if init containers fail on a RestartAlways pod [Conformance] + test/e2e/common/node/init_container.go:333 +STEP: creating the pod 03/07/23 03:44:55.08 +Mar 7 03:44:55.080: INFO: PodSpec: initContainers in spec.initContainers +Mar 7 03:45:40.028: INFO: init container has failed twice: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-init-944fbbcd-42ae-45be-921c-9a3639ee57ed", GenerateName:"", Namespace:"init-container-2493", SelfLink:"", UID:"8f40652c-ef43-4dbc-8f6c-3716884e944c", ResourceVersion:"69276", Generation:0, CreationTimestamp:time.Date(2023, time.March, 7, 3, 44, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"80227835"}, Annotations:map[string]string{"cni.projectcalico.org/containerID":"231bc1c122dfdbfa0f9111aa4eff08414b41f72d15f2268658ab0a469d93af4d", "cni.projectcalico.org/podIP":"10.233.247.5/32", "cni.projectcalico.org/podIPs":"10.233.247.5/32"}, OwnerReferences:[]v1.OwnerReference(nil), 
Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"calico", Operation:"Update", APIVersion:"v1", Time:time.Date(2023, time.March, 7, 3, 44, 55, 0, time.Local), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc003f2a9f0), Subresource:"status"}, v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:time.Date(2023, time.March, 7, 3, 44, 55, 0, time.Local), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc003f2aa38), Subresource:""}, v1.ManagedFieldsEntry{Manager:"kubelet", Operation:"Update", APIVersion:"v1", Time:time.Date(2023, time.March, 7, 3, 45, 40, 0, time.Local), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc003f2aa68), Subresource:"status"}}}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"kube-api-access-4lgws", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(0xc006e02020), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}}, InitContainers:[]v1.Container{v1.Container{Name:"init1", Image:"registry.k8s.io/e2e-test-images/busybox:1.29-2", Command:[]string{"/bin/false"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"kube-api-access-4lgws", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"init2", Image:"registry.k8s.io/e2e-test-images/busybox:1.29-2", Command:[]string{"/bin/true"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"kube-api-access-4lgws", ReadOnly:true, 
MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, Containers:[]v1.Container{v1.Container{Name:"run1", Image:"registry.k8s.io/pause:3.8", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"kube-api-access-4lgws", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc00221c1d8), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"node-2", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc000a1f500), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc00221c530)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc00221c550)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc00221c558), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc00221c55c), PreemptionPolicy:(*v1.PreemptionPolicy)(0xc00439eac0), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil), SetHostnameAsFQDN:(*bool)(nil), OS:(*v1.PodOS)(nil), HostUsers:(*bool)(nil)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"False", LastProbeTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), LastTransitionTime:time.Date(2023, time.March, 7, 3, 44, 55, 0, time.Local), Reason:"ContainersNotInitialized", Message:"containers with incomplete status: [init1 init2]"}, v1.PodCondition{Type:"Ready", Status:"False", 
LastProbeTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), LastTransitionTime:time.Date(2023, time.March, 7, 3, 44, 55, 0, time.Local), Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), LastTransitionTime:time.Date(2023, time.March, 7, 3, 44, 55, 0, time.Local), Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), LastTransitionTime:time.Date(2023, time.March, 7, 3, 44, 55, 0, time.Local), Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"192.168.1.102", PodIP:"10.233.247.5", PodIPs:[]v1.PodIP{v1.PodIP{IP:"10.233.247.5"}}, StartTime:time.Date(2023, time.March, 7, 3, 44, 55, 0, time.Local), InitContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"init1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc000a1f5e0)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc000a1f650)}, Ready:false, RestartCount:3, Image:"registry.k8s.io/e2e-test-images/busybox:1.29-2", ImageID:"registry.k8s.io/e2e-test-images/busybox@sha256:c318242786b139d18676b1c09a0ad7f15fc17f8f16a5b2e625cd0dc8c9703daf", ContainerID:"containerd://e52ac6389deef8c8413791cc6078154396c274280a5284dbea8367b52c4ee6e9", Started:(*bool)(nil)}, v1.ContainerStatus{Name:"init2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc006e020a0), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"registry.k8s.io/e2e-test-images/busybox:1.29-2", ImageID:"", ContainerID:"", Started:(*bool)(nil)}}, ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"run1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc006e02080), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"registry.k8s.io/pause:3.8", ImageID:"", ContainerID:"", Started:(*bool)(0xc00221c6cf)}}, QOSClass:"Burstable", EphemeralContainerStatuses:[]v1.ContainerStatus(nil)}} +[AfterEach] [sig-node] InitContainer [NodeConformance] + test/e2e/framework/framework.go:187 +Mar 7 03:45:40.029: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "init-container-2493" for this suite. 
03/07/23 03:45:40.036 +{"msg":"PASSED [sig-node] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance]","completed":250,"skipped":4423,"failed":0} +------------------------------ +• [SLOW TEST] [44.981 seconds] +[sig-node] InitContainer [NodeConformance] +test/e2e/common/node/framework.go:23 + should not start app containers if init containers fail on a RestartAlways pod [Conformance] + test/e2e/common/node/init_container.go:333 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-node] InitContainer [NodeConformance] + test/e2e/framework/framework.go:186 + STEP: Creating a kubernetes client 03/07/23 03:44:55.06 + Mar 7 03:44:55.060: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 + STEP: Building a namespace api object, basename init-container 03/07/23 03:44:55.061 + STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 03:44:55.075 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 03:44:55.078 + [BeforeEach] [sig-node] InitContainer [NodeConformance] + test/e2e/common/node/init_container.go:164 + [It] should not start app containers if init containers fail on a RestartAlways pod [Conformance] + test/e2e/common/node/init_container.go:333 + STEP: creating the pod 03/07/23 03:44:55.08 + Mar 7 03:44:55.080: INFO: PodSpec: initContainers in spec.initContainers + Mar 7 03:45:40.028: INFO: init container has failed twice: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-init-944fbbcd-42ae-45be-921c-9a3639ee57ed", GenerateName:"", Namespace:"init-container-2493", SelfLink:"", UID:"8f40652c-ef43-4dbc-8f6c-3716884e944c", ResourceVersion:"69276", Generation:0, CreationTimestamp:time.Date(2023, time.March, 7, 3, 44, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"80227835"}, Annotations:map[string]string{"cni.projectcalico.org/containerID":"231bc1c122dfdbfa0f9111aa4eff08414b41f72d15f2268658ab0a469d93af4d", "cni.projectcalico.org/podIP":"10.233.247.5/32", "cni.projectcalico.org/podIPs":"10.233.247.5/32"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"calico", Operation:"Update", APIVersion:"v1", Time:time.Date(2023, time.March, 7, 3, 44, 55, 0, time.Local), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc003f2a9f0), Subresource:"status"}, v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:time.Date(2023, time.March, 7, 3, 44, 55, 0, time.Local), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc003f2aa38), Subresource:""}, v1.ManagedFieldsEntry{Manager:"kubelet", Operation:"Update", APIVersion:"v1", Time:time.Date(2023, time.March, 7, 3, 45, 40, 0, time.Local), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc003f2aa68), Subresource:"status"}}}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"kube-api-access-4lgws", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), 
RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(0xc006e02020), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}}, InitContainers:[]v1.Container{v1.Container{Name:"init1", Image:"registry.k8s.io/e2e-test-images/busybox:1.29-2", Command:[]string{"/bin/false"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"kube-api-access-4lgws", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"init2", Image:"registry.k8s.io/e2e-test-images/busybox:1.29-2", Command:[]string{"/bin/true"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"kube-api-access-4lgws", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, Containers:[]v1.Container{v1.Container{Name:"run1", Image:"registry.k8s.io/pause:3.8", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"kube-api-access-4lgws", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, 
VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc00221c1d8), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"node-2", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc000a1f500), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc00221c530)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc00221c550)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc00221c558), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc00221c55c), PreemptionPolicy:(*v1.PreemptionPolicy)(0xc00439eac0), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil), SetHostnameAsFQDN:(*bool)(nil), OS:(*v1.PodOS)(nil), HostUsers:(*bool)(nil)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"False", LastProbeTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), LastTransitionTime:time.Date(2023, time.March, 7, 3, 44, 55, 0, time.Local), Reason:"ContainersNotInitialized", Message:"containers with incomplete status: [init1 init2]"}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), LastTransitionTime:time.Date(2023, time.March, 7, 3, 44, 55, 0, time.Local), Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), LastTransitionTime:time.Date(2023, time.March, 7, 3, 44, 55, 0, time.Local), Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), LastTransitionTime:time.Date(2023, time.March, 7, 3, 44, 55, 0, time.Local), Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"192.168.1.102", PodIP:"10.233.247.5", PodIPs:[]v1.PodIP{v1.PodIP{IP:"10.233.247.5"}}, StartTime:time.Date(2023, time.March, 7, 3, 44, 55, 0, time.Local), InitContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"init1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc000a1f5e0)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc000a1f650)}, 
Ready:false, RestartCount:3, Image:"registry.k8s.io/e2e-test-images/busybox:1.29-2", ImageID:"registry.k8s.io/e2e-test-images/busybox@sha256:c318242786b139d18676b1c09a0ad7f15fc17f8f16a5b2e625cd0dc8c9703daf", ContainerID:"containerd://e52ac6389deef8c8413791cc6078154396c274280a5284dbea8367b52c4ee6e9", Started:(*bool)(nil)}, v1.ContainerStatus{Name:"init2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc006e020a0), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"registry.k8s.io/e2e-test-images/busybox:1.29-2", ImageID:"", ContainerID:"", Started:(*bool)(nil)}}, ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"run1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc006e02080), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"registry.k8s.io/pause:3.8", ImageID:"", ContainerID:"", Started:(*bool)(0xc00221c6cf)}}, QOSClass:"Burstable", EphemeralContainerStatuses:[]v1.ContainerStatus(nil)}} + [AfterEach] [sig-node] InitContainer [NodeConformance] + test/e2e/framework/framework.go:187 + Mar 7 03:45:40.029: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + STEP: Destroying namespace "init-container-2493" for this suite. 03/07/23 03:45:40.036 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-node] Variable Expansion + should succeed in writing subpaths in container [Slow] [Conformance] + test/e2e/common/node/expansion.go:296 +[BeforeEach] [sig-node] Variable Expansion + test/e2e/framework/framework.go:186 +STEP: Creating a kubernetes client 03/07/23 03:45:40.044 +Mar 7 03:45:40.045: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 +STEP: Building a namespace api object, basename var-expansion 03/07/23 03:45:40.046 +STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 03:45:40.059 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 03:45:40.061 +[It] should succeed in writing subpaths in container [Slow] [Conformance] + test/e2e/common/node/expansion.go:296 +STEP: creating the pod 03/07/23 03:45:40.062 +STEP: waiting for pod running 03/07/23 03:45:40.069 +Mar 7 03:45:40.069: INFO: Waiting up to 2m0s for pod "var-expansion-0b7896a0-3dce-450b-b033-0b1c90f43ef8" in namespace "var-expansion-1269" to be "running" +Mar 7 03:45:40.071: INFO: Pod "var-expansion-0b7896a0-3dce-450b-b033-0b1c90f43ef8": Phase="Pending", Reason="", readiness=false. Elapsed: 1.835975ms +Mar 7 03:45:42.075: INFO: Pod "var-expansion-0b7896a0-3dce-450b-b033-0b1c90f43ef8": Phase="Running", Reason="", readiness=true. 
Elapsed: 2.005878119s +Mar 7 03:45:42.075: INFO: Pod "var-expansion-0b7896a0-3dce-450b-b033-0b1c90f43ef8" satisfied condition "running" +STEP: creating a file in subpath 03/07/23 03:45:42.075 +Mar 7 03:45:42.078: INFO: ExecWithOptions {Command:[/bin/sh -c touch /volume_mount/mypath/foo/test.log] Namespace:var-expansion-1269 PodName:var-expansion-0b7896a0-3dce-450b-b033-0b1c90f43ef8 ContainerName:dapi-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} +Mar 7 03:45:42.078: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 +Mar 7 03:45:42.078: INFO: ExecWithOptions: Clientset creation +Mar 7 03:45:42.078: INFO: ExecWithOptions: execute(POST https://10.96.0.1:443/api/v1/namespaces/var-expansion-1269/pods/var-expansion-0b7896a0-3dce-450b-b033-0b1c90f43ef8/exec?command=%2Fbin%2Fsh&command=-c&command=touch+%2Fvolume_mount%2Fmypath%2Ffoo%2Ftest.log&container=dapi-container&container=dapi-container&stderr=true&stdout=true) +STEP: test for file in mounted path 03/07/23 03:45:42.152 +Mar 7 03:45:42.155: INFO: ExecWithOptions {Command:[/bin/sh -c test -f /subpath_mount/test.log] Namespace:var-expansion-1269 PodName:var-expansion-0b7896a0-3dce-450b-b033-0b1c90f43ef8 ContainerName:dapi-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} +Mar 7 03:45:42.155: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 +Mar 7 03:45:42.156: INFO: ExecWithOptions: Clientset creation +Mar 7 03:45:42.156: INFO: ExecWithOptions: execute(POST https://10.96.0.1:443/api/v1/namespaces/var-expansion-1269/pods/var-expansion-0b7896a0-3dce-450b-b033-0b1c90f43ef8/exec?command=%2Fbin%2Fsh&command=-c&command=test+-f+%2Fsubpath_mount%2Ftest.log&container=dapi-container&container=dapi-container&stderr=true&stdout=true) +STEP: updating the annotation value 03/07/23 03:45:42.218 +Mar 7 03:45:42.732: INFO: Successfully updated pod "var-expansion-0b7896a0-3dce-450b-b033-0b1c90f43ef8" +STEP: waiting for annotated pod running 03/07/23 03:45:42.732 +Mar 7 03:45:42.732: INFO: Waiting up to 2m0s for pod "var-expansion-0b7896a0-3dce-450b-b033-0b1c90f43ef8" in namespace "var-expansion-1269" to be "running" +Mar 7 03:45:42.735: INFO: Pod "var-expansion-0b7896a0-3dce-450b-b033-0b1c90f43ef8": Phase="Running", Reason="", readiness=true. Elapsed: 3.028302ms +Mar 7 03:45:42.735: INFO: Pod "var-expansion-0b7896a0-3dce-450b-b033-0b1c90f43ef8" satisfied condition "running" +STEP: deleting the pod gracefully 03/07/23 03:45:42.735 +Mar 7 03:45:42.735: INFO: Deleting pod "var-expansion-0b7896a0-3dce-450b-b033-0b1c90f43ef8" in namespace "var-expansion-1269" +Mar 7 03:45:42.741: INFO: Wait up to 5m0s for pod "var-expansion-0b7896a0-3dce-450b-b033-0b1c90f43ef8" to be fully deleted +[AfterEach] [sig-node] Variable Expansion + test/e2e/framework/framework.go:187 +Mar 7 03:46:16.746: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "var-expansion-1269" for this suite. 
03/07/23 03:46:16.75 +{"msg":"PASSED [sig-node] Variable Expansion should succeed in writing subpaths in container [Slow] [Conformance]","completed":251,"skipped":4461,"failed":0} +------------------------------ +• [SLOW TEST] [36.710 seconds] +[sig-node] Variable Expansion +test/e2e/common/node/framework.go:23 + should succeed in writing subpaths in container [Slow] [Conformance] + test/e2e/common/node/expansion.go:296 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-node] Variable Expansion + test/e2e/framework/framework.go:186 + STEP: Creating a kubernetes client 03/07/23 03:45:40.044 + Mar 7 03:45:40.045: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 + STEP: Building a namespace api object, basename var-expansion 03/07/23 03:45:40.046 + STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 03:45:40.059 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 03:45:40.061 + [It] should succeed in writing subpaths in container [Slow] [Conformance] + test/e2e/common/node/expansion.go:296 + STEP: creating the pod 03/07/23 03:45:40.062 + STEP: waiting for pod running 03/07/23 03:45:40.069 + Mar 7 03:45:40.069: INFO: Waiting up to 2m0s for pod "var-expansion-0b7896a0-3dce-450b-b033-0b1c90f43ef8" in namespace "var-expansion-1269" to be "running" + Mar 7 03:45:40.071: INFO: Pod "var-expansion-0b7896a0-3dce-450b-b033-0b1c90f43ef8": Phase="Pending", Reason="", readiness=false. Elapsed: 1.835975ms + Mar 7 03:45:42.075: INFO: Pod "var-expansion-0b7896a0-3dce-450b-b033-0b1c90f43ef8": Phase="Running", Reason="", readiness=true. Elapsed: 2.005878119s + Mar 7 03:45:42.075: INFO: Pod "var-expansion-0b7896a0-3dce-450b-b033-0b1c90f43ef8" satisfied condition "running" + STEP: creating a file in subpath 03/07/23 03:45:42.075 + Mar 7 03:45:42.078: INFO: ExecWithOptions {Command:[/bin/sh -c touch /volume_mount/mypath/foo/test.log] Namespace:var-expansion-1269 PodName:var-expansion-0b7896a0-3dce-450b-b033-0b1c90f43ef8 ContainerName:dapi-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} + Mar 7 03:45:42.078: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 + Mar 7 03:45:42.078: INFO: ExecWithOptions: Clientset creation + Mar 7 03:45:42.078: INFO: ExecWithOptions: execute(POST https://10.96.0.1:443/api/v1/namespaces/var-expansion-1269/pods/var-expansion-0b7896a0-3dce-450b-b033-0b1c90f43ef8/exec?command=%2Fbin%2Fsh&command=-c&command=touch+%2Fvolume_mount%2Fmypath%2Ffoo%2Ftest.log&container=dapi-container&container=dapi-container&stderr=true&stdout=true) + STEP: test for file in mounted path 03/07/23 03:45:42.152 + Mar 7 03:45:42.155: INFO: ExecWithOptions {Command:[/bin/sh -c test -f /subpath_mount/test.log] Namespace:var-expansion-1269 PodName:var-expansion-0b7896a0-3dce-450b-b033-0b1c90f43ef8 ContainerName:dapi-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} + Mar 7 03:45:42.155: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 + Mar 7 03:45:42.156: INFO: ExecWithOptions: Clientset creation + Mar 7 03:45:42.156: INFO: ExecWithOptions: execute(POST https://10.96.0.1:443/api/v1/namespaces/var-expansion-1269/pods/var-expansion-0b7896a0-3dce-450b-b033-0b1c90f43ef8/exec?command=%2Fbin%2Fsh&command=-c&command=test+-f+%2Fsubpath_mount%2Ftest.log&container=dapi-container&container=dapi-container&stderr=true&stdout=true) + STEP: updating the annotation value 03/07/23 03:45:42.218 + Mar 7 03:45:42.732: INFO: Successfully updated pod 
"var-expansion-0b7896a0-3dce-450b-b033-0b1c90f43ef8" + STEP: waiting for annotated pod running 03/07/23 03:45:42.732 + Mar 7 03:45:42.732: INFO: Waiting up to 2m0s for pod "var-expansion-0b7896a0-3dce-450b-b033-0b1c90f43ef8" in namespace "var-expansion-1269" to be "running" + Mar 7 03:45:42.735: INFO: Pod "var-expansion-0b7896a0-3dce-450b-b033-0b1c90f43ef8": Phase="Running", Reason="", readiness=true. Elapsed: 3.028302ms + Mar 7 03:45:42.735: INFO: Pod "var-expansion-0b7896a0-3dce-450b-b033-0b1c90f43ef8" satisfied condition "running" + STEP: deleting the pod gracefully 03/07/23 03:45:42.735 + Mar 7 03:45:42.735: INFO: Deleting pod "var-expansion-0b7896a0-3dce-450b-b033-0b1c90f43ef8" in namespace "var-expansion-1269" + Mar 7 03:45:42.741: INFO: Wait up to 5m0s for pod "var-expansion-0b7896a0-3dce-450b-b033-0b1c90f43ef8" to be fully deleted + [AfterEach] [sig-node] Variable Expansion + test/e2e/framework/framework.go:187 + Mar 7 03:46:16.746: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + STEP: Destroying namespace "var-expansion-1269" for this suite. 03/07/23 03:46:16.75 + << End Captured GinkgoWriter Output +------------------------------ +SSS +------------------------------ +[sig-apps] ReplicationController + should test the lifecycle of a ReplicationController [Conformance] + test/e2e/apps/rc.go:109 +[BeforeEach] [sig-apps] ReplicationController + test/e2e/framework/framework.go:186 +STEP: Creating a kubernetes client 03/07/23 03:46:16.755 +Mar 7 03:46:16.755: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 +STEP: Building a namespace api object, basename replication-controller 03/07/23 03:46:16.756 +STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 03:46:16.77 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 03:46:16.772 +[BeforeEach] [sig-apps] ReplicationController + test/e2e/apps/rc.go:56 +[It] should test the lifecycle of a ReplicationController [Conformance] + test/e2e/apps/rc.go:109 +STEP: creating a ReplicationController 03/07/23 03:46:16.777 +STEP: waiting for RC to be added 03/07/23 03:46:16.781 +STEP: waiting for available Replicas 03/07/23 03:46:16.781 +STEP: patching ReplicationController 03/07/23 03:46:18.155 +STEP: waiting for RC to be modified 03/07/23 03:46:18.162 +STEP: patching ReplicationController status 03/07/23 03:46:18.162 +STEP: waiting for RC to be modified 03/07/23 03:46:18.166 +STEP: waiting for available Replicas 03/07/23 03:46:18.167 +STEP: fetching ReplicationController status 03/07/23 03:46:18.172 +STEP: patching ReplicationController scale 03/07/23 03:46:18.174 +STEP: waiting for RC to be modified 03/07/23 03:46:18.18 +STEP: waiting for ReplicationController's scale to be the max amount 03/07/23 03:46:18.18 +STEP: fetching ReplicationController; ensuring that it's patched 03/07/23 03:46:18.993 +STEP: updating ReplicationController status 03/07/23 03:46:18.996 +STEP: waiting for RC to be modified 03/07/23 03:46:19 +STEP: listing all ReplicationControllers 03/07/23 03:46:19.001 +STEP: checking that ReplicationController has expected values 03/07/23 03:46:19.005 +STEP: deleting ReplicationControllers by collection 03/07/23 03:46:19.005 +STEP: waiting for ReplicationController to have a DELETED watchEvent 03/07/23 03:46:19.011 +[AfterEach] [sig-apps] ReplicationController + test/e2e/framework/framework.go:187 +Mar 7 03:46:19.082: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "replication-controller-1340" for this suite. 
03/07/23 03:46:19.084 +{"msg":"PASSED [sig-apps] ReplicationController should test the lifecycle of a ReplicationController [Conformance]","completed":252,"skipped":4464,"failed":0} +------------------------------ +• [2.334 seconds] +[sig-apps] ReplicationController +test/e2e/apps/framework.go:23 + should test the lifecycle of a ReplicationController [Conformance] + test/e2e/apps/rc.go:109 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-apps] ReplicationController + test/e2e/framework/framework.go:186 + STEP: Creating a kubernetes client 03/07/23 03:46:16.755 + Mar 7 03:46:16.755: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 + STEP: Building a namespace api object, basename replication-controller 03/07/23 03:46:16.756 + STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 03:46:16.77 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 03:46:16.772 + [BeforeEach] [sig-apps] ReplicationController + test/e2e/apps/rc.go:56 + [It] should test the lifecycle of a ReplicationController [Conformance] + test/e2e/apps/rc.go:109 + STEP: creating a ReplicationController 03/07/23 03:46:16.777 + STEP: waiting for RC to be added 03/07/23 03:46:16.781 + STEP: waiting for available Replicas 03/07/23 03:46:16.781 + STEP: patching ReplicationController 03/07/23 03:46:18.155 + STEP: waiting for RC to be modified 03/07/23 03:46:18.162 + STEP: patching ReplicationController status 03/07/23 03:46:18.162 + STEP: waiting for RC to be modified 03/07/23 03:46:18.166 + STEP: waiting for available Replicas 03/07/23 03:46:18.167 + STEP: fetching ReplicationController status 03/07/23 03:46:18.172 + STEP: patching ReplicationController scale 03/07/23 03:46:18.174 + STEP: waiting for RC to be modified 03/07/23 03:46:18.18 + STEP: waiting for ReplicationController's scale to be the max amount 03/07/23 03:46:18.18 + STEP: fetching ReplicationController; ensuring that it's patched 03/07/23 03:46:18.993 + STEP: updating ReplicationController status 03/07/23 03:46:18.996 + STEP: waiting for RC to be modified 03/07/23 03:46:19 + STEP: listing all ReplicationControllers 03/07/23 03:46:19.001 + STEP: checking that ReplicationController has expected values 03/07/23 03:46:19.005 + STEP: deleting ReplicationControllers by collection 03/07/23 03:46:19.005 + STEP: waiting for ReplicationController to have a DELETED watchEvent 03/07/23 03:46:19.011 + [AfterEach] [sig-apps] ReplicationController + test/e2e/framework/framework.go:187 + Mar 7 03:46:19.082: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + STEP: Destroying namespace "replication-controller-1340" for this suite. 
03/07/23 03:46:19.084 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSS +------------------------------ +[sig-storage] EmptyDir volumes + should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/common/storage/empty_dir.go:126 +[BeforeEach] [sig-storage] EmptyDir volumes + test/e2e/framework/framework.go:186 +STEP: Creating a kubernetes client 03/07/23 03:46:19.09 +Mar 7 03:46:19.090: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 +STEP: Building a namespace api object, basename emptydir 03/07/23 03:46:19.091 +STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 03:46:19.105 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 03:46:19.107 +[It] should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/common/storage/empty_dir.go:126 +STEP: Creating a pod to test emptydir 0644 on tmpfs 03/07/23 03:46:19.109 +Mar 7 03:46:19.115: INFO: Waiting up to 5m0s for pod "pod-480f2905-ae15-4eed-80b4-7ca808e76e3a" in namespace "emptydir-8380" to be "Succeeded or Failed" +Mar 7 03:46:19.120: INFO: Pod "pod-480f2905-ae15-4eed-80b4-7ca808e76e3a": Phase="Pending", Reason="", readiness=false. Elapsed: 5.253039ms +Mar 7 03:46:21.123: INFO: Pod "pod-480f2905-ae15-4eed-80b4-7ca808e76e3a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008688729s +Mar 7 03:46:23.126: INFO: Pod "pod-480f2905-ae15-4eed-80b4-7ca808e76e3a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.01171432s +STEP: Saw pod success 03/07/23 03:46:23.126 +Mar 7 03:46:23.126: INFO: Pod "pod-480f2905-ae15-4eed-80b4-7ca808e76e3a" satisfied condition "Succeeded or Failed" +Mar 7 03:46:23.130: INFO: Trying to get logs from node node-2 pod pod-480f2905-ae15-4eed-80b4-7ca808e76e3a container test-container: +STEP: delete the pod 03/07/23 03:46:23.142 +Mar 7 03:46:23.151: INFO: Waiting for pod pod-480f2905-ae15-4eed-80b4-7ca808e76e3a to disappear +Mar 7 03:46:23.153: INFO: Pod pod-480f2905-ae15-4eed-80b4-7ca808e76e3a no longer exists +[AfterEach] [sig-storage] EmptyDir volumes + test/e2e/framework/framework.go:187 +Mar 7 03:46:23.153: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "emptydir-8380" for this suite. 
03/07/23 03:46:23.157 +{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","completed":253,"skipped":4470,"failed":0} +------------------------------ +• [4.072 seconds] +[sig-storage] EmptyDir volumes +test/e2e/common/storage/framework.go:23 + should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/common/storage/empty_dir.go:126 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-storage] EmptyDir volumes + test/e2e/framework/framework.go:186 + STEP: Creating a kubernetes client 03/07/23 03:46:19.09 + Mar 7 03:46:19.090: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 + STEP: Building a namespace api object, basename emptydir 03/07/23 03:46:19.091 + STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 03:46:19.105 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 03:46:19.107 + [It] should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/common/storage/empty_dir.go:126 + STEP: Creating a pod to test emptydir 0644 on tmpfs 03/07/23 03:46:19.109 + Mar 7 03:46:19.115: INFO: Waiting up to 5m0s for pod "pod-480f2905-ae15-4eed-80b4-7ca808e76e3a" in namespace "emptydir-8380" to be "Succeeded or Failed" + Mar 7 03:46:19.120: INFO: Pod "pod-480f2905-ae15-4eed-80b4-7ca808e76e3a": Phase="Pending", Reason="", readiness=false. Elapsed: 5.253039ms + Mar 7 03:46:21.123: INFO: Pod "pod-480f2905-ae15-4eed-80b4-7ca808e76e3a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008688729s + Mar 7 03:46:23.126: INFO: Pod "pod-480f2905-ae15-4eed-80b4-7ca808e76e3a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.01171432s + STEP: Saw pod success 03/07/23 03:46:23.126 + Mar 7 03:46:23.126: INFO: Pod "pod-480f2905-ae15-4eed-80b4-7ca808e76e3a" satisfied condition "Succeeded or Failed" + Mar 7 03:46:23.130: INFO: Trying to get logs from node node-2 pod pod-480f2905-ae15-4eed-80b4-7ca808e76e3a container test-container: + STEP: delete the pod 03/07/23 03:46:23.142 + Mar 7 03:46:23.151: INFO: Waiting for pod pod-480f2905-ae15-4eed-80b4-7ca808e76e3a to disappear + Mar 7 03:46:23.153: INFO: Pod pod-480f2905-ae15-4eed-80b4-7ca808e76e3a no longer exists + [AfterEach] [sig-storage] EmptyDir volumes + test/e2e/framework/framework.go:187 + Mar 7 03:46:23.153: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + STEP: Destroying namespace "emptydir-8380" for this suite. 
03/07/23 03:46:23.157 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-apps] Job + should run a job to completion when tasks sometimes fail and are locally restarted [Conformance] + test/e2e/apps/job.go:254 +[BeforeEach] [sig-apps] Job + test/e2e/framework/framework.go:186 +STEP: Creating a kubernetes client 03/07/23 03:46:23.165 +Mar 7 03:46:23.165: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 +STEP: Building a namespace api object, basename job 03/07/23 03:46:23.166 +STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 03:46:23.177 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 03:46:23.181 +[It] should run a job to completion when tasks sometimes fail and are locally restarted [Conformance] + test/e2e/apps/job.go:254 +STEP: Creating a job 03/07/23 03:46:23.182 +STEP: Ensuring job reaches completions 03/07/23 03:46:23.187 +[AfterEach] [sig-apps] Job + test/e2e/framework/framework.go:187 +Mar 7 03:46:33.191: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "job-9899" for this suite. 03/07/23 03:46:33.194 +{"msg":"PASSED [sig-apps] Job should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]","completed":254,"skipped":4529,"failed":0} +------------------------------ +• [SLOW TEST] [10.034 seconds] +[sig-apps] Job +test/e2e/apps/framework.go:23 + should run a job to completion when tasks sometimes fail and are locally restarted [Conformance] + test/e2e/apps/job.go:254 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-apps] Job + test/e2e/framework/framework.go:186 + STEP: Creating a kubernetes client 03/07/23 03:46:23.165 + Mar 7 03:46:23.165: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 + STEP: Building a namespace api object, basename job 03/07/23 03:46:23.166 + STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 03:46:23.177 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 03:46:23.181 + [It] should run a job to completion when tasks sometimes fail and are locally restarted [Conformance] + test/e2e/apps/job.go:254 + STEP: Creating a job 03/07/23 03:46:23.182 + STEP: Ensuring job reaches completions 03/07/23 03:46:23.187 + [AfterEach] [sig-apps] Job + test/e2e/framework/framework.go:187 + Mar 7 03:46:33.191: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + STEP: Destroying namespace "job-9899" for this suite. 
03/07/23 03:46:33.194 + << End Captured GinkgoWriter Output +------------------------------ +SS +------------------------------ +[sig-node] Pods + should get a host IP [NodeConformance] [Conformance] + test/e2e/common/node/pods.go:203 +[BeforeEach] [sig-node] Pods + test/e2e/framework/framework.go:186 +STEP: Creating a kubernetes client 03/07/23 03:46:33.199 +Mar 7 03:46:33.200: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 +STEP: Building a namespace api object, basename pods 03/07/23 03:46:33.201 +STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 03:46:33.212 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 03:46:33.214 +[BeforeEach] [sig-node] Pods + test/e2e/common/node/pods.go:193 +[It] should get a host IP [NodeConformance] [Conformance] + test/e2e/common/node/pods.go:203 +STEP: creating pod 03/07/23 03:46:33.216 +Mar 7 03:46:33.222: INFO: Waiting up to 5m0s for pod "pod-hostip-80751c0f-c576-45db-a5a7-21f1556dceca" in namespace "pods-4139" to be "running and ready" +Mar 7 03:46:33.225: INFO: Pod "pod-hostip-80751c0f-c576-45db-a5a7-21f1556dceca": Phase="Pending", Reason="", readiness=false. Elapsed: 2.477897ms +Mar 7 03:46:33.225: INFO: The phase of Pod pod-hostip-80751c0f-c576-45db-a5a7-21f1556dceca is Pending, waiting for it to be Running (with Ready = true) +Mar 7 03:46:35.228: INFO: Pod "pod-hostip-80751c0f-c576-45db-a5a7-21f1556dceca": Phase="Running", Reason="", readiness=true. Elapsed: 2.00553687s +Mar 7 03:46:35.228: INFO: The phase of Pod pod-hostip-80751c0f-c576-45db-a5a7-21f1556dceca is Running (Ready = true) +Mar 7 03:46:35.228: INFO: Pod "pod-hostip-80751c0f-c576-45db-a5a7-21f1556dceca" satisfied condition "running and ready" +Mar 7 03:46:35.233: INFO: Pod pod-hostip-80751c0f-c576-45db-a5a7-21f1556dceca has hostIP: 192.168.1.102 +[AfterEach] [sig-node] Pods + test/e2e/framework/framework.go:187 +Mar 7 03:46:35.233: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "pods-4139" for this suite. 03/07/23 03:46:35.24 +{"msg":"PASSED [sig-node] Pods should get a host IP [NodeConformance] [Conformance]","completed":255,"skipped":4531,"failed":0} +------------------------------ +• [2.045 seconds] +[sig-node] Pods +test/e2e/common/node/framework.go:23 + should get a host IP [NodeConformance] [Conformance] + test/e2e/common/node/pods.go:203 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-node] Pods + test/e2e/framework/framework.go:186 + STEP: Creating a kubernetes client 03/07/23 03:46:33.199 + Mar 7 03:46:33.200: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 + STEP: Building a namespace api object, basename pods 03/07/23 03:46:33.201 + STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 03:46:33.212 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 03:46:33.214 + [BeforeEach] [sig-node] Pods + test/e2e/common/node/pods.go:193 + [It] should get a host IP [NodeConformance] [Conformance] + test/e2e/common/node/pods.go:203 + STEP: creating pod 03/07/23 03:46:33.216 + Mar 7 03:46:33.222: INFO: Waiting up to 5m0s for pod "pod-hostip-80751c0f-c576-45db-a5a7-21f1556dceca" in namespace "pods-4139" to be "running and ready" + Mar 7 03:46:33.225: INFO: Pod "pod-hostip-80751c0f-c576-45db-a5a7-21f1556dceca": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.477897ms + Mar 7 03:46:33.225: INFO: The phase of Pod pod-hostip-80751c0f-c576-45db-a5a7-21f1556dceca is Pending, waiting for it to be Running (with Ready = true) + Mar 7 03:46:35.228: INFO: Pod "pod-hostip-80751c0f-c576-45db-a5a7-21f1556dceca": Phase="Running", Reason="", readiness=true. Elapsed: 2.00553687s + Mar 7 03:46:35.228: INFO: The phase of Pod pod-hostip-80751c0f-c576-45db-a5a7-21f1556dceca is Running (Ready = true) + Mar 7 03:46:35.228: INFO: Pod "pod-hostip-80751c0f-c576-45db-a5a7-21f1556dceca" satisfied condition "running and ready" + Mar 7 03:46:35.233: INFO: Pod pod-hostip-80751c0f-c576-45db-a5a7-21f1556dceca has hostIP: 192.168.1.102 + [AfterEach] [sig-node] Pods + test/e2e/framework/framework.go:187 + Mar 7 03:46:35.233: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + STEP: Destroying namespace "pods-4139" for this suite. 03/07/23 03:46:35.24 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSS +------------------------------ +[sig-node] Pods + should patch a pod status [Conformance] + test/e2e/common/node/pods.go:1082 +[BeforeEach] [sig-node] Pods + test/e2e/framework/framework.go:186 +STEP: Creating a kubernetes client 03/07/23 03:46:35.245 +Mar 7 03:46:35.245: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 +STEP: Building a namespace api object, basename pods 03/07/23 03:46:35.246 +STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 03:46:35.265 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 03:46:35.267 +[BeforeEach] [sig-node] Pods + test/e2e/common/node/pods.go:193 +[It] should patch a pod status [Conformance] + test/e2e/common/node/pods.go:1082 +STEP: Create a pod 03/07/23 03:46:35.269 +Mar 7 03:46:35.276: INFO: Waiting up to 5m0s for pod "pod-7tpfq" in namespace "pods-7355" to be "running" +Mar 7 03:46:35.278: INFO: Pod "pod-7tpfq": Phase="Pending", Reason="", readiness=false. Elapsed: 2.337953ms +Mar 7 03:46:37.282: INFO: Pod "pod-7tpfq": Phase="Running", Reason="", readiness=true. Elapsed: 2.005624525s +Mar 7 03:46:37.282: INFO: Pod "pod-7tpfq" satisfied condition "running" +STEP: patching /status 03/07/23 03:46:37.282 +Mar 7 03:46:37.288: INFO: Status Message: "Patched by e2e test" and Reason: "E2E" +[AfterEach] [sig-node] Pods + test/e2e/framework/framework.go:187 +Mar 7 03:46:37.288: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "pods-7355" for this suite. 
03/07/23 03:46:37.292 +{"msg":"PASSED [sig-node] Pods should patch a pod status [Conformance]","completed":256,"skipped":4537,"failed":0} +------------------------------ +• [2.051 seconds] +[sig-node] Pods +test/e2e/common/node/framework.go:23 + should patch a pod status [Conformance] + test/e2e/common/node/pods.go:1082 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-node] Pods + test/e2e/framework/framework.go:186 + STEP: Creating a kubernetes client 03/07/23 03:46:35.245 + Mar 7 03:46:35.245: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 + STEP: Building a namespace api object, basename pods 03/07/23 03:46:35.246 + STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 03:46:35.265 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 03:46:35.267 + [BeforeEach] [sig-node] Pods + test/e2e/common/node/pods.go:193 + [It] should patch a pod status [Conformance] + test/e2e/common/node/pods.go:1082 + STEP: Create a pod 03/07/23 03:46:35.269 + Mar 7 03:46:35.276: INFO: Waiting up to 5m0s for pod "pod-7tpfq" in namespace "pods-7355" to be "running" + Mar 7 03:46:35.278: INFO: Pod "pod-7tpfq": Phase="Pending", Reason="", readiness=false. Elapsed: 2.337953ms + Mar 7 03:46:37.282: INFO: Pod "pod-7tpfq": Phase="Running", Reason="", readiness=true. Elapsed: 2.005624525s + Mar 7 03:46:37.282: INFO: Pod "pod-7tpfq" satisfied condition "running" + STEP: patching /status 03/07/23 03:46:37.282 + Mar 7 03:46:37.288: INFO: Status Message: "Patched by e2e test" and Reason: "E2E" + [AfterEach] [sig-node] Pods + test/e2e/framework/framework.go:187 + Mar 7 03:46:37.288: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + STEP: Destroying namespace "pods-7355" for this suite. 03/07/23 03:46:37.292 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSS +------------------------------ +[sig-apps] Deployment + Deployment should have a working scale subresource [Conformance] + test/e2e/apps/deployment.go:150 +[BeforeEach] [sig-apps] Deployment + test/e2e/framework/framework.go:186 +STEP: Creating a kubernetes client 03/07/23 03:46:37.299 +Mar 7 03:46:37.299: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 +STEP: Building a namespace api object, basename deployment 03/07/23 03:46:37.301 +STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 03:46:37.314 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 03:46:37.316 +[BeforeEach] [sig-apps] Deployment + test/e2e/apps/deployment.go:91 +[It] Deployment should have a working scale subresource [Conformance] + test/e2e/apps/deployment.go:150 +Mar 7 03:46:37.317: INFO: Creating simple deployment test-new-deployment +Mar 7 03:46:37.326: INFO: new replicaset for deployment "test-new-deployment" is yet to be created +STEP: getting scale subresource 03/07/23 03:46:39.335 +STEP: updating a scale subresource 03/07/23 03:46:39.337 +STEP: verifying the deployment Spec.Replicas was modified 03/07/23 03:46:39.341 +STEP: Patch a scale subresource 03/07/23 03:46:39.343 +[AfterEach] [sig-apps] Deployment + test/e2e/apps/deployment.go:84 +Mar 7 03:46:39.359: INFO: Deployment "test-new-deployment": +&Deployment{ObjectMeta:{test-new-deployment deployment-6210 899bf99f-69be-4657-80bb-1cf8c158214d 69861 3 2023-03-07 03:46:37 +0000 UTC map[name:httpd] map[deployment.kubernetes.io/revision:1] [] [] [{e2e.test Update apps/v1 FieldsV1 {"f:spec":{"f:replicas":{}}} scale} {e2e.test Update apps/v1 2023-03-07 03:46:37 +0000 UTC FieldsV1 
{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:progressDeadlineSeconds":{},"f:revisionHistoryLimit":{},"f:selector":{},"f:strategy":{"f:rollingUpdate":{".":{},"f:maxSurge":{},"f:maxUnavailable":{}},"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} } {kube-controller-manager Update apps/v1 2023-03-07 03:46:38 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}}},"f:status":{"f:availableReplicas":{},"f:conditions":{".":{},"k:{\"type\":\"Available\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Progressing\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{},"f:updatedReplicas":{}}} status}]},Spec:DeploymentSpec{Replicas:*4,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd] map[] [] [] []} {[] [] [{httpd registry.k8s.io/e2e-test-images/httpd:2.4.38-2 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc002ccd8c8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] nil }},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%!,(MISSING)MaxSurge:25%!,(MISSING)},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2023-03-07 03:46:38 +0000 UTC,LastTransitionTime:2023-03-07 03:46:38 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-new-deployment-845c8977d9" has successfully progressed.,LastUpdateTime:2023-03-07 03:46:38 +0000 UTC,LastTransitionTime:2023-03-07 03:46:37 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},} + +Mar 7 03:46:39.364: INFO: New ReplicaSet "test-new-deployment-845c8977d9" of Deployment "test-new-deployment": +&ReplicaSet{ObjectMeta:{test-new-deployment-845c8977d9 deployment-6210 9b0a7667-cc10-4f9b-86c5-5ba0da3cf4f7 69866 2 2023-03-07 03:46:37 +0000 UTC map[name:httpd pod-template-hash:845c8977d9] map[deployment.kubernetes.io/desired-replicas:2 deployment.kubernetes.io/max-replicas:3 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-new-deployment 
899bf99f-69be-4657-80bb-1cf8c158214d 0xc002ccdd27 0xc002ccdd28}] [] [{kube-controller-manager Update apps/v1 2023-03-07 03:46:39 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"899bf99f-69be-4657-80bb-1cf8c158214d\"}":{}}},"f:spec":{"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} } {kube-controller-manager Update apps/v1 2023-03-07 03:46:39 +0000 UTC FieldsV1 {"f:status":{"f:availableReplicas":{},"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{}}} status}]},Spec:ReplicaSetSpec{Replicas:*2,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: 845c8977d9,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd pod-template-hash:845c8977d9] map[] [] [] []} {[] [] [{httpd registry.k8s.io/e2e-test-images/httpd:2.4.38-2 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc002ccddb8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] nil }},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:2,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} +Mar 7 03:46:39.368: INFO: Pod "test-new-deployment-845c8977d9-4jp75" is not available: +&Pod{ObjectMeta:{test-new-deployment-845c8977d9-4jp75 test-new-deployment-845c8977d9- deployment-6210 940ce176-dc63-4f17-8b48-a734de8aeb70 69865 0 2023-03-07 03:46:39 +0000 UTC map[name:httpd pod-template-hash:845c8977d9] map[] [{apps/v1 ReplicaSet test-new-deployment-845c8977d9 9b0a7667-cc10-4f9b-86c5-5ba0da3cf4f7 0xc00272e1a7 0xc00272e1a8}] [] [{kube-controller-manager Update v1 2023-03-07 03:46:39 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"9b0a7667-cc10-4f9b-86c5-5ba0da3cf4f7\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} 
}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-tvlg4,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:registry.k8s.io/e2e-test-images/httpd:2.4.38-2,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-tvlg4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:node-1,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,HostUsers:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type
:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-03-07 03:46:39 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} +Mar 7 03:46:39.368: INFO: Pod "test-new-deployment-845c8977d9-w22d8" is available: +&Pod{ObjectMeta:{test-new-deployment-845c8977d9-w22d8 test-new-deployment-845c8977d9- deployment-6210 7f46d40a-077b-480a-ba56-cdc26cf136a1 69822 0 2023-03-07 03:46:37 +0000 UTC map[name:httpd pod-template-hash:845c8977d9] map[cni.projectcalico.org/containerID:6065b11957e8f7f5a2b3cde4b1b76c3be195e9624f5c0120413aa577ae45fc97 cni.projectcalico.org/podIP:10.233.247.12/32 cni.projectcalico.org/podIPs:10.233.247.12/32] [{apps/v1 ReplicaSet test-new-deployment-845c8977d9 9b0a7667-cc10-4f9b-86c5-5ba0da3cf4f7 0xc00272e330 0xc00272e331}] [] [{calico Update v1 2023-03-07 03:46:37 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:cni.projectcalico.org/containerID":{},"f:cni.projectcalico.org/podIP":{},"f:cni.projectcalico.org/podIPs":{}}}} status} {kube-controller-manager Update v1 2023-03-07 03:46:37 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"9b0a7667-cc10-4f9b-86c5-5ba0da3cf4f7\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2023-03-07 03:46:38 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.233.247.12\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} 
status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-gfp5x,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:registry.k8s.io/e2e-test-images/httpd:2.4.38-2,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-gfp5x,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:node-2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,HostUsers:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodConditio
n{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-03-07 03:46:37 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-03-07 03:46:38 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-03-07 03:46:38 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-03-07 03:46:37 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:192.168.1.102,PodIP:10.233.247.12,StartTime:2023-03-07 03:46:37 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2023-03-07 03:46:38 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:registry.k8s.io/e2e-test-images/httpd:2.4.38-2,ImageID:registry.k8s.io/e2e-test-images/httpd@sha256:1b9d1b2f36cb2dbee1960e82a9344aeb11bd4c4c03abf5e1853e0559c23855e3,ContainerID:containerd://4b448316a8553897c41edc07eb6840b1fee759144ce40e0e67bd6ef65a061b99,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.233.247.12,},},EphemeralContainerStatuses:[]ContainerStatus{},},} +[AfterEach] [sig-apps] Deployment + test/e2e/framework/framework.go:187 +Mar 7 03:46:39.368: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "deployment-6210" for this suite. 03/07/23 03:46:39.372 +{"msg":"PASSED [sig-apps] Deployment Deployment should have a working scale subresource [Conformance]","completed":257,"skipped":4543,"failed":0} +------------------------------ +• [2.080 seconds] +[sig-apps] Deployment +test/e2e/apps/framework.go:23 + Deployment should have a working scale subresource [Conformance] + test/e2e/apps/deployment.go:150 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-apps] Deployment + test/e2e/framework/framework.go:186 + STEP: Creating a kubernetes client 03/07/23 03:46:37.299 + Mar 7 03:46:37.299: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 + STEP: Building a namespace api object, basename deployment 03/07/23 03:46:37.301 + STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 03:46:37.314 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 03:46:37.316 + [BeforeEach] [sig-apps] Deployment + test/e2e/apps/deployment.go:91 + [It] Deployment should have a working scale subresource [Conformance] + test/e2e/apps/deployment.go:150 + Mar 7 03:46:37.317: INFO: Creating simple deployment test-new-deployment + Mar 7 03:46:37.326: INFO: new replicaset for deployment "test-new-deployment" is yet to be created + STEP: getting scale subresource 03/07/23 03:46:39.335 + STEP: updating a scale subresource 03/07/23 03:46:39.337 + STEP: verifying the deployment Spec.Replicas was modified 03/07/23 03:46:39.341 + STEP: Patch a scale subresource 03/07/23 03:46:39.343 + [AfterEach] [sig-apps] Deployment + test/e2e/apps/deployment.go:84 + Mar 7 03:46:39.359: INFO: Deployment "test-new-deployment": + &Deployment{ObjectMeta:{test-new-deployment deployment-6210 899bf99f-69be-4657-80bb-1cf8c158214d 69861 3 2023-03-07 03:46:37 +0000 UTC map[name:httpd] map[deployment.kubernetes.io/revision:1] [] [] [{e2e.test Update apps/v1 FieldsV1 {"f:spec":{"f:replicas":{}}} 
scale} {e2e.test Update apps/v1 2023-03-07 03:46:37 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:progressDeadlineSeconds":{},"f:revisionHistoryLimit":{},"f:selector":{},"f:strategy":{"f:rollingUpdate":{".":{},"f:maxSurge":{},"f:maxUnavailable":{}},"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} } {kube-controller-manager Update apps/v1 2023-03-07 03:46:38 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}}},"f:status":{"f:availableReplicas":{},"f:conditions":{".":{},"k:{\"type\":\"Available\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Progressing\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{},"f:updatedReplicas":{}}} status}]},Spec:DeploymentSpec{Replicas:*4,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd] map[] [] [] []} {[] [] [{httpd registry.k8s.io/e2e-test-images/httpd:2.4.38-2 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc002ccd8c8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] nil }},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%!,(MISSING)MaxSurge:25%!,(MISSING)},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2023-03-07 03:46:38 +0000 UTC,LastTransitionTime:2023-03-07 03:46:38 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-new-deployment-845c8977d9" has successfully progressed.,LastUpdateTime:2023-03-07 03:46:38 +0000 UTC,LastTransitionTime:2023-03-07 03:46:37 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},} + + Mar 7 03:46:39.364: INFO: New ReplicaSet "test-new-deployment-845c8977d9" of Deployment "test-new-deployment": + &ReplicaSet{ObjectMeta:{test-new-deployment-845c8977d9 deployment-6210 9b0a7667-cc10-4f9b-86c5-5ba0da3cf4f7 69866 2 2023-03-07 03:46:37 +0000 UTC map[name:httpd pod-template-hash:845c8977d9] map[deployment.kubernetes.io/desired-replicas:2 deployment.kubernetes.io/max-replicas:3 
deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-new-deployment 899bf99f-69be-4657-80bb-1cf8c158214d 0xc002ccdd27 0xc002ccdd28}] [] [{kube-controller-manager Update apps/v1 2023-03-07 03:46:39 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"899bf99f-69be-4657-80bb-1cf8c158214d\"}":{}}},"f:spec":{"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} } {kube-controller-manager Update apps/v1 2023-03-07 03:46:39 +0000 UTC FieldsV1 {"f:status":{"f:availableReplicas":{},"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{}}} status}]},Spec:ReplicaSetSpec{Replicas:*2,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: 845c8977d9,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd pod-template-hash:845c8977d9] map[] [] [] []} {[] [] [{httpd registry.k8s.io/e2e-test-images/httpd:2.4.38-2 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc002ccddb8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] nil }},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:2,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} + Mar 7 03:46:39.368: INFO: Pod "test-new-deployment-845c8977d9-4jp75" is not available: + &Pod{ObjectMeta:{test-new-deployment-845c8977d9-4jp75 test-new-deployment-845c8977d9- deployment-6210 940ce176-dc63-4f17-8b48-a734de8aeb70 69865 0 2023-03-07 03:46:39 +0000 UTC map[name:httpd pod-template-hash:845c8977d9] map[] [{apps/v1 ReplicaSet test-new-deployment-845c8977d9 9b0a7667-cc10-4f9b-86c5-5ba0da3cf4f7 0xc00272e1a7 0xc00272e1a8}] [] [{kube-controller-manager Update v1 2023-03-07 03:46:39 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"9b0a7667-cc10-4f9b-86c5-5ba0da3cf4f7\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} 
}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-tvlg4,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:registry.k8s.io/e2e-test-images/httpd:2.4.38-2,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-tvlg4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:node-1,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,HostUsers:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type
:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-03-07 03:46:39 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} + Mar 7 03:46:39.368: INFO: Pod "test-new-deployment-845c8977d9-w22d8" is available: + &Pod{ObjectMeta:{test-new-deployment-845c8977d9-w22d8 test-new-deployment-845c8977d9- deployment-6210 7f46d40a-077b-480a-ba56-cdc26cf136a1 69822 0 2023-03-07 03:46:37 +0000 UTC map[name:httpd pod-template-hash:845c8977d9] map[cni.projectcalico.org/containerID:6065b11957e8f7f5a2b3cde4b1b76c3be195e9624f5c0120413aa577ae45fc97 cni.projectcalico.org/podIP:10.233.247.12/32 cni.projectcalico.org/podIPs:10.233.247.12/32] [{apps/v1 ReplicaSet test-new-deployment-845c8977d9 9b0a7667-cc10-4f9b-86c5-5ba0da3cf4f7 0xc00272e330 0xc00272e331}] [] [{calico Update v1 2023-03-07 03:46:37 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:cni.projectcalico.org/containerID":{},"f:cni.projectcalico.org/podIP":{},"f:cni.projectcalico.org/podIPs":{}}}} status} {kube-controller-manager Update v1 2023-03-07 03:46:37 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"9b0a7667-cc10-4f9b-86c5-5ba0da3cf4f7\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2023-03-07 03:46:38 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.233.247.12\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} 
status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-gfp5x,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:registry.k8s.io/e2e-test-images/httpd:2.4.38-2,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-gfp5x,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:node-2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,HostUsers:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodConditio
n{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-03-07 03:46:37 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-03-07 03:46:38 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-03-07 03:46:38 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-03-07 03:46:37 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:192.168.1.102,PodIP:10.233.247.12,StartTime:2023-03-07 03:46:37 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2023-03-07 03:46:38 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:registry.k8s.io/e2e-test-images/httpd:2.4.38-2,ImageID:registry.k8s.io/e2e-test-images/httpd@sha256:1b9d1b2f36cb2dbee1960e82a9344aeb11bd4c4c03abf5e1853e0559c23855e3,ContainerID:containerd://4b448316a8553897c41edc07eb6840b1fee759144ce40e0e67bd6ef65a061b99,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.233.247.12,},},EphemeralContainerStatuses:[]ContainerStatus{},},} + [AfterEach] [sig-apps] Deployment + test/e2e/framework/framework.go:187 + Mar 7 03:46:39.368: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + STEP: Destroying namespace "deployment-6210" for this suite. 03/07/23 03:46:39.372 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-node] Variable Expansion + should allow composing env vars into new env vars [NodeConformance] [Conformance] + test/e2e/common/node/expansion.go:43 +[BeforeEach] [sig-node] Variable Expansion + test/e2e/framework/framework.go:186 +STEP: Creating a kubernetes client 03/07/23 03:46:39.38 +Mar 7 03:46:39.380: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 +STEP: Building a namespace api object, basename var-expansion 03/07/23 03:46:39.38 +STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 03:46:39.397 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 03:46:39.4 +[It] should allow composing env vars into new env vars [NodeConformance] [Conformance] + test/e2e/common/node/expansion.go:43 +STEP: Creating a pod to test env composition 03/07/23 03:46:39.401 +Mar 7 03:46:39.407: INFO: Waiting up to 5m0s for pod "var-expansion-e7d9d73f-74f0-4652-b1b1-0df4fea2e560" in namespace "var-expansion-4769" to be "Succeeded or Failed" +Mar 7 03:46:39.411: INFO: Pod "var-expansion-e7d9d73f-74f0-4652-b1b1-0df4fea2e560": Phase="Pending", Reason="", readiness=false. Elapsed: 3.310327ms +Mar 7 03:46:41.414: INFO: Pod "var-expansion-e7d9d73f-74f0-4652-b1b1-0df4fea2e560": Phase="Running", Reason="", readiness=false. Elapsed: 2.006489577s +Mar 7 03:46:43.414: INFO: Pod "var-expansion-e7d9d73f-74f0-4652-b1b1-0df4fea2e560": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.006829323s +STEP: Saw pod success 03/07/23 03:46:43.414 +Mar 7 03:46:43.414: INFO: Pod "var-expansion-e7d9d73f-74f0-4652-b1b1-0df4fea2e560" satisfied condition "Succeeded or Failed" +Mar 7 03:46:43.416: INFO: Trying to get logs from node node-2 pod var-expansion-e7d9d73f-74f0-4652-b1b1-0df4fea2e560 container dapi-container: +STEP: delete the pod 03/07/23 03:46:43.422 +Mar 7 03:46:43.432: INFO: Waiting for pod var-expansion-e7d9d73f-74f0-4652-b1b1-0df4fea2e560 to disappear +Mar 7 03:46:43.434: INFO: Pod var-expansion-e7d9d73f-74f0-4652-b1b1-0df4fea2e560 no longer exists +[AfterEach] [sig-node] Variable Expansion + test/e2e/framework/framework.go:187 +Mar 7 03:46:43.434: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "var-expansion-4769" for this suite. 03/07/23 03:46:43.438 +{"msg":"PASSED [sig-node] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance]","completed":258,"skipped":4568,"failed":0} +------------------------------ +• [4.063 seconds] +[sig-node] Variable Expansion +test/e2e/common/node/framework.go:23 + should allow composing env vars into new env vars [NodeConformance] [Conformance] + test/e2e/common/node/expansion.go:43 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-node] Variable Expansion + test/e2e/framework/framework.go:186 + STEP: Creating a kubernetes client 03/07/23 03:46:39.38 + Mar 7 03:46:39.380: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 + STEP: Building a namespace api object, basename var-expansion 03/07/23 03:46:39.38 + STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 03:46:39.397 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 03:46:39.4 + [It] should allow composing env vars into new env vars [NodeConformance] [Conformance] + test/e2e/common/node/expansion.go:43 + STEP: Creating a pod to test env composition 03/07/23 03:46:39.401 + Mar 7 03:46:39.407: INFO: Waiting up to 5m0s for pod "var-expansion-e7d9d73f-74f0-4652-b1b1-0df4fea2e560" in namespace "var-expansion-4769" to be "Succeeded or Failed" + Mar 7 03:46:39.411: INFO: Pod "var-expansion-e7d9d73f-74f0-4652-b1b1-0df4fea2e560": Phase="Pending", Reason="", readiness=false. Elapsed: 3.310327ms + Mar 7 03:46:41.414: INFO: Pod "var-expansion-e7d9d73f-74f0-4652-b1b1-0df4fea2e560": Phase="Running", Reason="", readiness=false. Elapsed: 2.006489577s + Mar 7 03:46:43.414: INFO: Pod "var-expansion-e7d9d73f-74f0-4652-b1b1-0df4fea2e560": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.006829323s + STEP: Saw pod success 03/07/23 03:46:43.414 + Mar 7 03:46:43.414: INFO: Pod "var-expansion-e7d9d73f-74f0-4652-b1b1-0df4fea2e560" satisfied condition "Succeeded or Failed" + Mar 7 03:46:43.416: INFO: Trying to get logs from node node-2 pod var-expansion-e7d9d73f-74f0-4652-b1b1-0df4fea2e560 container dapi-container: + STEP: delete the pod 03/07/23 03:46:43.422 + Mar 7 03:46:43.432: INFO: Waiting for pod var-expansion-e7d9d73f-74f0-4652-b1b1-0df4fea2e560 to disappear + Mar 7 03:46:43.434: INFO: Pod var-expansion-e7d9d73f-74f0-4652-b1b1-0df4fea2e560 no longer exists + [AfterEach] [sig-node] Variable Expansion + test/e2e/framework/framework.go:187 + Mar 7 03:46:43.434: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + STEP: Destroying namespace "var-expansion-4769" for this suite. 
03/07/23 03:46:43.438 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSSSS +------------------------------ +[sig-storage] Downward API volume + should update annotations on modification [NodeConformance] [Conformance] + test/e2e/common/storage/downwardapi_volume.go:161 +[BeforeEach] [sig-storage] Downward API volume + test/e2e/framework/framework.go:186 +STEP: Creating a kubernetes client 03/07/23 03:46:43.444 +Mar 7 03:46:43.444: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 +STEP: Building a namespace api object, basename downward-api 03/07/23 03:46:43.445 +STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 03:46:43.464 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 03:46:43.466 +[BeforeEach] [sig-storage] Downward API volume + test/e2e/common/storage/downwardapi_volume.go:43 +[It] should update annotations on modification [NodeConformance] [Conformance] + test/e2e/common/storage/downwardapi_volume.go:161 +STEP: Creating the pod 03/07/23 03:46:43.468 +Mar 7 03:46:43.475: INFO: Waiting up to 5m0s for pod "annotationupdatef9daa4a7-e5a6-45ea-a2ab-6ee3a58a225d" in namespace "downward-api-8446" to be "running and ready" +Mar 7 03:46:43.478: INFO: Pod "annotationupdatef9daa4a7-e5a6-45ea-a2ab-6ee3a58a225d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.430571ms +Mar 7 03:46:43.478: INFO: The phase of Pod annotationupdatef9daa4a7-e5a6-45ea-a2ab-6ee3a58a225d is Pending, waiting for it to be Running (with Ready = true) +Mar 7 03:46:45.481: INFO: Pod "annotationupdatef9daa4a7-e5a6-45ea-a2ab-6ee3a58a225d": Phase="Running", Reason="", readiness=true. Elapsed: 2.005655679s +Mar 7 03:46:45.481: INFO: The phase of Pod annotationupdatef9daa4a7-e5a6-45ea-a2ab-6ee3a58a225d is Running (Ready = true) +Mar 7 03:46:45.481: INFO: Pod "annotationupdatef9daa4a7-e5a6-45ea-a2ab-6ee3a58a225d" satisfied condition "running and ready" +Mar 7 03:46:45.998: INFO: Successfully updated pod "annotationupdatef9daa4a7-e5a6-45ea-a2ab-6ee3a58a225d" +[AfterEach] [sig-storage] Downward API volume + test/e2e/framework/framework.go:187 +Mar 7 03:46:50.015: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "downward-api-8446" for this suite. 
03/07/23 03:46:50.019 +{"msg":"PASSED [sig-storage] Downward API volume should update annotations on modification [NodeConformance] [Conformance]","completed":259,"skipped":4581,"failed":0} +------------------------------ +• [SLOW TEST] [6.579 seconds] +[sig-storage] Downward API volume +test/e2e/common/storage/framework.go:23 + should update annotations on modification [NodeConformance] [Conformance] + test/e2e/common/storage/downwardapi_volume.go:161 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-storage] Downward API volume + test/e2e/framework/framework.go:186 + STEP: Creating a kubernetes client 03/07/23 03:46:43.444 + Mar 7 03:46:43.444: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 + STEP: Building a namespace api object, basename downward-api 03/07/23 03:46:43.445 + STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 03:46:43.464 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 03:46:43.466 + [BeforeEach] [sig-storage] Downward API volume + test/e2e/common/storage/downwardapi_volume.go:43 + [It] should update annotations on modification [NodeConformance] [Conformance] + test/e2e/common/storage/downwardapi_volume.go:161 + STEP: Creating the pod 03/07/23 03:46:43.468 + Mar 7 03:46:43.475: INFO: Waiting up to 5m0s for pod "annotationupdatef9daa4a7-e5a6-45ea-a2ab-6ee3a58a225d" in namespace "downward-api-8446" to be "running and ready" + Mar 7 03:46:43.478: INFO: Pod "annotationupdatef9daa4a7-e5a6-45ea-a2ab-6ee3a58a225d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.430571ms + Mar 7 03:46:43.478: INFO: The phase of Pod annotationupdatef9daa4a7-e5a6-45ea-a2ab-6ee3a58a225d is Pending, waiting for it to be Running (with Ready = true) + Mar 7 03:46:45.481: INFO: Pod "annotationupdatef9daa4a7-e5a6-45ea-a2ab-6ee3a58a225d": Phase="Running", Reason="", readiness=true. Elapsed: 2.005655679s + Mar 7 03:46:45.481: INFO: The phase of Pod annotationupdatef9daa4a7-e5a6-45ea-a2ab-6ee3a58a225d is Running (Ready = true) + Mar 7 03:46:45.481: INFO: Pod "annotationupdatef9daa4a7-e5a6-45ea-a2ab-6ee3a58a225d" satisfied condition "running and ready" + Mar 7 03:46:45.998: INFO: Successfully updated pod "annotationupdatef9daa4a7-e5a6-45ea-a2ab-6ee3a58a225d" + [AfterEach] [sig-storage] Downward API volume + test/e2e/framework/framework.go:187 + Mar 7 03:46:50.015: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + STEP: Destroying namespace "downward-api-8446" for this suite. 
03/07/23 03:46:50.019 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSSSSSS +------------------------------ +[sig-storage] EmptyDir volumes + should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/common/storage/empty_dir.go:106 +[BeforeEach] [sig-storage] EmptyDir volumes + test/e2e/framework/framework.go:186 +STEP: Creating a kubernetes client 03/07/23 03:46:50.024 +Mar 7 03:46:50.024: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 +STEP: Building a namespace api object, basename emptydir 03/07/23 03:46:50.024 +STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 03:46:50.037 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 03:46:50.039 +[It] should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/common/storage/empty_dir.go:106 +STEP: Creating a pod to test emptydir 0666 on tmpfs 03/07/23 03:46:50.041 +Mar 7 03:46:50.046: INFO: Waiting up to 5m0s for pod "pod-b19d8dd1-fd0b-4daa-8f58-d0f7f57527db" in namespace "emptydir-7809" to be "Succeeded or Failed" +Mar 7 03:46:50.049: INFO: Pod "pod-b19d8dd1-fd0b-4daa-8f58-d0f7f57527db": Phase="Pending", Reason="", readiness=false. Elapsed: 2.564194ms +Mar 7 03:46:52.053: INFO: Pod "pod-b19d8dd1-fd0b-4daa-8f58-d0f7f57527db": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006965281s +Mar 7 03:46:54.053: INFO: Pod "pod-b19d8dd1-fd0b-4daa-8f58-d0f7f57527db": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.006193724s +STEP: Saw pod success 03/07/23 03:46:54.053 +Mar 7 03:46:54.053: INFO: Pod "pod-b19d8dd1-fd0b-4daa-8f58-d0f7f57527db" satisfied condition "Succeeded or Failed" +Mar 7 03:46:54.055: INFO: Trying to get logs from node node-2 pod pod-b19d8dd1-fd0b-4daa-8f58-d0f7f57527db container test-container: +STEP: delete the pod 03/07/23 03:46:54.06 +Mar 7 03:46:54.070: INFO: Waiting for pod pod-b19d8dd1-fd0b-4daa-8f58-d0f7f57527db to disappear +Mar 7 03:46:54.072: INFO: Pod pod-b19d8dd1-fd0b-4daa-8f58-d0f7f57527db no longer exists +[AfterEach] [sig-storage] EmptyDir volumes + test/e2e/framework/framework.go:187 +Mar 7 03:46:54.072: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "emptydir-7809" for this suite. 
03/07/23 03:46:54.075 +{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","completed":260,"skipped":4596,"failed":0} +------------------------------ +• [4.056 seconds] +[sig-storage] EmptyDir volumes +test/e2e/common/storage/framework.go:23 + should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/common/storage/empty_dir.go:106 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-storage] EmptyDir volumes + test/e2e/framework/framework.go:186 + STEP: Creating a kubernetes client 03/07/23 03:46:50.024 + Mar 7 03:46:50.024: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 + STEP: Building a namespace api object, basename emptydir 03/07/23 03:46:50.024 + STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 03:46:50.037 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 03:46:50.039 + [It] should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/common/storage/empty_dir.go:106 + STEP: Creating a pod to test emptydir 0666 on tmpfs 03/07/23 03:46:50.041 + Mar 7 03:46:50.046: INFO: Waiting up to 5m0s for pod "pod-b19d8dd1-fd0b-4daa-8f58-d0f7f57527db" in namespace "emptydir-7809" to be "Succeeded or Failed" + Mar 7 03:46:50.049: INFO: Pod "pod-b19d8dd1-fd0b-4daa-8f58-d0f7f57527db": Phase="Pending", Reason="", readiness=false. Elapsed: 2.564194ms + Mar 7 03:46:52.053: INFO: Pod "pod-b19d8dd1-fd0b-4daa-8f58-d0f7f57527db": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006965281s + Mar 7 03:46:54.053: INFO: Pod "pod-b19d8dd1-fd0b-4daa-8f58-d0f7f57527db": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.006193724s + STEP: Saw pod success 03/07/23 03:46:54.053 + Mar 7 03:46:54.053: INFO: Pod "pod-b19d8dd1-fd0b-4daa-8f58-d0f7f57527db" satisfied condition "Succeeded or Failed" + Mar 7 03:46:54.055: INFO: Trying to get logs from node node-2 pod pod-b19d8dd1-fd0b-4daa-8f58-d0f7f57527db container test-container: + STEP: delete the pod 03/07/23 03:46:54.06 + Mar 7 03:46:54.070: INFO: Waiting for pod pod-b19d8dd1-fd0b-4daa-8f58-d0f7f57527db to disappear + Mar 7 03:46:54.072: INFO: Pod pod-b19d8dd1-fd0b-4daa-8f58-d0f7f57527db no longer exists + [AfterEach] [sig-storage] EmptyDir volumes + test/e2e/framework/framework.go:187 + Mar 7 03:46:54.072: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + STEP: Destroying namespace "emptydir-7809" for this suite. 
03/07/23 03:46:54.075 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSS +------------------------------ +[sig-storage] Projected downwardAPI + should provide podname only [NodeConformance] [Conformance] + test/e2e/common/storage/projected_downwardapi.go:52 +[BeforeEach] [sig-storage] Projected downwardAPI + test/e2e/framework/framework.go:186 +STEP: Creating a kubernetes client 03/07/23 03:46:54.08 +Mar 7 03:46:54.080: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 +STEP: Building a namespace api object, basename projected 03/07/23 03:46:54.081 +STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 03:46:54.096 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 03:46:54.098 +[BeforeEach] [sig-storage] Projected downwardAPI + test/e2e/common/storage/projected_downwardapi.go:43 +[It] should provide podname only [NodeConformance] [Conformance] + test/e2e/common/storage/projected_downwardapi.go:52 +STEP: Creating a pod to test downward API volume plugin 03/07/23 03:46:54.1 +Mar 7 03:46:54.109: INFO: Waiting up to 5m0s for pod "downwardapi-volume-9938ba8b-9648-4e8e-9b0c-2dd0c58d59ff" in namespace "projected-2592" to be "Succeeded or Failed" +Mar 7 03:46:54.112: INFO: Pod "downwardapi-volume-9938ba8b-9648-4e8e-9b0c-2dd0c58d59ff": Phase="Pending", Reason="", readiness=false. Elapsed: 2.944668ms +Mar 7 03:46:56.115: INFO: Pod "downwardapi-volume-9938ba8b-9648-4e8e-9b0c-2dd0c58d59ff": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006233036s +Mar 7 03:46:58.115: INFO: Pod "downwardapi-volume-9938ba8b-9648-4e8e-9b0c-2dd0c58d59ff": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.006146103s +STEP: Saw pod success 03/07/23 03:46:58.115 +Mar 7 03:46:58.116: INFO: Pod "downwardapi-volume-9938ba8b-9648-4e8e-9b0c-2dd0c58d59ff" satisfied condition "Succeeded or Failed" +Mar 7 03:46:58.118: INFO: Trying to get logs from node node-2 pod downwardapi-volume-9938ba8b-9648-4e8e-9b0c-2dd0c58d59ff container client-container: +STEP: delete the pod 03/07/23 03:46:58.123 +Mar 7 03:46:58.143: INFO: Waiting for pod downwardapi-volume-9938ba8b-9648-4e8e-9b0c-2dd0c58d59ff to disappear +Mar 7 03:46:58.169: INFO: Pod downwardapi-volume-9938ba8b-9648-4e8e-9b0c-2dd0c58d59ff no longer exists +[AfterEach] [sig-storage] Projected downwardAPI + test/e2e/framework/framework.go:187 +Mar 7 03:46:58.169: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "projected-2592" for this suite. 
03/07/23 03:46:58.181 +{"msg":"PASSED [sig-storage] Projected downwardAPI should provide podname only [NodeConformance] [Conformance]","completed":261,"skipped":4605,"failed":0} +------------------------------ +• [4.126 seconds] +[sig-storage] Projected downwardAPI +test/e2e/common/storage/framework.go:23 + should provide podname only [NodeConformance] [Conformance] + test/e2e/common/storage/projected_downwardapi.go:52 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-storage] Projected downwardAPI + test/e2e/framework/framework.go:186 + STEP: Creating a kubernetes client 03/07/23 03:46:54.08 + Mar 7 03:46:54.080: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 + STEP: Building a namespace api object, basename projected 03/07/23 03:46:54.081 + STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 03:46:54.096 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 03:46:54.098 + [BeforeEach] [sig-storage] Projected downwardAPI + test/e2e/common/storage/projected_downwardapi.go:43 + [It] should provide podname only [NodeConformance] [Conformance] + test/e2e/common/storage/projected_downwardapi.go:52 + STEP: Creating a pod to test downward API volume plugin 03/07/23 03:46:54.1 + Mar 7 03:46:54.109: INFO: Waiting up to 5m0s for pod "downwardapi-volume-9938ba8b-9648-4e8e-9b0c-2dd0c58d59ff" in namespace "projected-2592" to be "Succeeded or Failed" + Mar 7 03:46:54.112: INFO: Pod "downwardapi-volume-9938ba8b-9648-4e8e-9b0c-2dd0c58d59ff": Phase="Pending", Reason="", readiness=false. Elapsed: 2.944668ms + Mar 7 03:46:56.115: INFO: Pod "downwardapi-volume-9938ba8b-9648-4e8e-9b0c-2dd0c58d59ff": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006233036s + Mar 7 03:46:58.115: INFO: Pod "downwardapi-volume-9938ba8b-9648-4e8e-9b0c-2dd0c58d59ff": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.006146103s + STEP: Saw pod success 03/07/23 03:46:58.115 + Mar 7 03:46:58.116: INFO: Pod "downwardapi-volume-9938ba8b-9648-4e8e-9b0c-2dd0c58d59ff" satisfied condition "Succeeded or Failed" + Mar 7 03:46:58.118: INFO: Trying to get logs from node node-2 pod downwardapi-volume-9938ba8b-9648-4e8e-9b0c-2dd0c58d59ff container client-container: + STEP: delete the pod 03/07/23 03:46:58.123 + Mar 7 03:46:58.143: INFO: Waiting for pod downwardapi-volume-9938ba8b-9648-4e8e-9b0c-2dd0c58d59ff to disappear + Mar 7 03:46:58.169: INFO: Pod downwardapi-volume-9938ba8b-9648-4e8e-9b0c-2dd0c58d59ff no longer exists + [AfterEach] [sig-storage] Projected downwardAPI + test/e2e/framework/framework.go:187 + Mar 7 03:46:58.169: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + STEP: Destroying namespace "projected-2592" for this suite. 
03/07/23 03:46:58.181 + << End Captured GinkgoWriter Output +------------------------------ +S +------------------------------ +[sig-storage] Downward API volume + should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/common/storage/downwardapi_volume.go:67 +[BeforeEach] [sig-storage] Downward API volume + test/e2e/framework/framework.go:186 +STEP: Creating a kubernetes client 03/07/23 03:46:58.206 +Mar 7 03:46:58.206: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 +STEP: Building a namespace api object, basename downward-api 03/07/23 03:46:58.207 +STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 03:46:58.23 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 03:46:58.232 +[BeforeEach] [sig-storage] Downward API volume + test/e2e/common/storage/downwardapi_volume.go:43 +[It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/common/storage/downwardapi_volume.go:67 +STEP: Creating a pod to test downward API volume plugin 03/07/23 03:46:58.234 +Mar 7 03:46:58.240: INFO: Waiting up to 5m0s for pod "downwardapi-volume-d3c7f1ff-d98e-4b91-ae38-1d6c486f6bd4" in namespace "downward-api-4950" to be "Succeeded or Failed" +Mar 7 03:46:58.242: INFO: Pod "downwardapi-volume-d3c7f1ff-d98e-4b91-ae38-1d6c486f6bd4": Phase="Pending", Reason="", readiness=false. Elapsed: 1.855006ms +Mar 7 03:47:00.246: INFO: Pod "downwardapi-volume-d3c7f1ff-d98e-4b91-ae38-1d6c486f6bd4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005794812s +Mar 7 03:47:02.246: INFO: Pod "downwardapi-volume-d3c7f1ff-d98e-4b91-ae38-1d6c486f6bd4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.005965227s +STEP: Saw pod success 03/07/23 03:47:02.246 +Mar 7 03:47:02.246: INFO: Pod "downwardapi-volume-d3c7f1ff-d98e-4b91-ae38-1d6c486f6bd4" satisfied condition "Succeeded or Failed" +Mar 7 03:47:02.251: INFO: Trying to get logs from node node-2 pod downwardapi-volume-d3c7f1ff-d98e-4b91-ae38-1d6c486f6bd4 container client-container: +STEP: delete the pod 03/07/23 03:47:02.255 +Mar 7 03:47:02.262: INFO: Waiting for pod downwardapi-volume-d3c7f1ff-d98e-4b91-ae38-1d6c486f6bd4 to disappear +Mar 7 03:47:02.265: INFO: Pod downwardapi-volume-d3c7f1ff-d98e-4b91-ae38-1d6c486f6bd4 no longer exists +[AfterEach] [sig-storage] Downward API volume + test/e2e/framework/framework.go:187 +Mar 7 03:47:02.265: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "downward-api-4950" for this suite. 
03/07/23 03:47:02.268 +{"msg":"PASSED [sig-storage] Downward API volume should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","completed":262,"skipped":4606,"failed":0} +------------------------------ +• [4.067 seconds] +[sig-storage] Downward API volume +test/e2e/common/storage/framework.go:23 + should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/common/storage/downwardapi_volume.go:67 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-storage] Downward API volume + test/e2e/framework/framework.go:186 + STEP: Creating a kubernetes client 03/07/23 03:46:58.206 + Mar 7 03:46:58.206: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 + STEP: Building a namespace api object, basename downward-api 03/07/23 03:46:58.207 + STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 03:46:58.23 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 03:46:58.232 + [BeforeEach] [sig-storage] Downward API volume + test/e2e/common/storage/downwardapi_volume.go:43 + [It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/common/storage/downwardapi_volume.go:67 + STEP: Creating a pod to test downward API volume plugin 03/07/23 03:46:58.234 + Mar 7 03:46:58.240: INFO: Waiting up to 5m0s for pod "downwardapi-volume-d3c7f1ff-d98e-4b91-ae38-1d6c486f6bd4" in namespace "downward-api-4950" to be "Succeeded or Failed" + Mar 7 03:46:58.242: INFO: Pod "downwardapi-volume-d3c7f1ff-d98e-4b91-ae38-1d6c486f6bd4": Phase="Pending", Reason="", readiness=false. Elapsed: 1.855006ms + Mar 7 03:47:00.246: INFO: Pod "downwardapi-volume-d3c7f1ff-d98e-4b91-ae38-1d6c486f6bd4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005794812s + Mar 7 03:47:02.246: INFO: Pod "downwardapi-volume-d3c7f1ff-d98e-4b91-ae38-1d6c486f6bd4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.005965227s + STEP: Saw pod success 03/07/23 03:47:02.246 + Mar 7 03:47:02.246: INFO: Pod "downwardapi-volume-d3c7f1ff-d98e-4b91-ae38-1d6c486f6bd4" satisfied condition "Succeeded or Failed" + Mar 7 03:47:02.251: INFO: Trying to get logs from node node-2 pod downwardapi-volume-d3c7f1ff-d98e-4b91-ae38-1d6c486f6bd4 container client-container: + STEP: delete the pod 03/07/23 03:47:02.255 + Mar 7 03:47:02.262: INFO: Waiting for pod downwardapi-volume-d3c7f1ff-d98e-4b91-ae38-1d6c486f6bd4 to disappear + Mar 7 03:47:02.265: INFO: Pod downwardapi-volume-d3c7f1ff-d98e-4b91-ae38-1d6c486f6bd4 no longer exists + [AfterEach] [sig-storage] Downward API volume + test/e2e/framework/framework.go:187 + Mar 7 03:47:02.265: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + STEP: Destroying namespace "downward-api-4950" for this suite. 
03/07/23 03:47:02.268 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + should mutate pod and apply defaults after mutation [Conformance] + test/e2e/apimachinery/webhook.go:263 +[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/framework/framework.go:186 +STEP: Creating a kubernetes client 03/07/23 03:47:02.274 +Mar 7 03:47:02.274: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 +STEP: Building a namespace api object, basename webhook 03/07/23 03:47:02.274 +STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 03:47:02.287 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 03:47:02.289 +[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/apimachinery/webhook.go:89 +STEP: Setting up server cert 03/07/23 03:47:02.301 +STEP: Create role binding to let webhook read extension-apiserver-authentication 03/07/23 03:47:02.741 +STEP: Deploying the webhook pod 03/07/23 03:47:02.747 +STEP: Wait for the deployment to be ready 03/07/23 03:47:02.755 +Mar 7 03:47:02.761: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set +STEP: Deploying the webhook service 03/07/23 03:47:04.768 +STEP: Verifying the service has paired with the endpoint 03/07/23 03:47:04.801 +Mar 7 03:47:05.801: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 +[It] should mutate pod and apply defaults after mutation [Conformance] + test/e2e/apimachinery/webhook.go:263 +STEP: Registering the mutating pod webhook via the AdmissionRegistration API 03/07/23 03:47:05.804 +STEP: create a pod that should be updated by the webhook 03/07/23 03:47:05.815 +[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/framework/framework.go:187 +Mar 7 03:47:05.835: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "webhook-2214" for this suite. 03/07/23 03:47:05.838 +STEP: Destroying namespace "webhook-2214-markers" for this suite. 
03/07/23 03:47:05.845 +[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/apimachinery/webhook.go:104 +{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","completed":263,"skipped":4626,"failed":0} +------------------------------ +• [3.630 seconds] +[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] +test/e2e/apimachinery/framework.go:23 + should mutate pod and apply defaults after mutation [Conformance] + test/e2e/apimachinery/webhook.go:263 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/framework/framework.go:186 + STEP: Creating a kubernetes client 03/07/23 03:47:02.274 + Mar 7 03:47:02.274: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 + STEP: Building a namespace api object, basename webhook 03/07/23 03:47:02.274 + STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 03:47:02.287 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 03:47:02.289 + [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/apimachinery/webhook.go:89 + STEP: Setting up server cert 03/07/23 03:47:02.301 + STEP: Create role binding to let webhook read extension-apiserver-authentication 03/07/23 03:47:02.741 + STEP: Deploying the webhook pod 03/07/23 03:47:02.747 + STEP: Wait for the deployment to be ready 03/07/23 03:47:02.755 + Mar 7 03:47:02.761: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set + STEP: Deploying the webhook service 03/07/23 03:47:04.768 + STEP: Verifying the service has paired with the endpoint 03/07/23 03:47:04.801 + Mar 7 03:47:05.801: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 + [It] should mutate pod and apply defaults after mutation [Conformance] + test/e2e/apimachinery/webhook.go:263 + STEP: Registering the mutating pod webhook via the AdmissionRegistration API 03/07/23 03:47:05.804 + STEP: create a pod that should be updated by the webhook 03/07/23 03:47:05.815 + [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/framework/framework.go:187 + Mar 7 03:47:05.835: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + STEP: Destroying namespace "webhook-2214" for this suite. 03/07/23 03:47:05.838 + STEP: Destroying namespace "webhook-2214-markers" for this suite. 
03/07/23 03:47:05.845 + [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/apimachinery/webhook.go:104 + << End Captured GinkgoWriter Output +------------------------------ +[sig-node] Lease + lease API should be available [Conformance] + test/e2e/common/node/lease.go:72 +[BeforeEach] [sig-node] Lease + test/e2e/framework/framework.go:186 +STEP: Creating a kubernetes client 03/07/23 03:47:05.904 +Mar 7 03:47:05.904: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 +STEP: Building a namespace api object, basename lease-test 03/07/23 03:47:05.905 +STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 03:47:05.933 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 03:47:05.936 +[It] lease API should be available [Conformance] + test/e2e/common/node/lease.go:72 +[AfterEach] [sig-node] Lease + test/e2e/framework/framework.go:187 +Mar 7 03:47:06.012: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "lease-test-8838" for this suite. 03/07/23 03:47:06.016 +{"msg":"PASSED [sig-node] Lease lease API should be available [Conformance]","completed":264,"skipped":4626,"failed":0} +------------------------------ +• [0.116 seconds] +[sig-node] Lease +test/e2e/common/node/framework.go:23 + lease API should be available [Conformance] + test/e2e/common/node/lease.go:72 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-node] Lease + test/e2e/framework/framework.go:186 + STEP: Creating a kubernetes client 03/07/23 03:47:05.904 + Mar 7 03:47:05.904: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 + STEP: Building a namespace api object, basename lease-test 03/07/23 03:47:05.905 + STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 03:47:05.933 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 03:47:05.936 + [It] lease API should be available [Conformance] + test/e2e/common/node/lease.go:72 + [AfterEach] [sig-node] Lease + test/e2e/framework/framework.go:187 + Mar 7 03:47:06.012: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + STEP: Destroying namespace "lease-test-8838" for this suite. 
03/07/23 03:47:06.016 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + listing mutating webhooks should work [Conformance] + test/e2e/apimachinery/webhook.go:655 +[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/framework/framework.go:186 +STEP: Creating a kubernetes client 03/07/23 03:47:06.021 +Mar 7 03:47:06.021: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 +STEP: Building a namespace api object, basename webhook 03/07/23 03:47:06.021 +STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 03:47:06.034 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 03:47:06.037 +[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/apimachinery/webhook.go:89 +STEP: Setting up server cert 03/07/23 03:47:06.051 +STEP: Create role binding to let webhook read extension-apiserver-authentication 03/07/23 03:47:06.372 +STEP: Deploying the webhook pod 03/07/23 03:47:06.377 +STEP: Wait for the deployment to be ready 03/07/23 03:47:06.385 +Mar 7 03:47:06.390: INFO: new replicaset for deployment "sample-webhook-deployment" is yet to be created +STEP: Deploying the webhook service 03/07/23 03:47:08.398 +STEP: Verifying the service has paired with the endpoint 03/07/23 03:47:08.41 +Mar 7 03:47:09.411: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 +[It] listing mutating webhooks should work [Conformance] + test/e2e/apimachinery/webhook.go:655 +STEP: Listing all of the created validation webhooks 03/07/23 03:47:09.449 +STEP: Creating a configMap that should be mutated 03/07/23 03:47:10.52 +STEP: Deleting the collection of validation webhooks 03/07/23 03:47:10.544 +STEP: Creating a configMap that should not be mutated 03/07/23 03:47:10.577 +[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/framework/framework.go:187 +Mar 7 03:47:10.584: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "webhook-2832" for this suite. 03/07/23 03:47:10.587 +STEP: Destroying namespace "webhook-2832-markers" for this suite. 
03/07/23 03:47:10.592 +[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/apimachinery/webhook.go:104 +{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]","completed":265,"skipped":4656,"failed":0} +------------------------------ +• [4.621 seconds] +[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] +test/e2e/apimachinery/framework.go:23 + listing mutating webhooks should work [Conformance] + test/e2e/apimachinery/webhook.go:655 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/framework/framework.go:186 + STEP: Creating a kubernetes client 03/07/23 03:47:06.021 + Mar 7 03:47:06.021: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 + STEP: Building a namespace api object, basename webhook 03/07/23 03:47:06.021 + STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 03:47:06.034 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 03:47:06.037 + [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/apimachinery/webhook.go:89 + STEP: Setting up server cert 03/07/23 03:47:06.051 + STEP: Create role binding to let webhook read extension-apiserver-authentication 03/07/23 03:47:06.372 + STEP: Deploying the webhook pod 03/07/23 03:47:06.377 + STEP: Wait for the deployment to be ready 03/07/23 03:47:06.385 + Mar 7 03:47:06.390: INFO: new replicaset for deployment "sample-webhook-deployment" is yet to be created + STEP: Deploying the webhook service 03/07/23 03:47:08.398 + STEP: Verifying the service has paired with the endpoint 03/07/23 03:47:08.41 + Mar 7 03:47:09.411: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 + [It] listing mutating webhooks should work [Conformance] + test/e2e/apimachinery/webhook.go:655 + STEP: Listing all of the created validation webhooks 03/07/23 03:47:09.449 + STEP: Creating a configMap that should be mutated 03/07/23 03:47:10.52 + STEP: Deleting the collection of validation webhooks 03/07/23 03:47:10.544 + STEP: Creating a configMap that should not be mutated 03/07/23 03:47:10.577 + [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/framework/framework.go:187 + Mar 7 03:47:10.584: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + STEP: Destroying namespace "webhook-2832" for this suite. 03/07/23 03:47:10.587 + STEP: Destroying namespace "webhook-2832-markers" for this suite. 
03/07/23 03:47:10.592 + [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/apimachinery/webhook.go:104 + << End Captured GinkgoWriter Output +------------------------------ +SSSS +------------------------------ +[sig-apps] Deployment + deployment should support proportional scaling [Conformance] + test/e2e/apps/deployment.go:160 +[BeforeEach] [sig-apps] Deployment + test/e2e/framework/framework.go:186 +STEP: Creating a kubernetes client 03/07/23 03:47:10.642 +Mar 7 03:47:10.642: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 +STEP: Building a namespace api object, basename deployment 03/07/23 03:47:10.643 +STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 03:47:10.685 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 03:47:10.69 +[BeforeEach] [sig-apps] Deployment + test/e2e/apps/deployment.go:91 +[It] deployment should support proportional scaling [Conformance] + test/e2e/apps/deployment.go:160 +Mar 7 03:47:10.692: INFO: Creating deployment "webserver-deployment" +Mar 7 03:47:10.699: INFO: Waiting for observed generation 1 +Mar 7 03:47:12.706: INFO: Waiting for all required pods to come up +Mar 7 03:47:12.709: INFO: Pod name httpd: Found 10 pods out of 10 +STEP: ensuring each pod is running 03/07/23 03:47:12.709 +Mar 7 03:47:12.709: INFO: Waiting up to 5m0s for pod "webserver-deployment-845c8977d9-mgr2q" in namespace "deployment-9187" to be "running" +Mar 7 03:47:12.709: INFO: Waiting up to 5m0s for pod "webserver-deployment-845c8977d9-mh7qk" in namespace "deployment-9187" to be "running" +Mar 7 03:47:12.711: INFO: Pod "webserver-deployment-845c8977d9-mh7qk": Phase="Pending", Reason="", readiness=false. Elapsed: 2.315914ms +Mar 7 03:47:12.711: INFO: Pod "webserver-deployment-845c8977d9-mgr2q": Phase="Pending", Reason="", readiness=false. Elapsed: 2.409976ms +Mar 7 03:47:14.715: INFO: Pod "webserver-deployment-845c8977d9-mh7qk": Phase="Running", Reason="", readiness=true. Elapsed: 2.005897821s +Mar 7 03:47:14.715: INFO: Pod "webserver-deployment-845c8977d9-mh7qk" satisfied condition "running" +Mar 7 03:47:14.715: INFO: Pod "webserver-deployment-845c8977d9-mgr2q": Phase="Running", Reason="", readiness=true. 
Elapsed: 2.006182846s +Mar 7 03:47:14.715: INFO: Pod "webserver-deployment-845c8977d9-mgr2q" satisfied condition "running" +Mar 7 03:47:14.715: INFO: Waiting for deployment "webserver-deployment" to complete +Mar 7 03:47:14.719: INFO: Updating deployment "webserver-deployment" with a non-existent image +Mar 7 03:47:14.725: INFO: Updating deployment webserver-deployment +Mar 7 03:47:14.725: INFO: Waiting for observed generation 2 +Mar 7 03:47:16.730: INFO: Waiting for the first rollout's replicaset to have .status.availableReplicas = 8 +Mar 7 03:47:16.733: INFO: Waiting for the first rollout's replicaset to have .spec.replicas = 8 +Mar 7 03:47:16.734: INFO: Waiting for the first rollout's replicaset of deployment "webserver-deployment" to have desired number of replicas +Mar 7 03:47:16.740: INFO: Verifying that the second rollout's replicaset has .status.availableReplicas = 0 +Mar 7 03:47:16.740: INFO: Waiting for the second rollout's replicaset to have .spec.replicas = 5 +Mar 7 03:47:16.742: INFO: Waiting for the second rollout's replicaset of deployment "webserver-deployment" to have desired number of replicas +Mar 7 03:47:16.745: INFO: Verifying that deployment "webserver-deployment" has minimum required number of available replicas +Mar 7 03:47:16.745: INFO: Scaling up the deployment "webserver-deployment" from 10 to 30 +Mar 7 03:47:16.751: INFO: Updating deployment webserver-deployment +Mar 7 03:47:16.751: INFO: Waiting for the replicasets of deployment "webserver-deployment" to have desired number of replicas +Mar 7 03:47:16.755: INFO: Verifying that first rollout's replicaset has .spec.replicas = 20 +Mar 7 03:47:18.761: INFO: Verifying that second rollout's replicaset has .spec.replicas = 13 +[AfterEach] [sig-apps] Deployment + test/e2e/apps/deployment.go:84 +Mar 7 03:47:18.782: INFO: Deployment "webserver-deployment": +&Deployment{ObjectMeta:{webserver-deployment deployment-9187 1f45928b-b49e-4834-a59a-7f0df16ce52c 70742 3 2023-03-07 03:47:10 +0000 UTC map[name:httpd] map[deployment.kubernetes.io/revision:2] [] [] [{e2e.test Update apps/v1 2023-03-07 03:47:16 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:progressDeadlineSeconds":{},"f:replicas":{},"f:revisionHistoryLimit":{},"f:selector":{},"f:strategy":{"f:rollingUpdate":{".":{},"f:maxSurge":{},"f:maxUnavailable":{}},"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} } {kube-controller-manager Update apps/v1 2023-03-07 03:47:16 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}}},"f:status":{"f:availableReplicas":{},"f:conditions":{".":{},"k:{\"type\":\"Available\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Progressing\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{},"f:unavailableReplicas":{},"f:updatedReplicas":{}}} status}]},Spec:DeploymentSpec{Replicas:*30,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 
0 0001-01-01 00:00:00 +0000 UTC map[name:httpd] map[] [] [] []} {[] [] [{httpd webserver:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc001627228 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] nil }},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:2,MaxSurge:3,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:3,Replicas:33,UpdatedReplicas:13,AvailableReplicas:8,UnavailableReplicas:25,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:False,Reason:MinimumReplicasUnavailable,Message:Deployment does not have minimum availability.,LastUpdateTime:2023-03-07 03:47:16 +0000 UTC,LastTransitionTime:2023-03-07 03:47:16 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:ReplicaSetUpdated,Message:ReplicaSet "webserver-deployment-69b7448995" is progressing.,LastUpdateTime:2023-03-07 03:47:16 +0000 UTC,LastTransitionTime:2023-03-07 03:47:10 +0000 UTC,},},ReadyReplicas:8,CollisionCount:nil,},} + +Mar 7 03:47:18.787: INFO: New ReplicaSet "webserver-deployment-69b7448995" of Deployment "webserver-deployment": +&ReplicaSet{ObjectMeta:{webserver-deployment-69b7448995 deployment-9187 1ca15a29-5d4c-4a8c-8bd1-818d43263a10 70738 3 2023-03-07 03:47:14 +0000 UTC map[name:httpd pod-template-hash:69b7448995] map[deployment.kubernetes.io/desired-replicas:30 deployment.kubernetes.io/max-replicas:33 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment webserver-deployment 1f45928b-b49e-4834-a59a-7f0df16ce52c 0xc005547337 0xc005547338}] [] [{kube-controller-manager Update apps/v1 2023-03-07 03:47:16 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"1f45928b-b49e-4834-a59a-7f0df16ce52c\"}":{}}},"f:spec":{"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} } {kube-controller-manager Update apps/v1 2023-03-07 03:47:16 +0000 UTC FieldsV1 {"f:status":{"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:replicas":{}}} status}]},Spec:ReplicaSetSpec{Replicas:*13,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: 69b7448995,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd pod-template-hash:69b7448995] map[] [] [] []} {[] [] [{httpd webserver:404 [] [] [] [] [] {map[] map[]} [] [] nil nil 
nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc0055473e8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] nil }},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:13,FullyLabeledReplicas:13,ObservedGeneration:3,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} +Mar 7 03:47:18.787: INFO: All old ReplicaSets of Deployment "webserver-deployment": +Mar 7 03:47:18.787: INFO: &ReplicaSet{ObjectMeta:{webserver-deployment-845c8977d9 deployment-9187 6094d276-df30-4138-a8b3-20de740802e0 70739 3 2023-03-07 03:47:10 +0000 UTC map[name:httpd pod-template-hash:845c8977d9] map[deployment.kubernetes.io/desired-replicas:30 deployment.kubernetes.io/max-replicas:33 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment webserver-deployment 1f45928b-b49e-4834-a59a-7f0df16ce52c 0xc005547447 0xc005547448}] [] [{kube-controller-manager Update apps/v1 2023-03-07 03:47:16 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"1f45928b-b49e-4834-a59a-7f0df16ce52c\"}":{}}},"f:spec":{"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} } {kube-controller-manager Update apps/v1 2023-03-07 03:47:16 +0000 UTC FieldsV1 {"f:status":{"f:availableReplicas":{},"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{}}} status}]},Spec:ReplicaSetSpec{Replicas:*20,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: 845c8977d9,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd pod-template-hash:845c8977d9] map[] [] [] []} {[] [] [{httpd registry.k8s.io/e2e-test-images/httpd:2.4.38-2 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc0055474d8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] nil 
}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:20,FullyLabeledReplicas:20,ObservedGeneration:3,ReadyReplicas:8,AvailableReplicas:8,Conditions:[]ReplicaSetCondition{},},} +Mar 7 03:47:18.798: INFO: Pod "webserver-deployment-69b7448995-58vct" is not available: +&Pod{ObjectMeta:{webserver-deployment-69b7448995-58vct webserver-deployment-69b7448995- deployment-9187 9cd40c98-ba23-4450-9c62-eec9acd30d75 70815 0 2023-03-07 03:47:16 +0000 UTC map[name:httpd pod-template-hash:69b7448995] map[cni.projectcalico.org/containerID:b973f48c406fe42c1a89ef4859c53940c6fec3ec342c394d00aa8978d8f0773f cni.projectcalico.org/podIP:10.233.132.117/32 cni.projectcalico.org/podIPs:10.233.132.117/32] [{apps/v1 ReplicaSet webserver-deployment-69b7448995 1ca15a29-5d4c-4a8c-8bd1-818d43263a10 0xc001627627 0xc001627628}] [] [{kube-controller-manager Update v1 2023-03-07 03:47:16 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"1ca15a29-5d4c-4a8c-8bd1-818d43263a10\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2023-03-07 03:47:16 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}} status} {calico Update v1 2023-03-07 03:47:18 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:cni.projectcalico.org/containerID":{},"f:cni.projectcalico.org/podIP":{},"f:cni.projectcalico.org/podIPs":{}}}} 
status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-p4wxf,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-p4wxf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:bootstrap,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,HostUsers:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True
,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-03-07 03:47:16 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-03-07 03:47:16 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-03-07 03:47:16 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-03-07 03:47:16 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:192.168.1.100,PodIP:,StartTime:2023-03-07 03:47:16 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} +Mar 7 03:47:18.798: INFO: Pod "webserver-deployment-69b7448995-6qq4j" is not available: +&Pod{ObjectMeta:{webserver-deployment-69b7448995-6qq4j webserver-deployment-69b7448995- deployment-9187 17bc193b-b1b5-4ade-ba6f-c14640c888be 70656 0 2023-03-07 03:47:14 +0000 UTC map[name:httpd pod-template-hash:69b7448995] map[cni.projectcalico.org/containerID:c20e183fef615b078b0bd78a51f8bbb3ab94cb59aee015fd0ae9eaaac7836703 cni.projectcalico.org/podIP:10.233.247.34/32 cni.projectcalico.org/podIPs:10.233.247.34/32] [{apps/v1 ReplicaSet webserver-deployment-69b7448995 1ca15a29-5d4c-4a8c-8bd1-818d43263a10 0xc001627857 0xc001627858}] [] [{kube-controller-manager Update v1 2023-03-07 03:47:14 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"1ca15a29-5d4c-4a8c-8bd1-818d43263a10\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {calico Update v1 2023-03-07 03:47:15 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:cni.projectcalico.org/containerID":{},"f:cni.projectcalico.org/podIP":{},"f:cni.projectcalico.org/podIPs":{}}}} status} {kubelet Update v1 2023-03-07 03:47:16 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.233.247.34\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} 
status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-zx7zz,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-zx7zz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:node-2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,HostUsers:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,La
stProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-03-07 03:47:14 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-03-07 03:47:14 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-03-07 03:47:14 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-03-07 03:47:14 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:192.168.1.102,PodIP:10.233.247.34,StartTime:2023-03-07 03:47:14 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ErrImagePull,Message:rpc error: code = Unknown desc = failed to pull and unpack image "docker.io/library/webserver:404": failed to resolve reference "docker.io/library/webserver:404": pull access denied, repository does not exist or may require authorization: server message: insufficient_scope: authorization failed,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.233.247.34,},},EphemeralContainerStatuses:[]ContainerStatus{},},} +Mar 7 03:47:18.798: INFO: Pod "webserver-deployment-69b7448995-6zjqz" is not available: +&Pod{ObjectMeta:{webserver-deployment-69b7448995-6zjqz webserver-deployment-69b7448995- deployment-9187 31b7d2be-0932-4eb2-89c2-a051633a99a3 70798 0 2023-03-07 03:47:16 +0000 UTC map[name:httpd pod-template-hash:69b7448995] map[cni.projectcalico.org/containerID:0c4a17d61ea2d3bab070b2a14a677a8e9cd4014b3410eb7070956271a8dbbcf4 cni.projectcalico.org/podIP:10.233.247.20/32 cni.projectcalico.org/podIPs:10.233.247.20/32] [{apps/v1 ReplicaSet webserver-deployment-69b7448995 1ca15a29-5d4c-4a8c-8bd1-818d43263a10 0xc001627ab7 0xc001627ab8}] [] [{kube-controller-manager Update v1 2023-03-07 03:47:16 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"1ca15a29-5d4c-4a8c-8bd1-818d43263a10\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {calico Update v1 2023-03-07 03:47:17 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:cni.projectcalico.org/containerID":{},"f:cni.projectcalico.org/podIP":{},"f:cni.projectcalico.org/podIPs":{}}}} 
status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-5625z,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-5625z,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:node-2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,HostUsers:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,L
astProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-03-07 03:47:16 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} +Mar 7 03:47:18.798: INFO: Pod "webserver-deployment-69b7448995-86chb" is not available: +&Pod{ObjectMeta:{webserver-deployment-69b7448995-86chb webserver-deployment-69b7448995- deployment-9187 59911ca8-e540-4cd2-8eaa-52936ddb5a26 70619 0 2023-03-07 03:47:14 +0000 UTC map[name:httpd pod-template-hash:69b7448995] map[cni.projectcalico.org/containerID:fa61e5ee2aca2722d0d51722f923db386f5dad7260df8b774a7118292bfc96bf cni.projectcalico.org/podIP:10.233.247.4/32 cni.projectcalico.org/podIPs:10.233.247.4/32] [{apps/v1 ReplicaSet webserver-deployment-69b7448995 1ca15a29-5d4c-4a8c-8bd1-818d43263a10 0xc001627c70 0xc001627c71}] [] [{kube-controller-manager Update v1 2023-03-07 03:47:14 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"1ca15a29-5d4c-4a8c-8bd1-818d43263a10\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2023-03-07 03:47:14 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}} status} {calico Update v1 2023-03-07 03:47:15 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:cni.projectcalico.org/containerID":{},"f:cni.projectcalico.org/podIP":{},"f:cni.projectcalico.org/podIPs":{}}}} 
status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-jrrzk,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-jrrzk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:node-2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,HostUsers:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,La
stProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-03-07 03:47:14 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-03-07 03:47:14 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-03-07 03:47:14 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-03-07 03:47:14 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:192.168.1.102,PodIP:,StartTime:2023-03-07 03:47:14 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} +Mar 7 03:47:18.798: INFO: Pod "webserver-deployment-69b7448995-clzgk" is not available: +&Pod{ObjectMeta:{webserver-deployment-69b7448995-clzgk webserver-deployment-69b7448995- deployment-9187 ea19bbbb-f039-4939-aaeb-b82c8975d1af 70760 0 2023-03-07 03:47:16 +0000 UTC map[name:httpd pod-template-hash:69b7448995] map[] [{apps/v1 ReplicaSet webserver-deployment-69b7448995 1ca15a29-5d4c-4a8c-8bd1-818d43263a10 0xc001627e77 0xc001627e78}] [] [{kube-controller-manager Update v1 2023-03-07 03:47:16 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"1ca15a29-5d4c-4a8c-8bd1-818d43263a10\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2023-03-07 03:47:17 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}} 
status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-25s7d,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-25s7d,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:bootstrap,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,HostUsers:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True
,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-03-07 03:47:16 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-03-07 03:47:16 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-03-07 03:47:16 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-03-07 03:47:16 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:192.168.1.100,PodIP:,StartTime:2023-03-07 03:47:16 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} +Mar 7 03:47:18.798: INFO: Pod "webserver-deployment-69b7448995-dhxc9" is not available: +&Pod{ObjectMeta:{webserver-deployment-69b7448995-dhxc9 webserver-deployment-69b7448995- deployment-9187 59881398-1110-4662-b017-025fea8383c5 70786 0 2023-03-07 03:47:16 +0000 UTC map[name:httpd pod-template-hash:69b7448995] map[cni.projectcalico.org/containerID:c28f702cad7cce15da2ba2b34b8606647c357f28c529c8d817e12b8a4e7384cf cni.projectcalico.org/podIP:10.233.132.119/32 cni.projectcalico.org/podIPs:10.233.132.119/32] [{apps/v1 ReplicaSet webserver-deployment-69b7448995 1ca15a29-5d4c-4a8c-8bd1-818d43263a10 0xc002586067 0xc002586068}] [] [{kube-controller-manager Update v1 2023-03-07 03:47:16 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"1ca15a29-5d4c-4a8c-8bd1-818d43263a10\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2023-03-07 03:47:16 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}} status} {calico Update v1 2023-03-07 03:47:17 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:cni.projectcalico.org/containerID":{},"f:cni.projectcalico.org/podIP":{},"f:cni.projectcalico.org/podIPs":{}}}} 
status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-ppdrz,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-ppdrz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:bootstrap,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,HostUsers:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True
,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-03-07 03:47:16 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-03-07 03:47:16 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-03-07 03:47:16 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-03-07 03:47:16 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:192.168.1.100,PodIP:,StartTime:2023-03-07 03:47:16 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} +Mar 7 03:47:18.799: INFO: Pod "webserver-deployment-69b7448995-jkqv8" is not available: +&Pod{ObjectMeta:{webserver-deployment-69b7448995-jkqv8 webserver-deployment-69b7448995- deployment-9187 97c8436b-3e22-41d4-8799-3ec089be0e7f 70851 0 2023-03-07 03:47:14 +0000 UTC map[name:httpd pod-template-hash:69b7448995] map[cni.projectcalico.org/containerID:007f3eaef5f4dfc342b6c210b42c5da165f5229906460b363276830bd918b0ab cni.projectcalico.org/podIP:10.233.84.153/32 cni.projectcalico.org/podIPs:10.233.84.153/32] [{apps/v1 ReplicaSet webserver-deployment-69b7448995 1ca15a29-5d4c-4a8c-8bd1-818d43263a10 0xc002586297 0xc002586298}] [] [{kube-controller-manager Update v1 2023-03-07 03:47:14 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"1ca15a29-5d4c-4a8c-8bd1-818d43263a10\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {calico Update v1 2023-03-07 03:47:15 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:cni.projectcalico.org/containerID":{},"f:cni.projectcalico.org/podIP":{},"f:cni.projectcalico.org/podIPs":{}}}} status} {kubelet Update v1 2023-03-07 03:47:18 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.233.84.153\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} 
status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-t7njv,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-t7njv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:node-1,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,HostUsers:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,La
stProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-03-07 03:47:14 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-03-07 03:47:14 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-03-07 03:47:14 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-03-07 03:47:14 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:192.168.1.101,PodIP:10.233.84.153,StartTime:2023-03-07 03:47:14 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ErrImagePull,Message:rpc error: code = Unknown desc = failed to pull and unpack image "docker.io/library/webserver:404": failed to resolve reference "docker.io/library/webserver:404": pull access denied, repository does not exist or may require authorization: server message: insufficient_scope: authorization failed,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.233.84.153,},},EphemeralContainerStatuses:[]ContainerStatus{},},} +Mar 7 03:47:18.799: INFO: Pod "webserver-deployment-69b7448995-kp24k" is not available: +&Pod{ObjectMeta:{webserver-deployment-69b7448995-kp24k webserver-deployment-69b7448995- deployment-9187 fe2664db-c598-480d-bc7c-e285d058f30a 70867 0 2023-03-07 03:47:14 +0000 UTC map[name:httpd pod-template-hash:69b7448995] map[cni.projectcalico.org/containerID:77052ac09d2dacad309799d95a71949ab54a853339f8826b322dddcaf112b7e8 cni.projectcalico.org/podIP:10.233.132.120/32 cni.projectcalico.org/podIPs:10.233.132.120/32] [{apps/v1 ReplicaSet webserver-deployment-69b7448995 1ca15a29-5d4c-4a8c-8bd1-818d43263a10 0xc0025864e7 0xc0025864e8}] [] [{kube-controller-manager Update v1 2023-03-07 03:47:14 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"1ca15a29-5d4c-4a8c-8bd1-818d43263a10\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {calico Update v1 2023-03-07 03:47:15 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:cni.projectcalico.org/containerID":{},"f:cni.projectcalico.org/podIP":{},"f:cni.projectcalico.org/podIPs":{}}}} status} {kubelet Update v1 2023-03-07 03:47:18 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.233.132.120\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-9r5wn,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-9r5wn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:bootstrap,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernete
s.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,HostUsers:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-03-07 03:47:14 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-03-07 03:47:14 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-03-07 03:47:14 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-03-07 03:47:14 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:192.168.1.100,PodIP:10.233.132.120,StartTime:2023-03-07 03:47:14 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ImagePullBackOff,Message:Back-off pulling image "webserver:404",},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.233.132.120,},},EphemeralContainerStatuses:[]ContainerStatus{},},} +Mar 7 03:47:18.799: INFO: Pod "webserver-deployment-69b7448995-ltbbl" is not available: +&Pod{ObjectMeta:{webserver-deployment-69b7448995-ltbbl webserver-deployment-69b7448995- deployment-9187 8230f37f-36f6-47bb-b740-4c7372b0a582 70823 0 2023-03-07 03:47:16 +0000 UTC map[name:httpd pod-template-hash:69b7448995] map[cni.projectcalico.org/containerID:210eaa041f1ff2741b09a2ff9db7fd8472c04f5888704dc8177b8ab21c9c4a8a cni.projectcalico.org/podIP:10.233.247.16/32 cni.projectcalico.org/podIPs:10.233.247.16/32] [{apps/v1 ReplicaSet webserver-deployment-69b7448995 1ca15a29-5d4c-4a8c-8bd1-818d43263a10 0xc002586747 0xc002586748}] [] [{kube-controller-manager Update v1 2023-03-07 03:47:16 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"1ca15a29-5d4c-4a8c-8bd1-818d43263a10\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {calico Update v1 2023-03-07 03:47:18 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:cni.projectcalico.org/containerID":{},"f:cni.projectcalico.org/podIP":{},"f:cni.projectcalico.org/podIPs":{}}}} 
status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-dtkzd,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-dtkzd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:node-2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,HostUsers:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,L
astProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-03-07 03:47:16 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} +Mar 7 03:47:18.799: INFO: Pod "webserver-deployment-69b7448995-p9796" is not available: +&Pod{ObjectMeta:{webserver-deployment-69b7448995-p9796 webserver-deployment-69b7448995- deployment-9187 ac9d8c91-7b08-41ad-98a2-85eb9c5aa02d 70824 0 2023-03-07 03:47:14 +0000 UTC map[name:httpd pod-template-hash:69b7448995] map[cni.projectcalico.org/containerID:ed3e4e16a72e36c56071ef67f7b5de856ce2eb87e4a1601703c7c209d657fb7a cni.projectcalico.org/podIP:10.233.84.152/32 cni.projectcalico.org/podIPs:10.233.84.152/32] [{apps/v1 ReplicaSet webserver-deployment-69b7448995 1ca15a29-5d4c-4a8c-8bd1-818d43263a10 0xc002586910 0xc002586911}] [] [{kube-controller-manager Update v1 2023-03-07 03:47:14 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"1ca15a29-5d4c-4a8c-8bd1-818d43263a10\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {calico Update v1 2023-03-07 03:47:15 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:cni.projectcalico.org/containerID":{},"f:cni.projectcalico.org/podIP":{},"f:cni.projectcalico.org/podIPs":{}}}} status} {kubelet Update v1 2023-03-07 03:47:18 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.233.84.152\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} 
status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-85rfs,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-85rfs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:node-1,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,HostUsers:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,La
stProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-03-07 03:47:14 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-03-07 03:47:14 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-03-07 03:47:14 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-03-07 03:47:14 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:192.168.1.101,PodIP:10.233.84.152,StartTime:2023-03-07 03:47:14 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ImagePullBackOff,Message:Back-off pulling image "webserver:404",},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.233.84.152,},},EphemeralContainerStatuses:[]ContainerStatus{},},} +Mar 7 03:47:18.799: INFO: Pod "webserver-deployment-69b7448995-pkg66" is not available: +&Pod{ObjectMeta:{webserver-deployment-69b7448995-pkg66 webserver-deployment-69b7448995- deployment-9187 09d26dd9-fa97-4dc8-a45c-3cd879d12dc7 70794 0 2023-03-07 03:47:16 +0000 UTC map[name:httpd pod-template-hash:69b7448995] map[cni.projectcalico.org/containerID:73c6937ca7ab6a52a30b54becbfd86e1a282ea646c304669894752a846b63af5 cni.projectcalico.org/podIP:10.233.84.157/32 cni.projectcalico.org/podIPs:10.233.84.157/32] [{apps/v1 ReplicaSet webserver-deployment-69b7448995 1ca15a29-5d4c-4a8c-8bd1-818d43263a10 0xc002586b67 0xc002586b68}] [] [{kube-controller-manager Update v1 2023-03-07 03:47:16 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"1ca15a29-5d4c-4a8c-8bd1-818d43263a10\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2023-03-07 03:47:16 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}} status} {calico Update v1 2023-03-07 03:47:17 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:cni.projectcalico.org/containerID":{},"f:cni.projectcalico.org/podIP":{},"f:cni.projectcalico.org/podIPs":{}}}} 
status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-zlmrm,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-zlmrm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:node-1,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,HostUsers:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,La
stProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-03-07 03:47:16 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-03-07 03:47:16 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-03-07 03:47:16 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-03-07 03:47:16 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:192.168.1.101,PodIP:,StartTime:2023-03-07 03:47:16 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} +Mar 7 03:47:18.799: INFO: Pod "webserver-deployment-69b7448995-x4lv2" is not available: +&Pod{ObjectMeta:{webserver-deployment-69b7448995-x4lv2 webserver-deployment-69b7448995- deployment-9187 ebc97550-5669-4101-8013-63a050bcb531 70836 0 2023-03-07 03:47:16 +0000 UTC map[name:httpd pod-template-hash:69b7448995] map[cni.projectcalico.org/containerID:876acb9b1bbf994f3be6b385391d7b9e0b9023acb350538bf1519d0ba3158f83 cni.projectcalico.org/podIP:10.233.247.29/32 cni.projectcalico.org/podIPs:10.233.247.29/32] [{apps/v1 ReplicaSet webserver-deployment-69b7448995 1ca15a29-5d4c-4a8c-8bd1-818d43263a10 0xc002586da7 0xc002586da8}] [] [{kube-controller-manager Update v1 2023-03-07 03:47:16 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"1ca15a29-5d4c-4a8c-8bd1-818d43263a10\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {calico Update v1 2023-03-07 03:47:18 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:cni.projectcalico.org/containerID":{},"f:cni.projectcalico.org/podIP":{},"f:cni.projectcalico.org/podIPs":{}}}} 
status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-zj8rw,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-zj8rw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:node-2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,HostUsers:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,L
astProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-03-07 03:47:16 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} +Mar 7 03:47:18.800: INFO: Pod "webserver-deployment-69b7448995-zlc25" is not available: +&Pod{ObjectMeta:{webserver-deployment-69b7448995-zlc25 webserver-deployment-69b7448995- deployment-9187 5ed3b644-452b-4ba4-a6ca-e0fc02e8ce27 70835 0 2023-03-07 03:47:16 +0000 UTC map[name:httpd pod-template-hash:69b7448995] map[cni.projectcalico.org/containerID:bf2aa5135dddad526e16d61712c5c27f6b521534e88c02b81d5cb03f291a6da7 cni.projectcalico.org/podIP:10.233.84.156/32 cni.projectcalico.org/podIPs:10.233.84.156/32] [{apps/v1 ReplicaSet webserver-deployment-69b7448995 1ca15a29-5d4c-4a8c-8bd1-818d43263a10 0xc0025871f0 0xc0025871f1}] [] [{kube-controller-manager Update v1 2023-03-07 03:47:16 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"1ca15a29-5d4c-4a8c-8bd1-818d43263a10\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2023-03-07 03:47:16 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}} status} {calico Update v1 2023-03-07 03:47:18 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:cni.projectcalico.org/containerID":{},"f:cni.projectcalico.org/podIP":{},"f:cni.projectcalico.org/podIPs":{}}}} 
status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-m6t52,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-m6t52,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:node-1,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,HostUsers:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,La
stProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-03-07 03:47:16 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-03-07 03:47:16 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-03-07 03:47:16 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-03-07 03:47:16 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:192.168.1.101,PodIP:,StartTime:2023-03-07 03:47:16 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} +Mar 7 03:47:18.800: INFO: Pod "webserver-deployment-845c8977d9-2brcn" is not available: +&Pod{ObjectMeta:{webserver-deployment-845c8977d9-2brcn webserver-deployment-845c8977d9- deployment-9187 de9cc60e-512e-47c1-b7d1-c1f5e844bf0e 70744 0 2023-03-07 03:47:16 +0000 UTC map[name:httpd pod-template-hash:845c8977d9] map[] [{apps/v1 ReplicaSet webserver-deployment-845c8977d9 6094d276-df30-4138-a8b3-20de740802e0 0xc0025873f7 0xc0025873f8}] [] [{kube-controller-manager Update v1 2023-03-07 03:47:16 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"6094d276-df30-4138-a8b3-20de740802e0\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2023-03-07 03:47:16 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}} 
status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-sz7sc,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:registry.k8s.io/e2e-test-images/httpd:2.4.38-2,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-sz7sc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:node-2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,HostUsers:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodConditio
n{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-03-07 03:47:16 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-03-07 03:47:16 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-03-07 03:47:16 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-03-07 03:47:16 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:192.168.1.102,PodIP:,StartTime:2023-03-07 03:47:16 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:registry.k8s.io/e2e-test-images/httpd:2.4.38-2,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} +Mar 7 03:47:18.800: INFO: Pod "webserver-deployment-845c8977d9-47fwp" is available: +&Pod{ObjectMeta:{webserver-deployment-845c8977d9-47fwp webserver-deployment-845c8977d9- deployment-9187 115e5258-261f-4b64-af4d-2a9dba3ebfc5 70526 0 2023-03-07 03:47:10 +0000 UTC map[name:httpd pod-template-hash:845c8977d9] map[cni.projectcalico.org/containerID:66374d5ba33d0c31eed53985f490e19de3d11caff3ee44c0f4c5de7b7bc1f701 cni.projectcalico.org/podIP:10.233.132.118/32 cni.projectcalico.org/podIPs:10.233.132.118/32] [{apps/v1 ReplicaSet webserver-deployment-845c8977d9 6094d276-df30-4138-a8b3-20de740802e0 0xc0025875c7 0xc0025875c8}] [] [{kube-controller-manager Update v1 2023-03-07 03:47:10 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"6094d276-df30-4138-a8b3-20de740802e0\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {calico Update v1 2023-03-07 03:47:11 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:cni.projectcalico.org/containerID":{},"f:cni.projectcalico.org/podIP":{},"f:cni.projectcalico.org/podIPs":{}}}} status} {kubelet Update v1 2023-03-07 03:47:12 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.233.132.118\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} 
status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-8tpnw,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:registry.k8s.io/e2e-test-images/httpd:2.4.38-2,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-8tpnw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:bootstrap,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,HostUsers:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondi
tion{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-03-07 03:47:10 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-03-07 03:47:12 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-03-07 03:47:12 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-03-07 03:47:10 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:192.168.1.100,PodIP:10.233.132.118,StartTime:2023-03-07 03:47:10 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2023-03-07 03:47:12 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:registry.k8s.io/e2e-test-images/httpd:2.4.38-2,ImageID:registry.k8s.io/e2e-test-images/httpd@sha256:1b9d1b2f36cb2dbee1960e82a9344aeb11bd4c4c03abf5e1853e0559c23855e3,ContainerID:containerd://97186cc7a45680f6bcf765a4cae56f66dc8e5a704f5aefb81fb948eec470686e,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.233.132.118,},},EphemeralContainerStatuses:[]ContainerStatus{},},} +Mar 7 03:47:18.800: INFO: Pod "webserver-deployment-845c8977d9-484rj" is available: +&Pod{ObjectMeta:{webserver-deployment-845c8977d9-484rj webserver-deployment-845c8977d9- deployment-9187 b62d4ecb-6674-472e-9547-c17c6d56e79f 70510 0 2023-03-07 03:47:10 +0000 UTC map[name:httpd pod-template-hash:845c8977d9] map[cni.projectcalico.org/containerID:833c195f09ffac148714e80f0f23f6279da80528ed88faee84c595f8e6387dfd cni.projectcalico.org/podIP:10.233.84.150/32 cni.projectcalico.org/podIPs:10.233.84.150/32] [{apps/v1 ReplicaSet webserver-deployment-845c8977d9 6094d276-df30-4138-a8b3-20de740802e0 0xc0025877f7 0xc0025877f8}] [] [{kube-controller-manager Update v1 2023-03-07 03:47:10 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"6094d276-df30-4138-a8b3-20de740802e0\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {calico Update v1 2023-03-07 03:47:11 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:cni.projectcalico.org/containerID":{},"f:cni.projectcalico.org/podIP":{},"f:cni.projectcalico.org/podIPs":{}}}} status} {kubelet Update v1 2023-03-07 03:47:12 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.233.84.150\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} 
status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-j97c8,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:registry.k8s.io/e2e-test-images/httpd:2.4.38-2,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-j97c8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:node-1,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,HostUsers:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodConditio
n{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-03-07 03:47:10 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-03-07 03:47:12 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-03-07 03:47:12 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-03-07 03:47:10 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:192.168.1.101,PodIP:10.233.84.150,StartTime:2023-03-07 03:47:10 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2023-03-07 03:47:11 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:registry.k8s.io/e2e-test-images/httpd:2.4.38-2,ImageID:registry.k8s.io/e2e-test-images/httpd@sha256:1b9d1b2f36cb2dbee1960e82a9344aeb11bd4c4c03abf5e1853e0559c23855e3,ContainerID:containerd://e843682dbdb3691dc26cbf0f80d3e7fc12f0f7b5602010a72cf1093046abb778,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.233.84.150,},},EphemeralContainerStatuses:[]ContainerStatus{},},} +Mar 7 03:47:18.801: INFO: Pod "webserver-deployment-845c8977d9-4p6zx" is not available: +&Pod{ObjectMeta:{webserver-deployment-845c8977d9-4p6zx webserver-deployment-845c8977d9- deployment-9187 efa43f41-4887-4592-9d48-178cb81189e4 70816 0 2023-03-07 03:47:16 +0000 UTC map[name:httpd pod-template-hash:845c8977d9] map[cni.projectcalico.org/containerID:257d6887f1061b3bb1d41f21c5ea13b2c48f8e7710a9a7eccf815f5ded2b8ab5 cni.projectcalico.org/podIP:10.233.247.21/32 cni.projectcalico.org/podIPs:10.233.247.21/32] [{apps/v1 ReplicaSet webserver-deployment-845c8977d9 6094d276-df30-4138-a8b3-20de740802e0 0xc002587a47 0xc002587a48}] [] [{kube-controller-manager Update v1 2023-03-07 03:47:16 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"6094d276-df30-4138-a8b3-20de740802e0\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2023-03-07 03:47:17 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}} status} {calico Update v1 2023-03-07 03:47:18 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:cni.projectcalico.org/containerID":{},"f:cni.projectcalico.org/podIP":{},"f:cni.projectcalico.org/podIPs":{}}}} 
status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-dgbqf,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:registry.k8s.io/e2e-test-images/httpd:2.4.38-2,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-dgbqf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:node-2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,HostUsers:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodConditio
n{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-03-07 03:47:16 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-03-07 03:47:16 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-03-07 03:47:16 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-03-07 03:47:16 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:192.168.1.102,PodIP:,StartTime:2023-03-07 03:47:16 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:registry.k8s.io/e2e-test-images/httpd:2.4.38-2,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} +Mar 7 03:47:18.801: INFO: Pod "webserver-deployment-845c8977d9-57s8m" is not available: +&Pod{ObjectMeta:{webserver-deployment-845c8977d9-57s8m webserver-deployment-845c8977d9- deployment-9187 d7db76c4-33d0-4dfe-84ec-bdbf549ca8af 70770 0 2023-03-07 03:47:16 +0000 UTC map[name:httpd pod-template-hash:845c8977d9] map[cni.projectcalico.org/containerID:882258ef6eca48de3e0aff1794e67fb434e02c4b6231273ad0507bebc73ea200 cni.projectcalico.org/podIP:10.233.84.154/32 cni.projectcalico.org/podIPs:10.233.84.154/32] [{apps/v1 ReplicaSet webserver-deployment-845c8977d9 6094d276-df30-4138-a8b3-20de740802e0 0xc002587c67 0xc002587c68}] [] [{kube-controller-manager Update v1 2023-03-07 03:47:16 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"6094d276-df30-4138-a8b3-20de740802e0\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2023-03-07 03:47:16 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}} status} {calico Update v1 2023-03-07 03:47:17 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:cni.projectcalico.org/containerID":{},"f:cni.projectcalico.org/podIP":{},"f:cni.projectcalico.org/podIPs":{}}}} 
status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-v57kw,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:registry.k8s.io/e2e-test-images/httpd:2.4.38-2,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-v57kw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:node-1,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,HostUsers:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodConditio
n{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-03-07 03:47:16 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-03-07 03:47:16 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-03-07 03:47:16 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-03-07 03:47:16 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:192.168.1.101,PodIP:,StartTime:2023-03-07 03:47:16 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:registry.k8s.io/e2e-test-images/httpd:2.4.38-2,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} +Mar 7 03:47:18.802: INFO: Pod "webserver-deployment-845c8977d9-5d6gq" is not available: +&Pod{ObjectMeta:{webserver-deployment-845c8977d9-5d6gq webserver-deployment-845c8977d9- deployment-9187 1293a6b9-de8b-4e33-b2aa-ced7baa1a8ca 70859 0 2023-03-07 03:47:16 +0000 UTC map[name:httpd pod-template-hash:845c8977d9] map[cni.projectcalico.org/containerID:4bfa98c23840bc8d43c4b0a1f7206f832ee602417ebdabca5b374ca5864ceff1 cni.projectcalico.org/podIP:10.233.132.123/32 cni.projectcalico.org/podIPs:10.233.132.123/32] [{apps/v1 ReplicaSet webserver-deployment-845c8977d9 6094d276-df30-4138-a8b3-20de740802e0 0xc002587e87 0xc002587e88}] [] [{kube-controller-manager Update v1 2023-03-07 03:47:16 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"6094d276-df30-4138-a8b3-20de740802e0\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2023-03-07 03:47:16 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}} status} {calico Update v1 2023-03-07 03:47:18 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:cni.projectcalico.org/containerID":{},"f:cni.projectcalico.org/podIP":{},"f:cni.projectcalico.org/podIPs":{}}}} 
status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-9n9cb,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:registry.k8s.io/e2e-test-images/httpd:2.4.38-2,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-9n9cb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:bootstrap,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,HostUsers:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondi
tion{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-03-07 03:47:16 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-03-07 03:47:16 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-03-07 03:47:16 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-03-07 03:47:16 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:192.168.1.100,PodIP:,StartTime:2023-03-07 03:47:16 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:registry.k8s.io/e2e-test-images/httpd:2.4.38-2,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} +Mar 7 03:47:18.802: INFO: Pod "webserver-deployment-845c8977d9-5kjwd" is not available: +&Pod{ObjectMeta:{webserver-deployment-845c8977d9-5kjwd webserver-deployment-845c8977d9- deployment-9187 9d06892b-b825-47b1-bc98-b22b90b63ef0 70809 0 2023-03-07 03:47:16 +0000 UTC map[name:httpd pod-template-hash:845c8977d9] map[cni.projectcalico.org/containerID:93efb327428a611acea98c88971d5b868a4d06b4ef24421546331791626d5424 cni.projectcalico.org/podIP:10.233.132.121/32 cni.projectcalico.org/podIPs:10.233.132.121/32] [{apps/v1 ReplicaSet webserver-deployment-845c8977d9 6094d276-df30-4138-a8b3-20de740802e0 0xc0038fe077 0xc0038fe078}] [] [{kube-controller-manager Update v1 2023-03-07 03:47:16 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"6094d276-df30-4138-a8b3-20de740802e0\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2023-03-07 03:47:16 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}} status} {calico Update v1 2023-03-07 03:47:17 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:cni.projectcalico.org/containerID":{},"f:cni.projectcalico.org/podIP":{},"f:cni.projectcalico.org/podIPs":{}}}} 
status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-q759v,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:registry.k8s.io/e2e-test-images/httpd:2.4.38-2,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-q759v,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:bootstrap,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,HostUsers:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondi
tion{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-03-07 03:47:16 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-03-07 03:47:16 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-03-07 03:47:16 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-03-07 03:47:16 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:192.168.1.100,PodIP:,StartTime:2023-03-07 03:47:16 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:registry.k8s.io/e2e-test-images/httpd:2.4.38-2,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} +Mar 7 03:47:18.802: INFO: Pod "webserver-deployment-845c8977d9-6m5w8" is not available: +&Pod{ObjectMeta:{webserver-deployment-845c8977d9-6m5w8 webserver-deployment-845c8977d9- deployment-9187 ccd4f20e-60e6-4ebb-9e56-7a4c9a9d7886 70789 0 2023-03-07 03:47:16 +0000 UTC map[name:httpd pod-template-hash:845c8977d9] map[cni.projectcalico.org/containerID:9f688d0bc77f64d1f989b41824e92beb3be31bc930e5c979ca32e854943063f5 cni.projectcalico.org/podIP:10.233.84.155/32 cni.projectcalico.org/podIPs:10.233.84.155/32] [{apps/v1 ReplicaSet webserver-deployment-845c8977d9 6094d276-df30-4138-a8b3-20de740802e0 0xc0038fe287 0xc0038fe288}] [] [{kube-controller-manager Update v1 2023-03-07 03:47:16 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"6094d276-df30-4138-a8b3-20de740802e0\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2023-03-07 03:47:16 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}} status} {calico Update v1 2023-03-07 03:47:17 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:cni.projectcalico.org/containerID":{},"f:cni.projectcalico.org/podIP":{},"f:cni.projectcalico.org/podIPs":{}}}} 
status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-xncwh,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:registry.k8s.io/e2e-test-images/httpd:2.4.38-2,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-xncwh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:node-1,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,HostUsers:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodConditio
n{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-03-07 03:47:16 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-03-07 03:47:16 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-03-07 03:47:16 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-03-07 03:47:16 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:192.168.1.101,PodIP:,StartTime:2023-03-07 03:47:16 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:registry.k8s.io/e2e-test-images/httpd:2.4.38-2,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} +Mar 7 03:47:18.802: INFO: Pod "webserver-deployment-845c8977d9-6rb2l" is not available: +&Pod{ObjectMeta:{webserver-deployment-845c8977d9-6rb2l webserver-deployment-845c8977d9- deployment-9187 c346711b-766d-4bde-b7c8-a61174d90235 70845 0 2023-03-07 03:47:16 +0000 UTC map[name:httpd pod-template-hash:845c8977d9] map[cni.projectcalico.org/containerID:695e7bd1a2fe51d6272f25ed9c9af5be7ebe30984434cbc8eff781085af27f9e cni.projectcalico.org/podIP:10.233.132.122/32 cni.projectcalico.org/podIPs:10.233.132.122/32] [{apps/v1 ReplicaSet webserver-deployment-845c8977d9 6094d276-df30-4138-a8b3-20de740802e0 0xc0038fe477 0xc0038fe478}] [] [{kube-controller-manager Update v1 2023-03-07 03:47:16 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"6094d276-df30-4138-a8b3-20de740802e0\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2023-03-07 03:47:16 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}} status} {calico Update v1 2023-03-07 03:47:18 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:cni.projectcalico.org/containerID":{},"f:cni.projectcalico.org/podIP":{},"f:cni.projectcalico.org/podIPs":{}}}} 
status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-ftm2v,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:registry.k8s.io/e2e-test-images/httpd:2.4.38-2,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-ftm2v,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:bootstrap,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,HostUsers:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondi
tion{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-03-07 03:47:16 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-03-07 03:47:16 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-03-07 03:47:16 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-03-07 03:47:16 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:192.168.1.100,PodIP:,StartTime:2023-03-07 03:47:16 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:registry.k8s.io/e2e-test-images/httpd:2.4.38-2,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} +Mar 7 03:47:18.802: INFO: Pod "webserver-deployment-845c8977d9-7q287" is available: +&Pod{ObjectMeta:{webserver-deployment-845c8977d9-7q287 webserver-deployment-845c8977d9- deployment-9187 16f31564-65b0-4a9a-93d2-19b0b83f9917 70522 0 2023-03-07 03:47:10 +0000 UTC map[name:httpd pod-template-hash:845c8977d9] map[cni.projectcalico.org/containerID:fde17c4e57edbbbd16dd0dfd85f901c952fd7fd1f4378515c7e518c183be8f0c cni.projectcalico.org/podIP:10.233.247.48/32 cni.projectcalico.org/podIPs:10.233.247.48/32] [{apps/v1 ReplicaSet webserver-deployment-845c8977d9 6094d276-df30-4138-a8b3-20de740802e0 0xc0038fe687 0xc0038fe688}] [] [{kube-controller-manager Update v1 2023-03-07 03:47:10 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"6094d276-df30-4138-a8b3-20de740802e0\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {calico Update v1 2023-03-07 03:47:11 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:cni.projectcalico.org/containerID":{},"f:cni.projectcalico.org/podIP":{},"f:cni.projectcalico.org/podIPs":{}}}} status} {kubelet Update v1 2023-03-07 03:47:12 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.233.247.48\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} 
status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-8mj5g,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:registry.k8s.io/e2e-test-images/httpd:2.4.38-2,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-8mj5g,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:node-2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,HostUsers:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodConditio
n{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-03-07 03:47:10 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-03-07 03:47:12 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-03-07 03:47:12 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-03-07 03:47:10 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:192.168.1.102,PodIP:10.233.247.48,StartTime:2023-03-07 03:47:10 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2023-03-07 03:47:12 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:registry.k8s.io/e2e-test-images/httpd:2.4.38-2,ImageID:registry.k8s.io/e2e-test-images/httpd@sha256:1b9d1b2f36cb2dbee1960e82a9344aeb11bd4c4c03abf5e1853e0559c23855e3,ContainerID:containerd://25f0b41d519d3dfc342f57f8c47757f9722d913635c7c17a2093a4b4ca2a0431,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.233.247.48,},},EphemeralContainerStatuses:[]ContainerStatus{},},} +Mar 7 03:47:18.802: INFO: Pod "webserver-deployment-845c8977d9-9stvp" is not available: +&Pod{ObjectMeta:{webserver-deployment-845c8977d9-9stvp webserver-deployment-845c8977d9- deployment-9187 edda8304-3cde-48ee-802b-e486104110a6 70718 0 2023-03-07 03:47:16 +0000 UTC map[name:httpd pod-template-hash:845c8977d9] map[] [{apps/v1 ReplicaSet webserver-deployment-845c8977d9 6094d276-df30-4138-a8b3-20de740802e0 0xc0038fe897 0xc0038fe898}] [] [{kube-controller-manager Update v1 2023-03-07 03:47:16 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"6094d276-df30-4138-a8b3-20de740802e0\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} 
}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-6kcv9,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:registry.k8s.io/e2e-test-images/httpd:2.4.38-2,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-6kcv9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:node-2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,HostUsers:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type
:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-03-07 03:47:16 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} +Mar 7 03:47:18.802: INFO: Pod "webserver-deployment-845c8977d9-f72j8" is available: +&Pod{ObjectMeta:{webserver-deployment-845c8977d9-f72j8 webserver-deployment-845c8977d9- deployment-9187 f897ef7e-9ffb-44ec-933f-514447c235f4 70518 0 2023-03-07 03:47:10 +0000 UTC map[name:httpd pod-template-hash:845c8977d9] map[cni.projectcalico.org/containerID:1a03cd6d371ab35f624c4502aa700a7408e114721780e14940490451a1840434 cni.projectcalico.org/podIP:10.233.132.115/32 cni.projectcalico.org/podIPs:10.233.132.115/32] [{apps/v1 ReplicaSet webserver-deployment-845c8977d9 6094d276-df30-4138-a8b3-20de740802e0 0xc0038fea00 0xc0038fea01}] [] [{kube-controller-manager Update v1 2023-03-07 03:47:10 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"6094d276-df30-4138-a8b3-20de740802e0\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {calico Update v1 2023-03-07 03:47:11 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:cni.projectcalico.org/containerID":{},"f:cni.projectcalico.org/podIP":{},"f:cni.projectcalico.org/podIPs":{}}}} status} {kubelet Update v1 2023-03-07 03:47:12 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.233.132.115\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} 
status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-7gknj,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:registry.k8s.io/e2e-test-images/httpd:2.4.38-2,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-7gknj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:bootstrap,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,HostUsers:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondi
tion{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-03-07 03:47:10 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-03-07 03:47:12 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-03-07 03:47:12 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-03-07 03:47:10 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:192.168.1.100,PodIP:10.233.132.115,StartTime:2023-03-07 03:47:10 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2023-03-07 03:47:11 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:registry.k8s.io/e2e-test-images/httpd:2.4.38-2,ImageID:registry.k8s.io/e2e-test-images/httpd@sha256:1b9d1b2f36cb2dbee1960e82a9344aeb11bd4c4c03abf5e1853e0559c23855e3,ContainerID:containerd://b14e1bb3c7c94ce53ee85f9b98d2c3056e206094d14c2ee9f9f8e9b5127ae3f2,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.233.132.115,},},EphemeralContainerStatuses:[]ContainerStatus{},},} +Mar 7 03:47:18.803: INFO: Pod "webserver-deployment-845c8977d9-mh7qk" is available: +&Pod{ObjectMeta:{webserver-deployment-845c8977d9-mh7qk webserver-deployment-845c8977d9- deployment-9187 e3098bd8-7621-44e2-90bf-2243a7f23992 70538 0 2023-03-07 03:47:10 +0000 UTC map[name:httpd pod-template-hash:845c8977d9] map[cni.projectcalico.org/containerID:0a92ea4e8d9a5a90644775bc1495c919f963c7671c5612f4904aa79cdc81d46b cni.projectcalico.org/podIP:10.233.247.9/32 cni.projectcalico.org/podIPs:10.233.247.9/32] [{apps/v1 ReplicaSet webserver-deployment-845c8977d9 6094d276-df30-4138-a8b3-20de740802e0 0xc0038fec27 0xc0038fec28}] [] [{kube-controller-manager Update v1 2023-03-07 03:47:10 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"6094d276-df30-4138-a8b3-20de740802e0\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {calico Update v1 2023-03-07 03:47:11 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:cni.projectcalico.org/containerID":{},"f:cni.projectcalico.org/podIP":{},"f:cni.projectcalico.org/podIPs":{}}}} status} {kubelet Update v1 2023-03-07 03:47:13 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.233.247.9\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} 
status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-f4vmz,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:registry.k8s.io/e2e-test-images/httpd:2.4.38-2,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-f4vmz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:node-2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,HostUsers:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodConditio
n{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-03-07 03:47:10 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-03-07 03:47:12 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-03-07 03:47:12 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-03-07 03:47:10 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:192.168.1.102,PodIP:10.233.247.9,StartTime:2023-03-07 03:47:10 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2023-03-07 03:47:11 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:registry.k8s.io/e2e-test-images/httpd:2.4.38-2,ImageID:registry.k8s.io/e2e-test-images/httpd@sha256:1b9d1b2f36cb2dbee1960e82a9344aeb11bd4c4c03abf5e1853e0559c23855e3,ContainerID:containerd://4bcd900d1afdb3eba41e426f3d2333154956651f4965143da3bdf4ce6a5a4194,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.233.247.9,},},EphemeralContainerStatuses:[]ContainerStatus{},},} +Mar 7 03:47:18.803: INFO: Pod "webserver-deployment-845c8977d9-mv4d9" is available: +&Pod{ObjectMeta:{webserver-deployment-845c8977d9-mv4d9 webserver-deployment-845c8977d9- deployment-9187 6b20e96e-8180-4ce7-88ed-3b7e01aaf05b 70521 0 2023-03-07 03:47:10 +0000 UTC map[name:httpd pod-template-hash:845c8977d9] map[cni.projectcalico.org/containerID:c4255ac6acfdc086f8394d4bdb66525dedbda62a99eb3948f36db677178861e2 cni.projectcalico.org/podIP:10.233.132.112/32 cni.projectcalico.org/podIPs:10.233.132.112/32] [{apps/v1 ReplicaSet webserver-deployment-845c8977d9 6094d276-df30-4138-a8b3-20de740802e0 0xc0038fee47 0xc0038fee48}] [] [{kube-controller-manager Update v1 2023-03-07 03:47:10 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"6094d276-df30-4138-a8b3-20de740802e0\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {calico Update v1 2023-03-07 03:47:11 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:cni.projectcalico.org/containerID":{},"f:cni.projectcalico.org/podIP":{},"f:cni.projectcalico.org/podIPs":{}}}} status} {kubelet Update v1 2023-03-07 03:47:12 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.233.132.112\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} 
status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-xcc2f,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:registry.k8s.io/e2e-test-images/httpd:2.4.38-2,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-xcc2f,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:bootstrap,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,HostUsers:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondi
tion{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-03-07 03:47:10 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-03-07 03:47:12 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-03-07 03:47:12 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-03-07 03:47:10 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:192.168.1.100,PodIP:10.233.132.112,StartTime:2023-03-07 03:47:10 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2023-03-07 03:47:11 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:registry.k8s.io/e2e-test-images/httpd:2.4.38-2,ImageID:registry.k8s.io/e2e-test-images/httpd@sha256:1b9d1b2f36cb2dbee1960e82a9344aeb11bd4c4c03abf5e1853e0559c23855e3,ContainerID:containerd://2241c91a9315bab7c1863f199d7a64ab7e0e109c54e6f19592431498c6ae44f5,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.233.132.112,},},EphemeralContainerStatuses:[]ContainerStatus{},},} +Mar 7 03:47:18.803: INFO: Pod "webserver-deployment-845c8977d9-n965x" is not available: +&Pod{ObjectMeta:{webserver-deployment-845c8977d9-n965x webserver-deployment-845c8977d9- deployment-9187 3f22b304-31cc-449b-a9c0-4f79c3be72a7 70855 0 2023-03-07 03:47:16 +0000 UTC map[name:httpd pod-template-hash:845c8977d9] map[cni.projectcalico.org/containerID:e2afd21effb856ed467f7c5ad585648c379f330347ee3ed2cbbfcd61c2379a10 cni.projectcalico.org/podIP:10.233.247.13/32 cni.projectcalico.org/podIPs:10.233.247.13/32] [{apps/v1 ReplicaSet webserver-deployment-845c8977d9 6094d276-df30-4138-a8b3-20de740802e0 0xc0038ff077 0xc0038ff078}] [] [{kube-controller-manager Update v1 2023-03-07 03:47:16 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"6094d276-df30-4138-a8b3-20de740802e0\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2023-03-07 03:47:16 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}} status} {calico Update v1 2023-03-07 03:47:18 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:cni.projectcalico.org/containerID":{},"f:cni.projectcalico.org/podIP":{},"f:cni.projectcalico.org/podIPs":{}}}} 
status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-p4nw9,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:registry.k8s.io/e2e-test-images/httpd:2.4.38-2,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-p4nw9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:node-2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,HostUsers:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodConditio
n{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-03-07 03:47:16 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-03-07 03:47:16 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-03-07 03:47:16 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-03-07 03:47:16 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:192.168.1.102,PodIP:,StartTime:2023-03-07 03:47:16 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:registry.k8s.io/e2e-test-images/httpd:2.4.38-2,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} +Mar 7 03:47:18.803: INFO: Pod "webserver-deployment-845c8977d9-qc56s" is available: +&Pod{ObjectMeta:{webserver-deployment-845c8977d9-qc56s webserver-deployment-845c8977d9- deployment-9187 32f5c1d7-9423-46df-bb61-548b2efaa593 70512 0 2023-03-07 03:47:10 +0000 UTC map[name:httpd pod-template-hash:845c8977d9] map[cni.projectcalico.org/containerID:4ea1b35bb5869261250780c0a4a175a11813eed3e186511677b9132d955c6069 cni.projectcalico.org/podIP:10.233.84.142/32 cni.projectcalico.org/podIPs:10.233.84.142/32] [{apps/v1 ReplicaSet webserver-deployment-845c8977d9 6094d276-df30-4138-a8b3-20de740802e0 0xc0038ff2a7 0xc0038ff2a8}] [] [{kube-controller-manager Update v1 2023-03-07 03:47:10 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"6094d276-df30-4138-a8b3-20de740802e0\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {calico Update v1 2023-03-07 03:47:11 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:cni.projectcalico.org/containerID":{},"f:cni.projectcalico.org/podIP":{},"f:cni.projectcalico.org/podIPs":{}}}} status} {kubelet Update v1 2023-03-07 03:47:12 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.233.84.142\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} 
status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-55f6f,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:registry.k8s.io/e2e-test-images/httpd:2.4.38-2,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-55f6f,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:node-1,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,HostUsers:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodConditio
n{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-03-07 03:47:10 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-03-07 03:47:12 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-03-07 03:47:12 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-03-07 03:47:10 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:192.168.1.101,PodIP:10.233.84.142,StartTime:2023-03-07 03:47:10 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2023-03-07 03:47:11 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:registry.k8s.io/e2e-test-images/httpd:2.4.38-2,ImageID:registry.k8s.io/e2e-test-images/httpd@sha256:1b9d1b2f36cb2dbee1960e82a9344aeb11bd4c4c03abf5e1853e0559c23855e3,ContainerID:containerd://0329465089502351bce4573288e527dc806e6b94de3828d7b064458843f94984,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.233.84.142,},},EphemeralContainerStatuses:[]ContainerStatus{},},} +Mar 7 03:47:18.803: INFO: Pod "webserver-deployment-845c8977d9-v8lw5" is not available: +&Pod{ObjectMeta:{webserver-deployment-845c8977d9-v8lw5 webserver-deployment-845c8977d9- deployment-9187 ebbe9b8c-358a-467c-84ea-5563924d85b2 70808 0 2023-03-07 03:47:16 +0000 UTC map[name:httpd pod-template-hash:845c8977d9] map[cni.projectcalico.org/containerID:c7ac8d353d5fce5189909109186ebccae914abdaf9fc43618d9cfee109f9ed4d cni.projectcalico.org/podIP:10.233.84.158/32 cni.projectcalico.org/podIPs:10.233.84.158/32] [{apps/v1 ReplicaSet webserver-deployment-845c8977d9 6094d276-df30-4138-a8b3-20de740802e0 0xc0038ff4d7 0xc0038ff4d8}] [] [{kube-controller-manager Update v1 2023-03-07 03:47:16 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"6094d276-df30-4138-a8b3-20de740802e0\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2023-03-07 03:47:16 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}} status} {calico Update v1 2023-03-07 03:47:17 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:cni.projectcalico.org/containerID":{},"f:cni.projectcalico.org/podIP":{},"f:cni.projectcalico.org/podIPs":{}}}} 
status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-nrfgq,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:registry.k8s.io/e2e-test-images/httpd:2.4.38-2,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-nrfgq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:node-1,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,HostUsers:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodConditio
n{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-03-07 03:47:16 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-03-07 03:47:16 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-03-07 03:47:16 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-03-07 03:47:16 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:192.168.1.101,PodIP:,StartTime:2023-03-07 03:47:16 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:registry.k8s.io/e2e-test-images/httpd:2.4.38-2,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} +Mar 7 03:47:18.803: INFO: Pod "webserver-deployment-845c8977d9-wl9k4" is available: +&Pod{ObjectMeta:{webserver-deployment-845c8977d9-wl9k4 webserver-deployment-845c8977d9- deployment-9187 60886344-b31a-4301-9329-84b41c8a333e 70507 0 2023-03-07 03:47:10 +0000 UTC map[name:httpd pod-template-hash:845c8977d9] map[cni.projectcalico.org/containerID:94d131e78d9b135c759b20a2e8d6c3e64ca3922c11fd0c4677ec8b6a170958d3 cni.projectcalico.org/podIP:10.233.84.151/32 cni.projectcalico.org/podIPs:10.233.84.151/32] [{apps/v1 ReplicaSet webserver-deployment-845c8977d9 6094d276-df30-4138-a8b3-20de740802e0 0xc0038ff6e7 0xc0038ff6e8}] [] [{kube-controller-manager Update v1 2023-03-07 03:47:10 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"6094d276-df30-4138-a8b3-20de740802e0\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {calico Update v1 2023-03-07 03:47:11 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:cni.projectcalico.org/containerID":{},"f:cni.projectcalico.org/podIP":{},"f:cni.projectcalico.org/podIPs":{}}}} status} {kubelet Update v1 2023-03-07 03:47:12 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.233.84.151\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} 
status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-lqx4n,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:registry.k8s.io/e2e-test-images/httpd:2.4.38-2,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-lqx4n,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:node-1,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,HostUsers:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodConditio
n{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-03-07 03:47:10 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-03-07 03:47:12 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-03-07 03:47:12 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-03-07 03:47:10 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:192.168.1.101,PodIP:10.233.84.151,StartTime:2023-03-07 03:47:10 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2023-03-07 03:47:11 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:registry.k8s.io/e2e-test-images/httpd:2.4.38-2,ImageID:registry.k8s.io/e2e-test-images/httpd@sha256:1b9d1b2f36cb2dbee1960e82a9344aeb11bd4c4c03abf5e1853e0559c23855e3,ContainerID:containerd://7cbd919d7e1af368dccbf97b253a323a48aab5fcb6aa5b1fc492ffbee2a66c83,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.233.84.151,},},EphemeralContainerStatuses:[]ContainerStatus{},},} +Mar 7 03:47:18.804: INFO: Pod "webserver-deployment-845c8977d9-zchmf" is not available: +&Pod{ObjectMeta:{webserver-deployment-845c8977d9-zchmf webserver-deployment-845c8977d9- deployment-9187 7bb98568-d378-4319-b1d3-ccca28dbba19 70711 0 2023-03-07 03:47:16 +0000 UTC map[name:httpd pod-template-hash:845c8977d9] map[] [{apps/v1 ReplicaSet webserver-deployment-845c8977d9 6094d276-df30-4138-a8b3-20de740802e0 0xc0038ff8f7 0xc0038ff8f8}] [] [{kube-controller-manager Update v1 2023-03-07 03:47:16 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"6094d276-df30-4138-a8b3-20de740802e0\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} 
}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-cz4wb,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:registry.k8s.io/e2e-test-images/httpd:2.4.38-2,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-cz4wb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:node-2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,HostUsers:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type
:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-03-07 03:47:16 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} +Mar 7 03:47:18.804: INFO: Pod "webserver-deployment-845c8977d9-znfcc" is not available: +&Pod{ObjectMeta:{webserver-deployment-845c8977d9-znfcc webserver-deployment-845c8977d9- deployment-9187 241a8173-0aba-4208-a868-4efa2d447cb4 70737 0 2023-03-07 03:47:16 +0000 UTC map[name:httpd pod-template-hash:845c8977d9] map[] [{apps/v1 ReplicaSet webserver-deployment-845c8977d9 6094d276-df30-4138-a8b3-20de740802e0 0xc0038ffa60 0xc0038ffa61}] [] [{kube-controller-manager Update v1 2023-03-07 03:47:16 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"6094d276-df30-4138-a8b3-20de740802e0\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2023-03-07 03:47:16 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}} 
status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-v5hvm,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:registry.k8s.io/e2e-test-images/httpd:2.4.38-2,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-v5hvm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:node-2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,HostUsers:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodConditio
n{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-03-07 03:47:16 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-03-07 03:47:16 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-03-07 03:47:16 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-03-07 03:47:16 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:192.168.1.102,PodIP:,StartTime:2023-03-07 03:47:16 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:registry.k8s.io/e2e-test-images/httpd:2.4.38-2,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} +[AfterEach] [sig-apps] Deployment + test/e2e/framework/framework.go:187 +Mar 7 03:47:18.804: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "deployment-9187" for this suite. 03/07/23 03:47:18.817 +{"msg":"PASSED [sig-apps] Deployment deployment should support proportional scaling [Conformance]","completed":266,"skipped":4660,"failed":0} +------------------------------ +• [SLOW TEST] [8.190 seconds] +[sig-apps] Deployment +test/e2e/apps/framework.go:23 + deployment should support proportional scaling [Conformance] + test/e2e/apps/deployment.go:160 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-apps] Deployment + test/e2e/framework/framework.go:186 + STEP: Creating a kubernetes client 03/07/23 03:47:10.642 + Mar 7 03:47:10.642: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 + STEP: Building a namespace api object, basename deployment 03/07/23 03:47:10.643 + STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 03:47:10.685 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 03:47:10.69 + [BeforeEach] [sig-apps] Deployment + test/e2e/apps/deployment.go:91 + [It] deployment should support proportional scaling [Conformance] + test/e2e/apps/deployment.go:160 + Mar 7 03:47:10.692: INFO: Creating deployment "webserver-deployment" + Mar 7 03:47:10.699: INFO: Waiting for observed generation 1 + Mar 7 03:47:12.706: INFO: Waiting for all required pods to come up + Mar 7 03:47:12.709: INFO: Pod name httpd: Found 10 pods out of 10 + STEP: ensuring each pod is running 03/07/23 03:47:12.709 + Mar 7 03:47:12.709: INFO: Waiting up to 5m0s for pod "webserver-deployment-845c8977d9-mgr2q" in namespace "deployment-9187" to be "running" + Mar 7 03:47:12.709: INFO: Waiting up to 5m0s for pod "webserver-deployment-845c8977d9-mh7qk" in namespace "deployment-9187" to be "running" + Mar 7 03:47:12.711: INFO: Pod "webserver-deployment-845c8977d9-mh7qk": Phase="Pending", Reason="", readiness=false. Elapsed: 2.315914ms + Mar 7 03:47:12.711: INFO: Pod "webserver-deployment-845c8977d9-mgr2q": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.409976ms + Mar 7 03:47:14.715: INFO: Pod "webserver-deployment-845c8977d9-mh7qk": Phase="Running", Reason="", readiness=true. Elapsed: 2.005897821s + Mar 7 03:47:14.715: INFO: Pod "webserver-deployment-845c8977d9-mh7qk" satisfied condition "running" + Mar 7 03:47:14.715: INFO: Pod "webserver-deployment-845c8977d9-mgr2q": Phase="Running", Reason="", readiness=true. Elapsed: 2.006182846s + Mar 7 03:47:14.715: INFO: Pod "webserver-deployment-845c8977d9-mgr2q" satisfied condition "running" + Mar 7 03:47:14.715: INFO: Waiting for deployment "webserver-deployment" to complete + Mar 7 03:47:14.719: INFO: Updating deployment "webserver-deployment" with a non-existent image + Mar 7 03:47:14.725: INFO: Updating deployment webserver-deployment + Mar 7 03:47:14.725: INFO: Waiting for observed generation 2 + Mar 7 03:47:16.730: INFO: Waiting for the first rollout's replicaset to have .status.availableReplicas = 8 + Mar 7 03:47:16.733: INFO: Waiting for the first rollout's replicaset to have .spec.replicas = 8 + Mar 7 03:47:16.734: INFO: Waiting for the first rollout's replicaset of deployment "webserver-deployment" to have desired number of replicas + Mar 7 03:47:16.740: INFO: Verifying that the second rollout's replicaset has .status.availableReplicas = 0 + Mar 7 03:47:16.740: INFO: Waiting for the second rollout's replicaset to have .spec.replicas = 5 + Mar 7 03:47:16.742: INFO: Waiting for the second rollout's replicaset of deployment "webserver-deployment" to have desired number of replicas + Mar 7 03:47:16.745: INFO: Verifying that deployment "webserver-deployment" has minimum required number of available replicas + Mar 7 03:47:16.745: INFO: Scaling up the deployment "webserver-deployment" from 10 to 30 + Mar 7 03:47:16.751: INFO: Updating deployment webserver-deployment + Mar 7 03:47:16.751: INFO: Waiting for the replicasets of deployment "webserver-deployment" to have desired number of replicas + Mar 7 03:47:16.755: INFO: Verifying that first rollout's replicaset has .spec.replicas = 20 + Mar 7 03:47:18.761: INFO: Verifying that second rollout's replicaset has .spec.replicas = 13 + [AfterEach] [sig-apps] Deployment + test/e2e/apps/deployment.go:84 + Mar 7 03:47:18.782: INFO: Deployment "webserver-deployment": + &Deployment{ObjectMeta:{webserver-deployment deployment-9187 1f45928b-b49e-4834-a59a-7f0df16ce52c 70742 3 2023-03-07 03:47:10 +0000 UTC map[name:httpd] map[deployment.kubernetes.io/revision:2] [] [] [{e2e.test Update apps/v1 2023-03-07 03:47:16 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:progressDeadlineSeconds":{},"f:replicas":{},"f:revisionHistoryLimit":{},"f:selector":{},"f:strategy":{"f:rollingUpdate":{".":{},"f:maxSurge":{},"f:maxUnavailable":{}},"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} } {kube-controller-manager Update apps/v1 2023-03-07 03:47:16 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}}},"f:status":{"f:availableReplicas":{},"f:conditions":{".":{},"k:{\"type\":\"Available\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Progressing\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{},"f:unavailableReplicas":{},"f:updatedReplicas":{}}} status}]},Spec:DeploymentSpec{Replicas:*30,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd] map[] [] [] []} {[] [] [{httpd webserver:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc001627228 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] nil }},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:2,MaxSurge:3,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:3,Replicas:33,UpdatedReplicas:13,AvailableReplicas:8,UnavailableReplicas:25,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:False,Reason:MinimumReplicasUnavailable,Message:Deployment does not have minimum availability.,LastUpdateTime:2023-03-07 03:47:16 +0000 UTC,LastTransitionTime:2023-03-07 03:47:16 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:ReplicaSetUpdated,Message:ReplicaSet "webserver-deployment-69b7448995" is progressing.,LastUpdateTime:2023-03-07 03:47:16 +0000 UTC,LastTransitionTime:2023-03-07 03:47:10 +0000 UTC,},},ReadyReplicas:8,CollisionCount:nil,},} + + Mar 7 03:47:18.787: INFO: New ReplicaSet "webserver-deployment-69b7448995" of Deployment "webserver-deployment": + &ReplicaSet{ObjectMeta:{webserver-deployment-69b7448995 deployment-9187 1ca15a29-5d4c-4a8c-8bd1-818d43263a10 70738 3 2023-03-07 03:47:14 +0000 UTC map[name:httpd pod-template-hash:69b7448995] map[deployment.kubernetes.io/desired-replicas:30 deployment.kubernetes.io/max-replicas:33 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment webserver-deployment 1f45928b-b49e-4834-a59a-7f0df16ce52c 0xc005547337 0xc005547338}] [] [{kube-controller-manager Update apps/v1 2023-03-07 03:47:16 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"1f45928b-b49e-4834-a59a-7f0df16ce52c\"}":{}}},"f:spec":{"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} } {kube-controller-manager Update apps/v1 2023-03-07 03:47:16 +0000 UTC FieldsV1 {"f:status":{"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:replicas":{}}} status}]},Spec:ReplicaSetSpec{Replicas:*13,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: 69b7448995,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd pod-template-hash:69b7448995] map[] [] [] []} {[] [] [{httpd webserver:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc0055473e8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] nil }},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:13,FullyLabeledReplicas:13,ObservedGeneration:3,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} + Mar 7 03:47:18.787: INFO: All old ReplicaSets of Deployment "webserver-deployment": + Mar 7 03:47:18.787: INFO: &ReplicaSet{ObjectMeta:{webserver-deployment-845c8977d9 deployment-9187 6094d276-df30-4138-a8b3-20de740802e0 70739 3 2023-03-07 03:47:10 +0000 UTC map[name:httpd pod-template-hash:845c8977d9] map[deployment.kubernetes.io/desired-replicas:30 deployment.kubernetes.io/max-replicas:33 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment webserver-deployment 1f45928b-b49e-4834-a59a-7f0df16ce52c 0xc005547447 0xc005547448}] [] [{kube-controller-manager Update apps/v1 2023-03-07 03:47:16 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"1f45928b-b49e-4834-a59a-7f0df16ce52c\"}":{}}},"f:spec":{"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} } {kube-controller-manager Update apps/v1 
2023-03-07 03:47:16 +0000 UTC FieldsV1 {"f:status":{"f:availableReplicas":{},"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{}}} status}]},Spec:ReplicaSetSpec{Replicas:*20,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: 845c8977d9,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd pod-template-hash:845c8977d9] map[] [] [] []} {[] [] [{httpd registry.k8s.io/e2e-test-images/httpd:2.4.38-2 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc0055474d8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] nil }},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:20,FullyLabeledReplicas:20,ObservedGeneration:3,ReadyReplicas:8,AvailableReplicas:8,Conditions:[]ReplicaSetCondition{},},} + Mar 7 03:47:18.798: INFO: Pod "webserver-deployment-69b7448995-58vct" is not available: + &Pod{ObjectMeta:{webserver-deployment-69b7448995-58vct webserver-deployment-69b7448995- deployment-9187 9cd40c98-ba23-4450-9c62-eec9acd30d75 70815 0 2023-03-07 03:47:16 +0000 UTC map[name:httpd pod-template-hash:69b7448995] map[cni.projectcalico.org/containerID:b973f48c406fe42c1a89ef4859c53940c6fec3ec342c394d00aa8978d8f0773f cni.projectcalico.org/podIP:10.233.132.117/32 cni.projectcalico.org/podIPs:10.233.132.117/32] [{apps/v1 ReplicaSet webserver-deployment-69b7448995 1ca15a29-5d4c-4a8c-8bd1-818d43263a10 0xc001627627 0xc001627628}] [] [{kube-controller-manager Update v1 2023-03-07 03:47:16 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"1ca15a29-5d4c-4a8c-8bd1-818d43263a10\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2023-03-07 03:47:16 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}} status} {calico Update v1 2023-03-07 03:47:18 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:cni.projectcalico.org/containerID":{},"f:cni.projectcalico.org/podIP":{},"f:cni.projectcalico.org/podIPs":{}}}} 
status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-p4wxf,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-p4wxf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:bootstrap,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,HostUsers:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True
,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-03-07 03:47:16 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-03-07 03:47:16 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-03-07 03:47:16 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-03-07 03:47:16 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:192.168.1.100,PodIP:,StartTime:2023-03-07 03:47:16 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} + Mar 7 03:47:18.798: INFO: Pod "webserver-deployment-69b7448995-6qq4j" is not available: + &Pod{ObjectMeta:{webserver-deployment-69b7448995-6qq4j webserver-deployment-69b7448995- deployment-9187 17bc193b-b1b5-4ade-ba6f-c14640c888be 70656 0 2023-03-07 03:47:14 +0000 UTC map[name:httpd pod-template-hash:69b7448995] map[cni.projectcalico.org/containerID:c20e183fef615b078b0bd78a51f8bbb3ab94cb59aee015fd0ae9eaaac7836703 cni.projectcalico.org/podIP:10.233.247.34/32 cni.projectcalico.org/podIPs:10.233.247.34/32] [{apps/v1 ReplicaSet webserver-deployment-69b7448995 1ca15a29-5d4c-4a8c-8bd1-818d43263a10 0xc001627857 0xc001627858}] [] [{kube-controller-manager Update v1 2023-03-07 03:47:14 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"1ca15a29-5d4c-4a8c-8bd1-818d43263a10\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {calico Update v1 2023-03-07 03:47:15 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:cni.projectcalico.org/containerID":{},"f:cni.projectcalico.org/podIP":{},"f:cni.projectcalico.org/podIPs":{}}}} status} {kubelet Update v1 2023-03-07 03:47:16 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.233.247.34\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} 
status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-zx7zz,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-zx7zz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:node-2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,HostUsers:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,La
stProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-03-07 03:47:14 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-03-07 03:47:14 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-03-07 03:47:14 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-03-07 03:47:14 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:192.168.1.102,PodIP:10.233.247.34,StartTime:2023-03-07 03:47:14 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ErrImagePull,Message:rpc error: code = Unknown desc = failed to pull and unpack image "docker.io/library/webserver:404": failed to resolve reference "docker.io/library/webserver:404": pull access denied, repository does not exist or may require authorization: server message: insufficient_scope: authorization failed,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.233.247.34,},},EphemeralContainerStatuses:[]ContainerStatus{},},} + Mar 7 03:47:18.798: INFO: Pod "webserver-deployment-69b7448995-6zjqz" is not available: + &Pod{ObjectMeta:{webserver-deployment-69b7448995-6zjqz webserver-deployment-69b7448995- deployment-9187 31b7d2be-0932-4eb2-89c2-a051633a99a3 70798 0 2023-03-07 03:47:16 +0000 UTC map[name:httpd pod-template-hash:69b7448995] map[cni.projectcalico.org/containerID:0c4a17d61ea2d3bab070b2a14a677a8e9cd4014b3410eb7070956271a8dbbcf4 cni.projectcalico.org/podIP:10.233.247.20/32 cni.projectcalico.org/podIPs:10.233.247.20/32] [{apps/v1 ReplicaSet webserver-deployment-69b7448995 1ca15a29-5d4c-4a8c-8bd1-818d43263a10 0xc001627ab7 0xc001627ab8}] [] [{kube-controller-manager Update v1 2023-03-07 03:47:16 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"1ca15a29-5d4c-4a8c-8bd1-818d43263a10\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {calico Update v1 2023-03-07 03:47:17 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:cni.projectcalico.org/containerID":{},"f:cni.projectcalico.org/podIP":{},"f:cni.projectcalico.org/podIPs":{}}}} 
status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-5625z,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-5625z,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:node-2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,HostUsers:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,L
astProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-03-07 03:47:16 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} + Mar 7 03:47:18.798: INFO: Pod "webserver-deployment-69b7448995-86chb" is not available: + &Pod{ObjectMeta:{webserver-deployment-69b7448995-86chb webserver-deployment-69b7448995- deployment-9187 59911ca8-e540-4cd2-8eaa-52936ddb5a26 70619 0 2023-03-07 03:47:14 +0000 UTC map[name:httpd pod-template-hash:69b7448995] map[cni.projectcalico.org/containerID:fa61e5ee2aca2722d0d51722f923db386f5dad7260df8b774a7118292bfc96bf cni.projectcalico.org/podIP:10.233.247.4/32 cni.projectcalico.org/podIPs:10.233.247.4/32] [{apps/v1 ReplicaSet webserver-deployment-69b7448995 1ca15a29-5d4c-4a8c-8bd1-818d43263a10 0xc001627c70 0xc001627c71}] [] [{kube-controller-manager Update v1 2023-03-07 03:47:14 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"1ca15a29-5d4c-4a8c-8bd1-818d43263a10\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2023-03-07 03:47:14 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}} status} {calico Update v1 2023-03-07 03:47:15 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:cni.projectcalico.org/containerID":{},"f:cni.projectcalico.org/podIP":{},"f:cni.projectcalico.org/podIPs":{}}}} 
status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-jrrzk,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-jrrzk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:node-2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,HostUsers:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,La
stProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-03-07 03:47:14 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-03-07 03:47:14 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-03-07 03:47:14 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-03-07 03:47:14 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:192.168.1.102,PodIP:,StartTime:2023-03-07 03:47:14 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} + Mar 7 03:47:18.798: INFO: Pod "webserver-deployment-69b7448995-clzgk" is not available: + &Pod{ObjectMeta:{webserver-deployment-69b7448995-clzgk webserver-deployment-69b7448995- deployment-9187 ea19bbbb-f039-4939-aaeb-b82c8975d1af 70760 0 2023-03-07 03:47:16 +0000 UTC map[name:httpd pod-template-hash:69b7448995] map[] [{apps/v1 ReplicaSet webserver-deployment-69b7448995 1ca15a29-5d4c-4a8c-8bd1-818d43263a10 0xc001627e77 0xc001627e78}] [] [{kube-controller-manager Update v1 2023-03-07 03:47:16 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"1ca15a29-5d4c-4a8c-8bd1-818d43263a10\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2023-03-07 03:47:17 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}} 
status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-25s7d,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-25s7d,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:bootstrap,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,HostUsers:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True
,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-03-07 03:47:16 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-03-07 03:47:16 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-03-07 03:47:16 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-03-07 03:47:16 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:192.168.1.100,PodIP:,StartTime:2023-03-07 03:47:16 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} + Mar 7 03:47:18.798: INFO: Pod "webserver-deployment-69b7448995-dhxc9" is not available: + &Pod{ObjectMeta:{webserver-deployment-69b7448995-dhxc9 webserver-deployment-69b7448995- deployment-9187 59881398-1110-4662-b017-025fea8383c5 70786 0 2023-03-07 03:47:16 +0000 UTC map[name:httpd pod-template-hash:69b7448995] map[cni.projectcalico.org/containerID:c28f702cad7cce15da2ba2b34b8606647c357f28c529c8d817e12b8a4e7384cf cni.projectcalico.org/podIP:10.233.132.119/32 cni.projectcalico.org/podIPs:10.233.132.119/32] [{apps/v1 ReplicaSet webserver-deployment-69b7448995 1ca15a29-5d4c-4a8c-8bd1-818d43263a10 0xc002586067 0xc002586068}] [] [{kube-controller-manager Update v1 2023-03-07 03:47:16 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"1ca15a29-5d4c-4a8c-8bd1-818d43263a10\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2023-03-07 03:47:16 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}} status} {calico Update v1 2023-03-07 03:47:17 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:cni.projectcalico.org/containerID":{},"f:cni.projectcalico.org/podIP":{},"f:cni.projectcalico.org/podIPs":{}}}} 
status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-ppdrz,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-ppdrz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:bootstrap,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,HostUsers:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True
,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-03-07 03:47:16 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-03-07 03:47:16 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-03-07 03:47:16 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-03-07 03:47:16 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:192.168.1.100,PodIP:,StartTime:2023-03-07 03:47:16 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} + Mar 7 03:47:18.799: INFO: Pod "webserver-deployment-69b7448995-jkqv8" is not available: + &Pod{ObjectMeta:{webserver-deployment-69b7448995-jkqv8 webserver-deployment-69b7448995- deployment-9187 97c8436b-3e22-41d4-8799-3ec089be0e7f 70851 0 2023-03-07 03:47:14 +0000 UTC map[name:httpd pod-template-hash:69b7448995] map[cni.projectcalico.org/containerID:007f3eaef5f4dfc342b6c210b42c5da165f5229906460b363276830bd918b0ab cni.projectcalico.org/podIP:10.233.84.153/32 cni.projectcalico.org/podIPs:10.233.84.153/32] [{apps/v1 ReplicaSet webserver-deployment-69b7448995 1ca15a29-5d4c-4a8c-8bd1-818d43263a10 0xc002586297 0xc002586298}] [] [{kube-controller-manager Update v1 2023-03-07 03:47:14 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"1ca15a29-5d4c-4a8c-8bd1-818d43263a10\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {calico Update v1 2023-03-07 03:47:15 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:cni.projectcalico.org/containerID":{},"f:cni.projectcalico.org/podIP":{},"f:cni.projectcalico.org/podIPs":{}}}} status} {kubelet Update v1 2023-03-07 03:47:18 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.233.84.153\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} 
status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-t7njv,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-t7njv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:node-1,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,HostUsers:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,La
stProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-03-07 03:47:14 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-03-07 03:47:14 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-03-07 03:47:14 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-03-07 03:47:14 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:192.168.1.101,PodIP:10.233.84.153,StartTime:2023-03-07 03:47:14 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ErrImagePull,Message:rpc error: code = Unknown desc = failed to pull and unpack image "docker.io/library/webserver:404": failed to resolve reference "docker.io/library/webserver:404": pull access denied, repository does not exist or may require authorization: server message: insufficient_scope: authorization failed,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.233.84.153,},},EphemeralContainerStatuses:[]ContainerStatus{},},} + Mar 7 03:47:18.799: INFO: Pod "webserver-deployment-69b7448995-kp24k" is not available: + &Pod{ObjectMeta:{webserver-deployment-69b7448995-kp24k webserver-deployment-69b7448995- deployment-9187 fe2664db-c598-480d-bc7c-e285d058f30a 70867 0 2023-03-07 03:47:14 +0000 UTC map[name:httpd pod-template-hash:69b7448995] map[cni.projectcalico.org/containerID:77052ac09d2dacad309799d95a71949ab54a853339f8826b322dddcaf112b7e8 cni.projectcalico.org/podIP:10.233.132.120/32 cni.projectcalico.org/podIPs:10.233.132.120/32] [{apps/v1 ReplicaSet webserver-deployment-69b7448995 1ca15a29-5d4c-4a8c-8bd1-818d43263a10 0xc0025864e7 0xc0025864e8}] [] [{kube-controller-manager Update v1 2023-03-07 03:47:14 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"1ca15a29-5d4c-4a8c-8bd1-818d43263a10\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {calico Update v1 2023-03-07 03:47:15 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:cni.projectcalico.org/containerID":{},"f:cni.projectcalico.org/podIP":{},"f:cni.projectcalico.org/podIPs":{}}}} status} {kubelet Update v1 2023-03-07 03:47:18 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.233.132.120\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-9r5wn,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-9r5wn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:bootstrap,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernete
s.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,HostUsers:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-03-07 03:47:14 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-03-07 03:47:14 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-03-07 03:47:14 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-03-07 03:47:14 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:192.168.1.100,PodIP:10.233.132.120,StartTime:2023-03-07 03:47:14 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ImagePullBackOff,Message:Back-off pulling image "webserver:404",},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.233.132.120,},},EphemeralContainerStatuses:[]ContainerStatus{},},} + Mar 7 03:47:18.799: INFO: Pod "webserver-deployment-69b7448995-ltbbl" is not available: + &Pod{ObjectMeta:{webserver-deployment-69b7448995-ltbbl webserver-deployment-69b7448995- deployment-9187 8230f37f-36f6-47bb-b740-4c7372b0a582 70823 0 2023-03-07 03:47:16 +0000 UTC map[name:httpd pod-template-hash:69b7448995] map[cni.projectcalico.org/containerID:210eaa041f1ff2741b09a2ff9db7fd8472c04f5888704dc8177b8ab21c9c4a8a cni.projectcalico.org/podIP:10.233.247.16/32 cni.projectcalico.org/podIPs:10.233.247.16/32] [{apps/v1 ReplicaSet webserver-deployment-69b7448995 1ca15a29-5d4c-4a8c-8bd1-818d43263a10 0xc002586747 0xc002586748}] [] [{kube-controller-manager Update v1 2023-03-07 03:47:16 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"1ca15a29-5d4c-4a8c-8bd1-818d43263a10\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {calico Update v1 2023-03-07 03:47:18 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:cni.projectcalico.org/containerID":{},"f:cni.projectcalico.org/podIP":{},"f:cni.projectcalico.org/podIPs":{}}}} 
status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-dtkzd,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-dtkzd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:node-2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,HostUsers:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,L
astProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-03-07 03:47:16 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} + Mar 7 03:47:18.799: INFO: Pod "webserver-deployment-69b7448995-p9796" is not available: + &Pod{ObjectMeta:{webserver-deployment-69b7448995-p9796 webserver-deployment-69b7448995- deployment-9187 ac9d8c91-7b08-41ad-98a2-85eb9c5aa02d 70824 0 2023-03-07 03:47:14 +0000 UTC map[name:httpd pod-template-hash:69b7448995] map[cni.projectcalico.org/containerID:ed3e4e16a72e36c56071ef67f7b5de856ce2eb87e4a1601703c7c209d657fb7a cni.projectcalico.org/podIP:10.233.84.152/32 cni.projectcalico.org/podIPs:10.233.84.152/32] [{apps/v1 ReplicaSet webserver-deployment-69b7448995 1ca15a29-5d4c-4a8c-8bd1-818d43263a10 0xc002586910 0xc002586911}] [] [{kube-controller-manager Update v1 2023-03-07 03:47:14 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"1ca15a29-5d4c-4a8c-8bd1-818d43263a10\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {calico Update v1 2023-03-07 03:47:15 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:cni.projectcalico.org/containerID":{},"f:cni.projectcalico.org/podIP":{},"f:cni.projectcalico.org/podIPs":{}}}} status} {kubelet Update v1 2023-03-07 03:47:18 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.233.84.152\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} 
status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-85rfs,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-85rfs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:node-1,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,HostUsers:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,La
stProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-03-07 03:47:14 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-03-07 03:47:14 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-03-07 03:47:14 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-03-07 03:47:14 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:192.168.1.101,PodIP:10.233.84.152,StartTime:2023-03-07 03:47:14 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ImagePullBackOff,Message:Back-off pulling image "webserver:404",},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.233.84.152,},},EphemeralContainerStatuses:[]ContainerStatus{},},} + Mar 7 03:47:18.799: INFO: Pod "webserver-deployment-69b7448995-pkg66" is not available: + &Pod{ObjectMeta:{webserver-deployment-69b7448995-pkg66 webserver-deployment-69b7448995- deployment-9187 09d26dd9-fa97-4dc8-a45c-3cd879d12dc7 70794 0 2023-03-07 03:47:16 +0000 UTC map[name:httpd pod-template-hash:69b7448995] map[cni.projectcalico.org/containerID:73c6937ca7ab6a52a30b54becbfd86e1a282ea646c304669894752a846b63af5 cni.projectcalico.org/podIP:10.233.84.157/32 cni.projectcalico.org/podIPs:10.233.84.157/32] [{apps/v1 ReplicaSet webserver-deployment-69b7448995 1ca15a29-5d4c-4a8c-8bd1-818d43263a10 0xc002586b67 0xc002586b68}] [] [{kube-controller-manager Update v1 2023-03-07 03:47:16 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"1ca15a29-5d4c-4a8c-8bd1-818d43263a10\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2023-03-07 03:47:16 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}} status} {calico Update v1 2023-03-07 03:47:17 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:cni.projectcalico.org/containerID":{},"f:cni.projectcalico.org/podIP":{},"f:cni.projectcalico.org/podIPs":{}}}} 
status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-zlmrm,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-zlmrm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:node-1,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,HostUsers:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,La
stProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-03-07 03:47:16 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-03-07 03:47:16 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-03-07 03:47:16 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-03-07 03:47:16 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:192.168.1.101,PodIP:,StartTime:2023-03-07 03:47:16 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} + Mar 7 03:47:18.799: INFO: Pod "webserver-deployment-69b7448995-x4lv2" is not available: + &Pod{ObjectMeta:{webserver-deployment-69b7448995-x4lv2 webserver-deployment-69b7448995- deployment-9187 ebc97550-5669-4101-8013-63a050bcb531 70836 0 2023-03-07 03:47:16 +0000 UTC map[name:httpd pod-template-hash:69b7448995] map[cni.projectcalico.org/containerID:876acb9b1bbf994f3be6b385391d7b9e0b9023acb350538bf1519d0ba3158f83 cni.projectcalico.org/podIP:10.233.247.29/32 cni.projectcalico.org/podIPs:10.233.247.29/32] [{apps/v1 ReplicaSet webserver-deployment-69b7448995 1ca15a29-5d4c-4a8c-8bd1-818d43263a10 0xc002586da7 0xc002586da8}] [] [{kube-controller-manager Update v1 2023-03-07 03:47:16 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"1ca15a29-5d4c-4a8c-8bd1-818d43263a10\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {calico Update v1 2023-03-07 03:47:18 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:cni.projectcalico.org/containerID":{},"f:cni.projectcalico.org/podIP":{},"f:cni.projectcalico.org/podIPs":{}}}} 
status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-zj8rw,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-zj8rw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:node-2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,HostUsers:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,L
astProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-03-07 03:47:16 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} + Mar 7 03:47:18.800: INFO: Pod "webserver-deployment-69b7448995-zlc25" is not available: + &Pod{ObjectMeta:{webserver-deployment-69b7448995-zlc25 webserver-deployment-69b7448995- deployment-9187 5ed3b644-452b-4ba4-a6ca-e0fc02e8ce27 70835 0 2023-03-07 03:47:16 +0000 UTC map[name:httpd pod-template-hash:69b7448995] map[cni.projectcalico.org/containerID:bf2aa5135dddad526e16d61712c5c27f6b521534e88c02b81d5cb03f291a6da7 cni.projectcalico.org/podIP:10.233.84.156/32 cni.projectcalico.org/podIPs:10.233.84.156/32] [{apps/v1 ReplicaSet webserver-deployment-69b7448995 1ca15a29-5d4c-4a8c-8bd1-818d43263a10 0xc0025871f0 0xc0025871f1}] [] [{kube-controller-manager Update v1 2023-03-07 03:47:16 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"1ca15a29-5d4c-4a8c-8bd1-818d43263a10\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2023-03-07 03:47:16 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}} status} {calico Update v1 2023-03-07 03:47:18 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:cni.projectcalico.org/containerID":{},"f:cni.projectcalico.org/podIP":{},"f:cni.projectcalico.org/podIPs":{}}}} 
status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-m6t52,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-m6t52,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:node-1,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,HostUsers:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,La
stProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-03-07 03:47:16 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-03-07 03:47:16 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-03-07 03:47:16 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-03-07 03:47:16 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:192.168.1.101,PodIP:,StartTime:2023-03-07 03:47:16 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} + Mar 7 03:47:18.800: INFO: Pod "webserver-deployment-845c8977d9-2brcn" is not available: + &Pod{ObjectMeta:{webserver-deployment-845c8977d9-2brcn webserver-deployment-845c8977d9- deployment-9187 de9cc60e-512e-47c1-b7d1-c1f5e844bf0e 70744 0 2023-03-07 03:47:16 +0000 UTC map[name:httpd pod-template-hash:845c8977d9] map[] [{apps/v1 ReplicaSet webserver-deployment-845c8977d9 6094d276-df30-4138-a8b3-20de740802e0 0xc0025873f7 0xc0025873f8}] [] [{kube-controller-manager Update v1 2023-03-07 03:47:16 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"6094d276-df30-4138-a8b3-20de740802e0\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2023-03-07 03:47:16 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}} 
status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-sz7sc,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:registry.k8s.io/e2e-test-images/httpd:2.4.38-2,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-sz7sc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:node-2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,HostUsers:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodConditio
n{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-03-07 03:47:16 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-03-07 03:47:16 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-03-07 03:47:16 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-03-07 03:47:16 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:192.168.1.102,PodIP:,StartTime:2023-03-07 03:47:16 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:registry.k8s.io/e2e-test-images/httpd:2.4.38-2,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} + Mar 7 03:47:18.800: INFO: Pod "webserver-deployment-845c8977d9-47fwp" is available: + &Pod{ObjectMeta:{webserver-deployment-845c8977d9-47fwp webserver-deployment-845c8977d9- deployment-9187 115e5258-261f-4b64-af4d-2a9dba3ebfc5 70526 0 2023-03-07 03:47:10 +0000 UTC map[name:httpd pod-template-hash:845c8977d9] map[cni.projectcalico.org/containerID:66374d5ba33d0c31eed53985f490e19de3d11caff3ee44c0f4c5de7b7bc1f701 cni.projectcalico.org/podIP:10.233.132.118/32 cni.projectcalico.org/podIPs:10.233.132.118/32] [{apps/v1 ReplicaSet webserver-deployment-845c8977d9 6094d276-df30-4138-a8b3-20de740802e0 0xc0025875c7 0xc0025875c8}] [] [{kube-controller-manager Update v1 2023-03-07 03:47:10 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"6094d276-df30-4138-a8b3-20de740802e0\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {calico Update v1 2023-03-07 03:47:11 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:cni.projectcalico.org/containerID":{},"f:cni.projectcalico.org/podIP":{},"f:cni.projectcalico.org/podIPs":{}}}} status} {kubelet Update v1 2023-03-07 03:47:12 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.233.132.118\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} 
status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-8tpnw,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:registry.k8s.io/e2e-test-images/httpd:2.4.38-2,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-8tpnw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:bootstrap,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,HostUsers:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondi
tion{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-03-07 03:47:10 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-03-07 03:47:12 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-03-07 03:47:12 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-03-07 03:47:10 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:192.168.1.100,PodIP:10.233.132.118,StartTime:2023-03-07 03:47:10 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2023-03-07 03:47:12 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:registry.k8s.io/e2e-test-images/httpd:2.4.38-2,ImageID:registry.k8s.io/e2e-test-images/httpd@sha256:1b9d1b2f36cb2dbee1960e82a9344aeb11bd4c4c03abf5e1853e0559c23855e3,ContainerID:containerd://97186cc7a45680f6bcf765a4cae56f66dc8e5a704f5aefb81fb948eec470686e,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.233.132.118,},},EphemeralContainerStatuses:[]ContainerStatus{},},} + Mar 7 03:47:18.800: INFO: Pod "webserver-deployment-845c8977d9-484rj" is available: + &Pod{ObjectMeta:{webserver-deployment-845c8977d9-484rj webserver-deployment-845c8977d9- deployment-9187 b62d4ecb-6674-472e-9547-c17c6d56e79f 70510 0 2023-03-07 03:47:10 +0000 UTC map[name:httpd pod-template-hash:845c8977d9] map[cni.projectcalico.org/containerID:833c195f09ffac148714e80f0f23f6279da80528ed88faee84c595f8e6387dfd cni.projectcalico.org/podIP:10.233.84.150/32 cni.projectcalico.org/podIPs:10.233.84.150/32] [{apps/v1 ReplicaSet webserver-deployment-845c8977d9 6094d276-df30-4138-a8b3-20de740802e0 0xc0025877f7 0xc0025877f8}] [] [{kube-controller-manager Update v1 2023-03-07 03:47:10 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"6094d276-df30-4138-a8b3-20de740802e0\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {calico Update v1 2023-03-07 03:47:11 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:cni.projectcalico.org/containerID":{},"f:cni.projectcalico.org/podIP":{},"f:cni.projectcalico.org/podIPs":{}}}} status} {kubelet Update v1 2023-03-07 03:47:12 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.233.84.150\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} 
status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-j97c8,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:registry.k8s.io/e2e-test-images/httpd:2.4.38-2,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-j97c8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:node-1,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,HostUsers:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodConditio
n{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-03-07 03:47:10 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-03-07 03:47:12 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-03-07 03:47:12 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-03-07 03:47:10 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:192.168.1.101,PodIP:10.233.84.150,StartTime:2023-03-07 03:47:10 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2023-03-07 03:47:11 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:registry.k8s.io/e2e-test-images/httpd:2.4.38-2,ImageID:registry.k8s.io/e2e-test-images/httpd@sha256:1b9d1b2f36cb2dbee1960e82a9344aeb11bd4c4c03abf5e1853e0559c23855e3,ContainerID:containerd://e843682dbdb3691dc26cbf0f80d3e7fc12f0f7b5602010a72cf1093046abb778,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.233.84.150,},},EphemeralContainerStatuses:[]ContainerStatus{},},} + Mar 7 03:47:18.801: INFO: Pod "webserver-deployment-845c8977d9-4p6zx" is not available: + &Pod{ObjectMeta:{webserver-deployment-845c8977d9-4p6zx webserver-deployment-845c8977d9- deployment-9187 efa43f41-4887-4592-9d48-178cb81189e4 70816 0 2023-03-07 03:47:16 +0000 UTC map[name:httpd pod-template-hash:845c8977d9] map[cni.projectcalico.org/containerID:257d6887f1061b3bb1d41f21c5ea13b2c48f8e7710a9a7eccf815f5ded2b8ab5 cni.projectcalico.org/podIP:10.233.247.21/32 cni.projectcalico.org/podIPs:10.233.247.21/32] [{apps/v1 ReplicaSet webserver-deployment-845c8977d9 6094d276-df30-4138-a8b3-20de740802e0 0xc002587a47 0xc002587a48}] [] [{kube-controller-manager Update v1 2023-03-07 03:47:16 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"6094d276-df30-4138-a8b3-20de740802e0\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2023-03-07 03:47:17 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}} status} {calico Update v1 2023-03-07 03:47:18 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:cni.projectcalico.org/containerID":{},"f:cni.projectcalico.org/podIP":{},"f:cni.projectcalico.org/podIPs":{}}}} 
status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-dgbqf,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:registry.k8s.io/e2e-test-images/httpd:2.4.38-2,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-dgbqf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:node-2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,HostUsers:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodConditio
n{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-03-07 03:47:16 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-03-07 03:47:16 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-03-07 03:47:16 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-03-07 03:47:16 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:192.168.1.102,PodIP:,StartTime:2023-03-07 03:47:16 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:registry.k8s.io/e2e-test-images/httpd:2.4.38-2,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} + Mar 7 03:47:18.801: INFO: Pod "webserver-deployment-845c8977d9-57s8m" is not available: + &Pod{ObjectMeta:{webserver-deployment-845c8977d9-57s8m webserver-deployment-845c8977d9- deployment-9187 d7db76c4-33d0-4dfe-84ec-bdbf549ca8af 70770 0 2023-03-07 03:47:16 +0000 UTC map[name:httpd pod-template-hash:845c8977d9] map[cni.projectcalico.org/containerID:882258ef6eca48de3e0aff1794e67fb434e02c4b6231273ad0507bebc73ea200 cni.projectcalico.org/podIP:10.233.84.154/32 cni.projectcalico.org/podIPs:10.233.84.154/32] [{apps/v1 ReplicaSet webserver-deployment-845c8977d9 6094d276-df30-4138-a8b3-20de740802e0 0xc002587c67 0xc002587c68}] [] [{kube-controller-manager Update v1 2023-03-07 03:47:16 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"6094d276-df30-4138-a8b3-20de740802e0\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2023-03-07 03:47:16 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}} status} {calico Update v1 2023-03-07 03:47:17 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:cni.projectcalico.org/containerID":{},"f:cni.projectcalico.org/podIP":{},"f:cni.projectcalico.org/podIPs":{}}}} 
status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-v57kw,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:registry.k8s.io/e2e-test-images/httpd:2.4.38-2,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-v57kw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:node-1,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,HostUsers:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodConditio
n{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-03-07 03:47:16 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-03-07 03:47:16 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-03-07 03:47:16 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-03-07 03:47:16 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:192.168.1.101,PodIP:,StartTime:2023-03-07 03:47:16 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:registry.k8s.io/e2e-test-images/httpd:2.4.38-2,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} + Mar 7 03:47:18.802: INFO: Pod "webserver-deployment-845c8977d9-5d6gq" is not available: + &Pod{ObjectMeta:{webserver-deployment-845c8977d9-5d6gq webserver-deployment-845c8977d9- deployment-9187 1293a6b9-de8b-4e33-b2aa-ced7baa1a8ca 70859 0 2023-03-07 03:47:16 +0000 UTC map[name:httpd pod-template-hash:845c8977d9] map[cni.projectcalico.org/containerID:4bfa98c23840bc8d43c4b0a1f7206f832ee602417ebdabca5b374ca5864ceff1 cni.projectcalico.org/podIP:10.233.132.123/32 cni.projectcalico.org/podIPs:10.233.132.123/32] [{apps/v1 ReplicaSet webserver-deployment-845c8977d9 6094d276-df30-4138-a8b3-20de740802e0 0xc002587e87 0xc002587e88}] [] [{kube-controller-manager Update v1 2023-03-07 03:47:16 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"6094d276-df30-4138-a8b3-20de740802e0\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2023-03-07 03:47:16 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}} status} {calico Update v1 2023-03-07 03:47:18 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:cni.projectcalico.org/containerID":{},"f:cni.projectcalico.org/podIP":{},"f:cni.projectcalico.org/podIPs":{}}}} 
status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-9n9cb,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:registry.k8s.io/e2e-test-images/httpd:2.4.38-2,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-9n9cb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:bootstrap,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,HostUsers:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondi
tion{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-03-07 03:47:16 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-03-07 03:47:16 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-03-07 03:47:16 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-03-07 03:47:16 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:192.168.1.100,PodIP:,StartTime:2023-03-07 03:47:16 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:registry.k8s.io/e2e-test-images/httpd:2.4.38-2,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} + Mar 7 03:47:18.802: INFO: Pod "webserver-deployment-845c8977d9-5kjwd" is not available: + &Pod{ObjectMeta:{webserver-deployment-845c8977d9-5kjwd webserver-deployment-845c8977d9- deployment-9187 9d06892b-b825-47b1-bc98-b22b90b63ef0 70809 0 2023-03-07 03:47:16 +0000 UTC map[name:httpd pod-template-hash:845c8977d9] map[cni.projectcalico.org/containerID:93efb327428a611acea98c88971d5b868a4d06b4ef24421546331791626d5424 cni.projectcalico.org/podIP:10.233.132.121/32 cni.projectcalico.org/podIPs:10.233.132.121/32] [{apps/v1 ReplicaSet webserver-deployment-845c8977d9 6094d276-df30-4138-a8b3-20de740802e0 0xc0038fe077 0xc0038fe078}] [] [{kube-controller-manager Update v1 2023-03-07 03:47:16 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"6094d276-df30-4138-a8b3-20de740802e0\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2023-03-07 03:47:16 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}} status} {calico Update v1 2023-03-07 03:47:17 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:cni.projectcalico.org/containerID":{},"f:cni.projectcalico.org/podIP":{},"f:cni.projectcalico.org/podIPs":{}}}} 
status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-q759v,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:registry.k8s.io/e2e-test-images/httpd:2.4.38-2,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-q759v,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:bootstrap,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,HostUsers:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondi
tion{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-03-07 03:47:16 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-03-07 03:47:16 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-03-07 03:47:16 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-03-07 03:47:16 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:192.168.1.100,PodIP:,StartTime:2023-03-07 03:47:16 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:registry.k8s.io/e2e-test-images/httpd:2.4.38-2,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} + Mar 7 03:47:18.802: INFO: Pod "webserver-deployment-845c8977d9-6m5w8" is not available: + &Pod{ObjectMeta:{webserver-deployment-845c8977d9-6m5w8 webserver-deployment-845c8977d9- deployment-9187 ccd4f20e-60e6-4ebb-9e56-7a4c9a9d7886 70789 0 2023-03-07 03:47:16 +0000 UTC map[name:httpd pod-template-hash:845c8977d9] map[cni.projectcalico.org/containerID:9f688d0bc77f64d1f989b41824e92beb3be31bc930e5c979ca32e854943063f5 cni.projectcalico.org/podIP:10.233.84.155/32 cni.projectcalico.org/podIPs:10.233.84.155/32] [{apps/v1 ReplicaSet webserver-deployment-845c8977d9 6094d276-df30-4138-a8b3-20de740802e0 0xc0038fe287 0xc0038fe288}] [] [{kube-controller-manager Update v1 2023-03-07 03:47:16 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"6094d276-df30-4138-a8b3-20de740802e0\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2023-03-07 03:47:16 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}} status} {calico Update v1 2023-03-07 03:47:17 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:cni.projectcalico.org/containerID":{},"f:cni.projectcalico.org/podIP":{},"f:cni.projectcalico.org/podIPs":{}}}} 
status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-xncwh,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:registry.k8s.io/e2e-test-images/httpd:2.4.38-2,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-xncwh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:node-1,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,HostUsers:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodConditio
n{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-03-07 03:47:16 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-03-07 03:47:16 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-03-07 03:47:16 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-03-07 03:47:16 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:192.168.1.101,PodIP:,StartTime:2023-03-07 03:47:16 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:registry.k8s.io/e2e-test-images/httpd:2.4.38-2,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} + Mar 7 03:47:18.802: INFO: Pod "webserver-deployment-845c8977d9-6rb2l" is not available: + &Pod{ObjectMeta:{webserver-deployment-845c8977d9-6rb2l webserver-deployment-845c8977d9- deployment-9187 c346711b-766d-4bde-b7c8-a61174d90235 70845 0 2023-03-07 03:47:16 +0000 UTC map[name:httpd pod-template-hash:845c8977d9] map[cni.projectcalico.org/containerID:695e7bd1a2fe51d6272f25ed9c9af5be7ebe30984434cbc8eff781085af27f9e cni.projectcalico.org/podIP:10.233.132.122/32 cni.projectcalico.org/podIPs:10.233.132.122/32] [{apps/v1 ReplicaSet webserver-deployment-845c8977d9 6094d276-df30-4138-a8b3-20de740802e0 0xc0038fe477 0xc0038fe478}] [] [{kube-controller-manager Update v1 2023-03-07 03:47:16 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"6094d276-df30-4138-a8b3-20de740802e0\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2023-03-07 03:47:16 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}} status} {calico Update v1 2023-03-07 03:47:18 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:cni.projectcalico.org/containerID":{},"f:cni.projectcalico.org/podIP":{},"f:cni.projectcalico.org/podIPs":{}}}} 
status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-ftm2v,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:registry.k8s.io/e2e-test-images/httpd:2.4.38-2,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-ftm2v,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:bootstrap,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,HostUsers:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondi
tion{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-03-07 03:47:16 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-03-07 03:47:16 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-03-07 03:47:16 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-03-07 03:47:16 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:192.168.1.100,PodIP:,StartTime:2023-03-07 03:47:16 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:registry.k8s.io/e2e-test-images/httpd:2.4.38-2,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} + Mar 7 03:47:18.802: INFO: Pod "webserver-deployment-845c8977d9-7q287" is available: + &Pod{ObjectMeta:{webserver-deployment-845c8977d9-7q287 webserver-deployment-845c8977d9- deployment-9187 16f31564-65b0-4a9a-93d2-19b0b83f9917 70522 0 2023-03-07 03:47:10 +0000 UTC map[name:httpd pod-template-hash:845c8977d9] map[cni.projectcalico.org/containerID:fde17c4e57edbbbd16dd0dfd85f901c952fd7fd1f4378515c7e518c183be8f0c cni.projectcalico.org/podIP:10.233.247.48/32 cni.projectcalico.org/podIPs:10.233.247.48/32] [{apps/v1 ReplicaSet webserver-deployment-845c8977d9 6094d276-df30-4138-a8b3-20de740802e0 0xc0038fe687 0xc0038fe688}] [] [{kube-controller-manager Update v1 2023-03-07 03:47:10 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"6094d276-df30-4138-a8b3-20de740802e0\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {calico Update v1 2023-03-07 03:47:11 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:cni.projectcalico.org/containerID":{},"f:cni.projectcalico.org/podIP":{},"f:cni.projectcalico.org/podIPs":{}}}} status} {kubelet Update v1 2023-03-07 03:47:12 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.233.247.48\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} 
status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-8mj5g,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:registry.k8s.io/e2e-test-images/httpd:2.4.38-2,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-8mj5g,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:node-2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,HostUsers:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodConditio
n{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-03-07 03:47:10 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-03-07 03:47:12 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-03-07 03:47:12 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-03-07 03:47:10 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:192.168.1.102,PodIP:10.233.247.48,StartTime:2023-03-07 03:47:10 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2023-03-07 03:47:12 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:registry.k8s.io/e2e-test-images/httpd:2.4.38-2,ImageID:registry.k8s.io/e2e-test-images/httpd@sha256:1b9d1b2f36cb2dbee1960e82a9344aeb11bd4c4c03abf5e1853e0559c23855e3,ContainerID:containerd://25f0b41d519d3dfc342f57f8c47757f9722d913635c7c17a2093a4b4ca2a0431,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.233.247.48,},},EphemeralContainerStatuses:[]ContainerStatus{},},} + Mar 7 03:47:18.802: INFO: Pod "webserver-deployment-845c8977d9-9stvp" is not available: + &Pod{ObjectMeta:{webserver-deployment-845c8977d9-9stvp webserver-deployment-845c8977d9- deployment-9187 edda8304-3cde-48ee-802b-e486104110a6 70718 0 2023-03-07 03:47:16 +0000 UTC map[name:httpd pod-template-hash:845c8977d9] map[] [{apps/v1 ReplicaSet webserver-deployment-845c8977d9 6094d276-df30-4138-a8b3-20de740802e0 0xc0038fe897 0xc0038fe898}] [] [{kube-controller-manager Update v1 2023-03-07 03:47:16 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"6094d276-df30-4138-a8b3-20de740802e0\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} 
}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-6kcv9,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:registry.k8s.io/e2e-test-images/httpd:2.4.38-2,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-6kcv9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:node-2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,HostUsers:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type
:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-03-07 03:47:16 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} + Mar 7 03:47:18.802: INFO: Pod "webserver-deployment-845c8977d9-f72j8" is available: + &Pod{ObjectMeta:{webserver-deployment-845c8977d9-f72j8 webserver-deployment-845c8977d9- deployment-9187 f897ef7e-9ffb-44ec-933f-514447c235f4 70518 0 2023-03-07 03:47:10 +0000 UTC map[name:httpd pod-template-hash:845c8977d9] map[cni.projectcalico.org/containerID:1a03cd6d371ab35f624c4502aa700a7408e114721780e14940490451a1840434 cni.projectcalico.org/podIP:10.233.132.115/32 cni.projectcalico.org/podIPs:10.233.132.115/32] [{apps/v1 ReplicaSet webserver-deployment-845c8977d9 6094d276-df30-4138-a8b3-20de740802e0 0xc0038fea00 0xc0038fea01}] [] [{kube-controller-manager Update v1 2023-03-07 03:47:10 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"6094d276-df30-4138-a8b3-20de740802e0\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {calico Update v1 2023-03-07 03:47:11 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:cni.projectcalico.org/containerID":{},"f:cni.projectcalico.org/podIP":{},"f:cni.projectcalico.org/podIPs":{}}}} status} {kubelet Update v1 2023-03-07 03:47:12 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.233.132.115\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} 
status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-7gknj,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:registry.k8s.io/e2e-test-images/httpd:2.4.38-2,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-7gknj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:bootstrap,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,HostUsers:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondi
tion{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-03-07 03:47:10 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-03-07 03:47:12 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-03-07 03:47:12 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-03-07 03:47:10 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:192.168.1.100,PodIP:10.233.132.115,StartTime:2023-03-07 03:47:10 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2023-03-07 03:47:11 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:registry.k8s.io/e2e-test-images/httpd:2.4.38-2,ImageID:registry.k8s.io/e2e-test-images/httpd@sha256:1b9d1b2f36cb2dbee1960e82a9344aeb11bd4c4c03abf5e1853e0559c23855e3,ContainerID:containerd://b14e1bb3c7c94ce53ee85f9b98d2c3056e206094d14c2ee9f9f8e9b5127ae3f2,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.233.132.115,},},EphemeralContainerStatuses:[]ContainerStatus{},},} + Mar 7 03:47:18.803: INFO: Pod "webserver-deployment-845c8977d9-mh7qk" is available: + &Pod{ObjectMeta:{webserver-deployment-845c8977d9-mh7qk webserver-deployment-845c8977d9- deployment-9187 e3098bd8-7621-44e2-90bf-2243a7f23992 70538 0 2023-03-07 03:47:10 +0000 UTC map[name:httpd pod-template-hash:845c8977d9] map[cni.projectcalico.org/containerID:0a92ea4e8d9a5a90644775bc1495c919f963c7671c5612f4904aa79cdc81d46b cni.projectcalico.org/podIP:10.233.247.9/32 cni.projectcalico.org/podIPs:10.233.247.9/32] [{apps/v1 ReplicaSet webserver-deployment-845c8977d9 6094d276-df30-4138-a8b3-20de740802e0 0xc0038fec27 0xc0038fec28}] [] [{kube-controller-manager Update v1 2023-03-07 03:47:10 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"6094d276-df30-4138-a8b3-20de740802e0\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {calico Update v1 2023-03-07 03:47:11 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:cni.projectcalico.org/containerID":{},"f:cni.projectcalico.org/podIP":{},"f:cni.projectcalico.org/podIPs":{}}}} status} {kubelet Update v1 2023-03-07 03:47:13 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.233.247.9\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} 
status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-f4vmz,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:registry.k8s.io/e2e-test-images/httpd:2.4.38-2,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-f4vmz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:node-2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,HostUsers:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodConditio
n{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-03-07 03:47:10 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-03-07 03:47:12 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-03-07 03:47:12 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-03-07 03:47:10 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:192.168.1.102,PodIP:10.233.247.9,StartTime:2023-03-07 03:47:10 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2023-03-07 03:47:11 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:registry.k8s.io/e2e-test-images/httpd:2.4.38-2,ImageID:registry.k8s.io/e2e-test-images/httpd@sha256:1b9d1b2f36cb2dbee1960e82a9344aeb11bd4c4c03abf5e1853e0559c23855e3,ContainerID:containerd://4bcd900d1afdb3eba41e426f3d2333154956651f4965143da3bdf4ce6a5a4194,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.233.247.9,},},EphemeralContainerStatuses:[]ContainerStatus{},},} + Mar 7 03:47:18.803: INFO: Pod "webserver-deployment-845c8977d9-mv4d9" is available: + &Pod{ObjectMeta:{webserver-deployment-845c8977d9-mv4d9 webserver-deployment-845c8977d9- deployment-9187 6b20e96e-8180-4ce7-88ed-3b7e01aaf05b 70521 0 2023-03-07 03:47:10 +0000 UTC map[name:httpd pod-template-hash:845c8977d9] map[cni.projectcalico.org/containerID:c4255ac6acfdc086f8394d4bdb66525dedbda62a99eb3948f36db677178861e2 cni.projectcalico.org/podIP:10.233.132.112/32 cni.projectcalico.org/podIPs:10.233.132.112/32] [{apps/v1 ReplicaSet webserver-deployment-845c8977d9 6094d276-df30-4138-a8b3-20de740802e0 0xc0038fee47 0xc0038fee48}] [] [{kube-controller-manager Update v1 2023-03-07 03:47:10 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"6094d276-df30-4138-a8b3-20de740802e0\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {calico Update v1 2023-03-07 03:47:11 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:cni.projectcalico.org/containerID":{},"f:cni.projectcalico.org/podIP":{},"f:cni.projectcalico.org/podIPs":{}}}} status} {kubelet Update v1 2023-03-07 03:47:12 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.233.132.112\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} 
status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-xcc2f,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:registry.k8s.io/e2e-test-images/httpd:2.4.38-2,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-xcc2f,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:bootstrap,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,HostUsers:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondi
tion{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-03-07 03:47:10 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-03-07 03:47:12 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-03-07 03:47:12 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-03-07 03:47:10 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:192.168.1.100,PodIP:10.233.132.112,StartTime:2023-03-07 03:47:10 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2023-03-07 03:47:11 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:registry.k8s.io/e2e-test-images/httpd:2.4.38-2,ImageID:registry.k8s.io/e2e-test-images/httpd@sha256:1b9d1b2f36cb2dbee1960e82a9344aeb11bd4c4c03abf5e1853e0559c23855e3,ContainerID:containerd://2241c91a9315bab7c1863f199d7a64ab7e0e109c54e6f19592431498c6ae44f5,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.233.132.112,},},EphemeralContainerStatuses:[]ContainerStatus{},},} + Mar 7 03:47:18.803: INFO: Pod "webserver-deployment-845c8977d9-n965x" is not available: + &Pod{ObjectMeta:{webserver-deployment-845c8977d9-n965x webserver-deployment-845c8977d9- deployment-9187 3f22b304-31cc-449b-a9c0-4f79c3be72a7 70855 0 2023-03-07 03:47:16 +0000 UTC map[name:httpd pod-template-hash:845c8977d9] map[cni.projectcalico.org/containerID:e2afd21effb856ed467f7c5ad585648c379f330347ee3ed2cbbfcd61c2379a10 cni.projectcalico.org/podIP:10.233.247.13/32 cni.projectcalico.org/podIPs:10.233.247.13/32] [{apps/v1 ReplicaSet webserver-deployment-845c8977d9 6094d276-df30-4138-a8b3-20de740802e0 0xc0038ff077 0xc0038ff078}] [] [{kube-controller-manager Update v1 2023-03-07 03:47:16 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"6094d276-df30-4138-a8b3-20de740802e0\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2023-03-07 03:47:16 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}} status} {calico Update v1 2023-03-07 03:47:18 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:cni.projectcalico.org/containerID":{},"f:cni.projectcalico.org/podIP":{},"f:cni.projectcalico.org/podIPs":{}}}} 
status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-p4nw9,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:registry.k8s.io/e2e-test-images/httpd:2.4.38-2,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-p4nw9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:node-2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,HostUsers:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodConditio
n{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-03-07 03:47:16 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-03-07 03:47:16 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-03-07 03:47:16 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-03-07 03:47:16 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:192.168.1.102,PodIP:,StartTime:2023-03-07 03:47:16 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:registry.k8s.io/e2e-test-images/httpd:2.4.38-2,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} + Mar 7 03:47:18.803: INFO: Pod "webserver-deployment-845c8977d9-qc56s" is available: + &Pod{ObjectMeta:{webserver-deployment-845c8977d9-qc56s webserver-deployment-845c8977d9- deployment-9187 32f5c1d7-9423-46df-bb61-548b2efaa593 70512 0 2023-03-07 03:47:10 +0000 UTC map[name:httpd pod-template-hash:845c8977d9] map[cni.projectcalico.org/containerID:4ea1b35bb5869261250780c0a4a175a11813eed3e186511677b9132d955c6069 cni.projectcalico.org/podIP:10.233.84.142/32 cni.projectcalico.org/podIPs:10.233.84.142/32] [{apps/v1 ReplicaSet webserver-deployment-845c8977d9 6094d276-df30-4138-a8b3-20de740802e0 0xc0038ff2a7 0xc0038ff2a8}] [] [{kube-controller-manager Update v1 2023-03-07 03:47:10 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"6094d276-df30-4138-a8b3-20de740802e0\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {calico Update v1 2023-03-07 03:47:11 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:cni.projectcalico.org/containerID":{},"f:cni.projectcalico.org/podIP":{},"f:cni.projectcalico.org/podIPs":{}}}} status} {kubelet Update v1 2023-03-07 03:47:12 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.233.84.142\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} 
status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-55f6f,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:registry.k8s.io/e2e-test-images/httpd:2.4.38-2,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-55f6f,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:node-1,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,HostUsers:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodConditio
n{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-03-07 03:47:10 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-03-07 03:47:12 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-03-07 03:47:12 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-03-07 03:47:10 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:192.168.1.101,PodIP:10.233.84.142,StartTime:2023-03-07 03:47:10 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2023-03-07 03:47:11 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:registry.k8s.io/e2e-test-images/httpd:2.4.38-2,ImageID:registry.k8s.io/e2e-test-images/httpd@sha256:1b9d1b2f36cb2dbee1960e82a9344aeb11bd4c4c03abf5e1853e0559c23855e3,ContainerID:containerd://0329465089502351bce4573288e527dc806e6b94de3828d7b064458843f94984,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.233.84.142,},},EphemeralContainerStatuses:[]ContainerStatus{},},} + Mar 7 03:47:18.803: INFO: Pod "webserver-deployment-845c8977d9-v8lw5" is not available: + &Pod{ObjectMeta:{webserver-deployment-845c8977d9-v8lw5 webserver-deployment-845c8977d9- deployment-9187 ebbe9b8c-358a-467c-84ea-5563924d85b2 70808 0 2023-03-07 03:47:16 +0000 UTC map[name:httpd pod-template-hash:845c8977d9] map[cni.projectcalico.org/containerID:c7ac8d353d5fce5189909109186ebccae914abdaf9fc43618d9cfee109f9ed4d cni.projectcalico.org/podIP:10.233.84.158/32 cni.projectcalico.org/podIPs:10.233.84.158/32] [{apps/v1 ReplicaSet webserver-deployment-845c8977d9 6094d276-df30-4138-a8b3-20de740802e0 0xc0038ff4d7 0xc0038ff4d8}] [] [{kube-controller-manager Update v1 2023-03-07 03:47:16 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"6094d276-df30-4138-a8b3-20de740802e0\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2023-03-07 03:47:16 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}} status} {calico Update v1 2023-03-07 03:47:17 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:cni.projectcalico.org/containerID":{},"f:cni.projectcalico.org/podIP":{},"f:cni.projectcalico.org/podIPs":{}}}} 
status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-nrfgq,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:registry.k8s.io/e2e-test-images/httpd:2.4.38-2,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-nrfgq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:node-1,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,HostUsers:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodConditio
n{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-03-07 03:47:16 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-03-07 03:47:16 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-03-07 03:47:16 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-03-07 03:47:16 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:192.168.1.101,PodIP:,StartTime:2023-03-07 03:47:16 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:registry.k8s.io/e2e-test-images/httpd:2.4.38-2,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} + Mar 7 03:47:18.803: INFO: Pod "webserver-deployment-845c8977d9-wl9k4" is available: + &Pod{ObjectMeta:{webserver-deployment-845c8977d9-wl9k4 webserver-deployment-845c8977d9- deployment-9187 60886344-b31a-4301-9329-84b41c8a333e 70507 0 2023-03-07 03:47:10 +0000 UTC map[name:httpd pod-template-hash:845c8977d9] map[cni.projectcalico.org/containerID:94d131e78d9b135c759b20a2e8d6c3e64ca3922c11fd0c4677ec8b6a170958d3 cni.projectcalico.org/podIP:10.233.84.151/32 cni.projectcalico.org/podIPs:10.233.84.151/32] [{apps/v1 ReplicaSet webserver-deployment-845c8977d9 6094d276-df30-4138-a8b3-20de740802e0 0xc0038ff6e7 0xc0038ff6e8}] [] [{kube-controller-manager Update v1 2023-03-07 03:47:10 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"6094d276-df30-4138-a8b3-20de740802e0\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {calico Update v1 2023-03-07 03:47:11 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:cni.projectcalico.org/containerID":{},"f:cni.projectcalico.org/podIP":{},"f:cni.projectcalico.org/podIPs":{}}}} status} {kubelet Update v1 2023-03-07 03:47:12 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.233.84.151\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} 
status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-lqx4n,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:registry.k8s.io/e2e-test-images/httpd:2.4.38-2,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-lqx4n,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:node-1,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,HostUsers:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodConditio
n{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-03-07 03:47:10 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-03-07 03:47:12 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-03-07 03:47:12 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-03-07 03:47:10 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:192.168.1.101,PodIP:10.233.84.151,StartTime:2023-03-07 03:47:10 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2023-03-07 03:47:11 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:registry.k8s.io/e2e-test-images/httpd:2.4.38-2,ImageID:registry.k8s.io/e2e-test-images/httpd@sha256:1b9d1b2f36cb2dbee1960e82a9344aeb11bd4c4c03abf5e1853e0559c23855e3,ContainerID:containerd://7cbd919d7e1af368dccbf97b253a323a48aab5fcb6aa5b1fc492ffbee2a66c83,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.233.84.151,},},EphemeralContainerStatuses:[]ContainerStatus{},},} + Mar 7 03:47:18.804: INFO: Pod "webserver-deployment-845c8977d9-zchmf" is not available: + &Pod{ObjectMeta:{webserver-deployment-845c8977d9-zchmf webserver-deployment-845c8977d9- deployment-9187 7bb98568-d378-4319-b1d3-ccca28dbba19 70711 0 2023-03-07 03:47:16 +0000 UTC map[name:httpd pod-template-hash:845c8977d9] map[] [{apps/v1 ReplicaSet webserver-deployment-845c8977d9 6094d276-df30-4138-a8b3-20de740802e0 0xc0038ff8f7 0xc0038ff8f8}] [] [{kube-controller-manager Update v1 2023-03-07 03:47:16 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"6094d276-df30-4138-a8b3-20de740802e0\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} 
}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-cz4wb,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:registry.k8s.io/e2e-test-images/httpd:2.4.38-2,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-cz4wb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:node-2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,HostUsers:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type
:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-03-07 03:47:16 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} + Mar 7 03:47:18.804: INFO: Pod "webserver-deployment-845c8977d9-znfcc" is not available: + &Pod{ObjectMeta:{webserver-deployment-845c8977d9-znfcc webserver-deployment-845c8977d9- deployment-9187 241a8173-0aba-4208-a868-4efa2d447cb4 70737 0 2023-03-07 03:47:16 +0000 UTC map[name:httpd pod-template-hash:845c8977d9] map[] [{apps/v1 ReplicaSet webserver-deployment-845c8977d9 6094d276-df30-4138-a8b3-20de740802e0 0xc0038ffa60 0xc0038ffa61}] [] [{kube-controller-manager Update v1 2023-03-07 03:47:16 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"6094d276-df30-4138-a8b3-20de740802e0\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2023-03-07 03:47:16 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}} 
status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-v5hvm,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:registry.k8s.io/e2e-test-images/httpd:2.4.38-2,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-v5hvm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:node-2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,HostUsers:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodConditio
n{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-03-07 03:47:16 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-03-07 03:47:16 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-03-07 03:47:16 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-03-07 03:47:16 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:192.168.1.102,PodIP:,StartTime:2023-03-07 03:47:16 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:registry.k8s.io/e2e-test-images/httpd:2.4.38-2,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} + [AfterEach] [sig-apps] Deployment + test/e2e/framework/framework.go:187 + Mar 7 03:47:18.804: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + STEP: Destroying namespace "deployment-9187" for this suite. 03/07/23 03:47:18.817 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-node] PreStop + should call prestop when killing a pod [Conformance] + test/e2e/node/pre_stop.go:168 +[BeforeEach] [sig-node] PreStop + test/e2e/framework/framework.go:186 +STEP: Creating a kubernetes client 03/07/23 03:47:18.852 +Mar 7 03:47:18.852: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 +STEP: Building a namespace api object, basename prestop 03/07/23 03:47:18.853 +STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 03:47:18.874 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 03:47:18.876 +[BeforeEach] [sig-node] PreStop + test/e2e/node/pre_stop.go:159 +[It] should call prestop when killing a pod [Conformance] + test/e2e/node/pre_stop.go:168 +STEP: Creating server pod server in namespace prestop-600 03/07/23 03:47:18.877 +STEP: Waiting for pods to come up. 03/07/23 03:47:18.887 +Mar 7 03:47:18.887: INFO: Waiting up to 5m0s for pod "server" in namespace "prestop-600" to be "running" +Mar 7 03:47:18.893: INFO: Pod "server": Phase="Pending", Reason="", readiness=false. Elapsed: 6.309864ms +Mar 7 03:47:20.896: INFO: Pod "server": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009159239s +Mar 7 03:47:22.897: INFO: Pod "server": Phase="Pending", Reason="", readiness=false. Elapsed: 4.010202376s +Mar 7 03:47:24.916: INFO: Pod "server": Phase="Pending", Reason="", readiness=false. Elapsed: 6.028735564s +Mar 7 03:47:26.898: INFO: Pod "server": Phase="Running", Reason="", readiness=true. Elapsed: 8.010929997s +Mar 7 03:47:26.898: INFO: Pod "server" satisfied condition "running" +STEP: Creating tester pod tester in namespace prestop-600 03/07/23 03:47:26.9 +Mar 7 03:47:26.905: INFO: Waiting up to 5m0s for pod "tester" in namespace "prestop-600" to be "running" +Mar 7 03:47:26.910: INFO: Pod "tester": Phase="Pending", Reason="", readiness=false. 
Elapsed: 5.509564ms +Mar 7 03:47:28.913: INFO: Pod "tester": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008215265s +Mar 7 03:47:30.915: INFO: Pod "tester": Phase="Pending", Reason="", readiness=false. Elapsed: 4.01016594s +Mar 7 03:47:32.914: INFO: Pod "tester": Phase="Running", Reason="", readiness=true. Elapsed: 6.008640229s +Mar 7 03:47:32.914: INFO: Pod "tester" satisfied condition "running" +STEP: Deleting pre-stop pod 03/07/23 03:47:32.914 +Mar 7 03:47:37.926: INFO: Saw: { + "Hostname": "server", + "Sent": null, + "Received": { + "prestop": 1 + }, + "Errors": null, + "Log": [ + "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", + "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", + "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", + "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up." + ], + "StillContactingPeers": true +} +STEP: Deleting the server pod 03/07/23 03:47:37.926 +[AfterEach] [sig-node] PreStop + test/e2e/framework/framework.go:187 +Mar 7 03:47:37.937: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "prestop-600" for this suite. 03/07/23 03:47:37.948 +{"msg":"PASSED [sig-node] PreStop should call prestop when killing a pod [Conformance]","completed":267,"skipped":4680,"failed":0} +------------------------------ +• [SLOW TEST] [19.100 seconds] +[sig-node] PreStop +test/e2e/node/framework.go:23 + should call prestop when killing a pod [Conformance] + test/e2e/node/pre_stop.go:168 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-node] PreStop + test/e2e/framework/framework.go:186 + STEP: Creating a kubernetes client 03/07/23 03:47:18.852 + Mar 7 03:47:18.852: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 + STEP: Building a namespace api object, basename prestop 03/07/23 03:47:18.853 + STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 03:47:18.874 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 03:47:18.876 + [BeforeEach] [sig-node] PreStop + test/e2e/node/pre_stop.go:159 + [It] should call prestop when killing a pod [Conformance] + test/e2e/node/pre_stop.go:168 + STEP: Creating server pod server in namespace prestop-600 03/07/23 03:47:18.877 + STEP: Waiting for pods to come up. 03/07/23 03:47:18.887 + Mar 7 03:47:18.887: INFO: Waiting up to 5m0s for pod "server" in namespace "prestop-600" to be "running" + Mar 7 03:47:18.893: INFO: Pod "server": Phase="Pending", Reason="", readiness=false. Elapsed: 6.309864ms + Mar 7 03:47:20.896: INFO: Pod "server": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009159239s + Mar 7 03:47:22.897: INFO: Pod "server": Phase="Pending", Reason="", readiness=false. Elapsed: 4.010202376s + Mar 7 03:47:24.916: INFO: Pod "server": Phase="Pending", Reason="", readiness=false. Elapsed: 6.028735564s + Mar 7 03:47:26.898: INFO: Pod "server": Phase="Running", Reason="", readiness=true. Elapsed: 8.010929997s + Mar 7 03:47:26.898: INFO: Pod "server" satisfied condition "running" + STEP: Creating tester pod tester in namespace prestop-600 03/07/23 03:47:26.9 + Mar 7 03:47:26.905: INFO: Waiting up to 5m0s for pod "tester" in namespace "prestop-600" to be "running" + Mar 7 03:47:26.910: INFO: Pod "tester": Phase="Pending", Reason="", readiness=false. 
Elapsed: 5.509564ms + Mar 7 03:47:28.913: INFO: Pod "tester": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008215265s + Mar 7 03:47:30.915: INFO: Pod "tester": Phase="Pending", Reason="", readiness=false. Elapsed: 4.01016594s + Mar 7 03:47:32.914: INFO: Pod "tester": Phase="Running", Reason="", readiness=true. Elapsed: 6.008640229s + Mar 7 03:47:32.914: INFO: Pod "tester" satisfied condition "running" + STEP: Deleting pre-stop pod 03/07/23 03:47:32.914 + Mar 7 03:47:37.926: INFO: Saw: { + "Hostname": "server", + "Sent": null, + "Received": { + "prestop": 1 + }, + "Errors": null, + "Log": [ + "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", + "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", + "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", + "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up." + ], + "StillContactingPeers": true + } + STEP: Deleting the server pod 03/07/23 03:47:37.926 + [AfterEach] [sig-node] PreStop + test/e2e/framework/framework.go:187 + Mar 7 03:47:37.937: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + STEP: Destroying namespace "prestop-600" for this suite. 03/07/23 03:47:37.948 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-storage] EmptyDir volumes + volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/common/storage/empty_dir.go:156 +[BeforeEach] [sig-storage] EmptyDir volumes + test/e2e/framework/framework.go:186 +STEP: Creating a kubernetes client 03/07/23 03:47:37.963 +Mar 7 03:47:37.963: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 +STEP: Building a namespace api object, basename emptydir 03/07/23 03:47:37.964 +STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 03:47:37.977 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 03:47:37.979 +[It] volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/common/storage/empty_dir.go:156 +STEP: Creating a pod to test emptydir volume type on node default medium 03/07/23 03:47:37.983 +Mar 7 03:47:37.991: INFO: Waiting up to 5m0s for pod "pod-a9ff469e-2828-478a-9078-72ce5d63c2e3" in namespace "emptydir-7516" to be "Succeeded or Failed" +Mar 7 03:47:37.993: INFO: Pod "pod-a9ff469e-2828-478a-9078-72ce5d63c2e3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.562177ms +Mar 7 03:47:39.997: INFO: Pod "pod-a9ff469e-2828-478a-9078-72ce5d63c2e3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005907476s +Mar 7 03:47:41.997: INFO: Pod "pod-a9ff469e-2828-478a-9078-72ce5d63c2e3": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.006061984s +STEP: Saw pod success 03/07/23 03:47:41.997 +Mar 7 03:47:41.997: INFO: Pod "pod-a9ff469e-2828-478a-9078-72ce5d63c2e3" satisfied condition "Succeeded or Failed" +Mar 7 03:47:42.000: INFO: Trying to get logs from node node-2 pod pod-a9ff469e-2828-478a-9078-72ce5d63c2e3 container test-container: +STEP: delete the pod 03/07/23 03:47:42.005 +Mar 7 03:47:42.020: INFO: Waiting for pod pod-a9ff469e-2828-478a-9078-72ce5d63c2e3 to disappear +Mar 7 03:47:42.028: INFO: Pod pod-a9ff469e-2828-478a-9078-72ce5d63c2e3 no longer exists +[AfterEach] [sig-storage] EmptyDir volumes + test/e2e/framework/framework.go:187 +Mar 7 03:47:42.028: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "emptydir-7516" for this suite. 03/07/23 03:47:42.031 +{"msg":"PASSED [sig-storage] EmptyDir volumes volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","completed":268,"skipped":4730,"failed":0} +------------------------------ +• [4.073 seconds] +[sig-storage] EmptyDir volumes +test/e2e/common/storage/framework.go:23 + volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/common/storage/empty_dir.go:156 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-storage] EmptyDir volumes + test/e2e/framework/framework.go:186 + STEP: Creating a kubernetes client 03/07/23 03:47:37.963 + Mar 7 03:47:37.963: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 + STEP: Building a namespace api object, basename emptydir 03/07/23 03:47:37.964 + STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 03:47:37.977 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 03:47:37.979 + [It] volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/common/storage/empty_dir.go:156 + STEP: Creating a pod to test emptydir volume type on node default medium 03/07/23 03:47:37.983 + Mar 7 03:47:37.991: INFO: Waiting up to 5m0s for pod "pod-a9ff469e-2828-478a-9078-72ce5d63c2e3" in namespace "emptydir-7516" to be "Succeeded or Failed" + Mar 7 03:47:37.993: INFO: Pod "pod-a9ff469e-2828-478a-9078-72ce5d63c2e3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.562177ms + Mar 7 03:47:39.997: INFO: Pod "pod-a9ff469e-2828-478a-9078-72ce5d63c2e3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005907476s + Mar 7 03:47:41.997: INFO: Pod "pod-a9ff469e-2828-478a-9078-72ce5d63c2e3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.006061984s + STEP: Saw pod success 03/07/23 03:47:41.997 + Mar 7 03:47:41.997: INFO: Pod "pod-a9ff469e-2828-478a-9078-72ce5d63c2e3" satisfied condition "Succeeded or Failed" + Mar 7 03:47:42.000: INFO: Trying to get logs from node node-2 pod pod-a9ff469e-2828-478a-9078-72ce5d63c2e3 container test-container: + STEP: delete the pod 03/07/23 03:47:42.005 + Mar 7 03:47:42.020: INFO: Waiting for pod pod-a9ff469e-2828-478a-9078-72ce5d63c2e3 to disappear + Mar 7 03:47:42.028: INFO: Pod pod-a9ff469e-2828-478a-9078-72ce5d63c2e3 no longer exists + [AfterEach] [sig-storage] EmptyDir volumes + test/e2e/framework/framework.go:187 + Mar 7 03:47:42.028: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + STEP: Destroying namespace "emptydir-7516" for this suite. 
03/07/23 03:47:42.031 + << End Captured GinkgoWriter Output +------------------------------ +SSSSS +------------------------------ +[sig-storage] Subpath Atomic writer volumes + should support subpaths with configmap pod [Conformance] + test/e2e/storage/subpath.go:70 +[BeforeEach] [sig-storage] Subpath + test/e2e/framework/framework.go:186 +STEP: Creating a kubernetes client 03/07/23 03:47:42.036 +Mar 7 03:47:42.036: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 +STEP: Building a namespace api object, basename subpath 03/07/23 03:47:42.037 +STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 03:47:42.048 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 03:47:42.05 +[BeforeEach] Atomic writer volumes + test/e2e/storage/subpath.go:40 +STEP: Setting up data 03/07/23 03:47:42.052 +[It] should support subpaths with configmap pod [Conformance] + test/e2e/storage/subpath.go:70 +STEP: Creating pod pod-subpath-test-configmap-98bj 03/07/23 03:47:42.058 +STEP: Creating a pod to test atomic-volume-subpath 03/07/23 03:47:42.058 +Mar 7 03:47:42.064: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-98bj" in namespace "subpath-5237" to be "Succeeded or Failed" +Mar 7 03:47:42.066: INFO: Pod "pod-subpath-test-configmap-98bj": Phase="Pending", Reason="", readiness=false. Elapsed: 2.19257ms +Mar 7 03:47:44.069: INFO: Pod "pod-subpath-test-configmap-98bj": Phase="Running", Reason="", readiness=true. Elapsed: 2.005363728s +Mar 7 03:47:46.070: INFO: Pod "pod-subpath-test-configmap-98bj": Phase="Running", Reason="", readiness=true. Elapsed: 4.00625506s +Mar 7 03:47:48.070: INFO: Pod "pod-subpath-test-configmap-98bj": Phase="Running", Reason="", readiness=true. Elapsed: 6.005587786s +Mar 7 03:47:50.069: INFO: Pod "pod-subpath-test-configmap-98bj": Phase="Running", Reason="", readiness=true. Elapsed: 8.004805324s +Mar 7 03:47:52.070: INFO: Pod "pod-subpath-test-configmap-98bj": Phase="Running", Reason="", readiness=true. Elapsed: 10.005509904s +Mar 7 03:47:54.070: INFO: Pod "pod-subpath-test-configmap-98bj": Phase="Running", Reason="", readiness=true. Elapsed: 12.006023691s +Mar 7 03:47:56.070: INFO: Pod "pod-subpath-test-configmap-98bj": Phase="Running", Reason="", readiness=true. Elapsed: 14.005598142s +Mar 7 03:47:58.069: INFO: Pod "pod-subpath-test-configmap-98bj": Phase="Running", Reason="", readiness=true. Elapsed: 16.005201769s +Mar 7 03:48:00.069: INFO: Pod "pod-subpath-test-configmap-98bj": Phase="Running", Reason="", readiness=true. Elapsed: 18.005364975s +Mar 7 03:48:02.069: INFO: Pod "pod-subpath-test-configmap-98bj": Phase="Running", Reason="", readiness=true. Elapsed: 20.005311435s +Mar 7 03:48:04.070: INFO: Pod "pod-subpath-test-configmap-98bj": Phase="Running", Reason="", readiness=false. Elapsed: 22.006423541s +Mar 7 03:48:06.069: INFO: Pod "pod-subpath-test-configmap-98bj": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 24.005428242s +STEP: Saw pod success 03/07/23 03:48:06.069 +Mar 7 03:48:06.070: INFO: Pod "pod-subpath-test-configmap-98bj" satisfied condition "Succeeded or Failed" +Mar 7 03:48:06.072: INFO: Trying to get logs from node node-2 pod pod-subpath-test-configmap-98bj container test-container-subpath-configmap-98bj: +STEP: delete the pod 03/07/23 03:48:06.077 +Mar 7 03:48:06.085: INFO: Waiting for pod pod-subpath-test-configmap-98bj to disappear +Mar 7 03:48:06.087: INFO: Pod pod-subpath-test-configmap-98bj no longer exists +STEP: Deleting pod pod-subpath-test-configmap-98bj 03/07/23 03:48:06.087 +Mar 7 03:48:06.087: INFO: Deleting pod "pod-subpath-test-configmap-98bj" in namespace "subpath-5237" +[AfterEach] [sig-storage] Subpath + test/e2e/framework/framework.go:187 +Mar 7 03:48:06.089: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "subpath-5237" for this suite. 03/07/23 03:48:06.092 +{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod [Conformance]","completed":269,"skipped":4735,"failed":0} +------------------------------ +• [SLOW TEST] [24.060 seconds] +[sig-storage] Subpath +test/e2e/storage/utils/framework.go:23 + Atomic writer volumes + test/e2e/storage/subpath.go:36 + should support subpaths with configmap pod [Conformance] + test/e2e/storage/subpath.go:70 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-storage] Subpath + test/e2e/framework/framework.go:186 + STEP: Creating a kubernetes client 03/07/23 03:47:42.036 + Mar 7 03:47:42.036: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 + STEP: Building a namespace api object, basename subpath 03/07/23 03:47:42.037 + STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 03:47:42.048 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 03:47:42.05 + [BeforeEach] Atomic writer volumes + test/e2e/storage/subpath.go:40 + STEP: Setting up data 03/07/23 03:47:42.052 + [It] should support subpaths with configmap pod [Conformance] + test/e2e/storage/subpath.go:70 + STEP: Creating pod pod-subpath-test-configmap-98bj 03/07/23 03:47:42.058 + STEP: Creating a pod to test atomic-volume-subpath 03/07/23 03:47:42.058 + Mar 7 03:47:42.064: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-98bj" in namespace "subpath-5237" to be "Succeeded or Failed" + Mar 7 03:47:42.066: INFO: Pod "pod-subpath-test-configmap-98bj": Phase="Pending", Reason="", readiness=false. Elapsed: 2.19257ms + Mar 7 03:47:44.069: INFO: Pod "pod-subpath-test-configmap-98bj": Phase="Running", Reason="", readiness=true. Elapsed: 2.005363728s + Mar 7 03:47:46.070: INFO: Pod "pod-subpath-test-configmap-98bj": Phase="Running", Reason="", readiness=true. Elapsed: 4.00625506s + Mar 7 03:47:48.070: INFO: Pod "pod-subpath-test-configmap-98bj": Phase="Running", Reason="", readiness=true. Elapsed: 6.005587786s + Mar 7 03:47:50.069: INFO: Pod "pod-subpath-test-configmap-98bj": Phase="Running", Reason="", readiness=true. Elapsed: 8.004805324s + Mar 7 03:47:52.070: INFO: Pod "pod-subpath-test-configmap-98bj": Phase="Running", Reason="", readiness=true. Elapsed: 10.005509904s + Mar 7 03:47:54.070: INFO: Pod "pod-subpath-test-configmap-98bj": Phase="Running", Reason="", readiness=true. Elapsed: 12.006023691s + Mar 7 03:47:56.070: INFO: Pod "pod-subpath-test-configmap-98bj": Phase="Running", Reason="", readiness=true. 
Elapsed: 14.005598142s + Mar 7 03:47:58.069: INFO: Pod "pod-subpath-test-configmap-98bj": Phase="Running", Reason="", readiness=true. Elapsed: 16.005201769s + Mar 7 03:48:00.069: INFO: Pod "pod-subpath-test-configmap-98bj": Phase="Running", Reason="", readiness=true. Elapsed: 18.005364975s + Mar 7 03:48:02.069: INFO: Pod "pod-subpath-test-configmap-98bj": Phase="Running", Reason="", readiness=true. Elapsed: 20.005311435s + Mar 7 03:48:04.070: INFO: Pod "pod-subpath-test-configmap-98bj": Phase="Running", Reason="", readiness=false. Elapsed: 22.006423541s + Mar 7 03:48:06.069: INFO: Pod "pod-subpath-test-configmap-98bj": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.005428242s + STEP: Saw pod success 03/07/23 03:48:06.069 + Mar 7 03:48:06.070: INFO: Pod "pod-subpath-test-configmap-98bj" satisfied condition "Succeeded or Failed" + Mar 7 03:48:06.072: INFO: Trying to get logs from node node-2 pod pod-subpath-test-configmap-98bj container test-container-subpath-configmap-98bj: + STEP: delete the pod 03/07/23 03:48:06.077 + Mar 7 03:48:06.085: INFO: Waiting for pod pod-subpath-test-configmap-98bj to disappear + Mar 7 03:48:06.087: INFO: Pod pod-subpath-test-configmap-98bj no longer exists + STEP: Deleting pod pod-subpath-test-configmap-98bj 03/07/23 03:48:06.087 + Mar 7 03:48:06.087: INFO: Deleting pod "pod-subpath-test-configmap-98bj" in namespace "subpath-5237" + [AfterEach] [sig-storage] Subpath + test/e2e/framework/framework.go:187 + Mar 7 03:48:06.089: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + STEP: Destroying namespace "subpath-5237" for this suite. 03/07/23 03:48:06.092 + << End Captured GinkgoWriter Output +------------------------------ +SSSS +------------------------------ +[sig-network] Networking Granular Checks: Pods + should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/common/network/networking.go:122 +[BeforeEach] [sig-network] Networking + test/e2e/framework/framework.go:186 +STEP: Creating a kubernetes client 03/07/23 03:48:06.096 +Mar 7 03:48:06.096: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 +STEP: Building a namespace api object, basename pod-network-test 03/07/23 03:48:06.099 +STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 03:48:06.112 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 03:48:06.114 +[It] should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/common/network/networking.go:122 +STEP: Performing setup for networking test in namespace pod-network-test-3155 03/07/23 03:48:06.115 +STEP: creating a selector 03/07/23 03:48:06.115 +STEP: Creating the service pods in kubernetes 03/07/23 03:48:06.116 +Mar 7 03:48:06.116: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable +Mar 7 03:48:06.136: INFO: Waiting up to 5m0s for pod "netserver-0" in namespace "pod-network-test-3155" to be "running and ready" +Mar 7 03:48:06.146: INFO: Pod "netserver-0": Phase="Pending", Reason="", readiness=false. Elapsed: 9.988832ms +Mar 7 03:48:06.146: INFO: The phase of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) +Mar 7 03:48:08.150: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 2.014036228s +Mar 7 03:48:08.150: INFO: The phase of Pod netserver-0 is Running (Ready = false) +Mar 7 03:48:10.149: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. 
Elapsed: 4.013015738s +Mar 7 03:48:10.149: INFO: The phase of Pod netserver-0 is Running (Ready = false) +Mar 7 03:48:12.151: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 6.014895901s +Mar 7 03:48:12.151: INFO: The phase of Pod netserver-0 is Running (Ready = false) +Mar 7 03:48:14.150: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 8.013539893s +Mar 7 03:48:14.150: INFO: The phase of Pod netserver-0 is Running (Ready = false) +Mar 7 03:48:16.150: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 10.013287547s +Mar 7 03:48:16.150: INFO: The phase of Pod netserver-0 is Running (Ready = false) +Mar 7 03:48:18.151: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=true. Elapsed: 12.014266082s +Mar 7 03:48:18.151: INFO: The phase of Pod netserver-0 is Running (Ready = true) +Mar 7 03:48:18.151: INFO: Pod "netserver-0" satisfied condition "running and ready" +Mar 7 03:48:18.153: INFO: Waiting up to 5m0s for pod "netserver-1" in namespace "pod-network-test-3155" to be "running and ready" +Mar 7 03:48:18.155: INFO: Pod "netserver-1": Phase="Running", Reason="", readiness=true. Elapsed: 2.052849ms +Mar 7 03:48:18.155: INFO: The phase of Pod netserver-1 is Running (Ready = true) +Mar 7 03:48:18.155: INFO: Pod "netserver-1" satisfied condition "running and ready" +Mar 7 03:48:18.157: INFO: Waiting up to 5m0s for pod "netserver-2" in namespace "pod-network-test-3155" to be "running and ready" +Mar 7 03:48:18.159: INFO: Pod "netserver-2": Phase="Running", Reason="", readiness=true. Elapsed: 1.986448ms +Mar 7 03:48:18.159: INFO: The phase of Pod netserver-2 is Running (Ready = true) +Mar 7 03:48:18.159: INFO: Pod "netserver-2" satisfied condition "running and ready" +STEP: Creating test pods 03/07/23 03:48:18.161 +Mar 7 03:48:18.171: INFO: Waiting up to 5m0s for pod "test-container-pod" in namespace "pod-network-test-3155" to be "running" +Mar 7 03:48:18.174: INFO: Pod "test-container-pod": Phase="Pending", Reason="", readiness=false. Elapsed: 2.913307ms +Mar 7 03:48:20.177: INFO: Pod "test-container-pod": Phase="Running", Reason="", readiness=true. Elapsed: 2.005753128s +Mar 7 03:48:20.177: INFO: Pod "test-container-pod" satisfied condition "running" +Mar 7 03:48:20.180: INFO: Waiting up to 5m0s for pod "host-test-container-pod" in namespace "pod-network-test-3155" to be "running" +Mar 7 03:48:20.182: INFO: Pod "host-test-container-pod": Phase="Running", Reason="", readiness=true. 
Elapsed: 2.318356ms +Mar 7 03:48:20.182: INFO: Pod "host-test-container-pod" satisfied condition "running" +Mar 7 03:48:20.184: INFO: Setting MaxTries for pod polling to 39 for networking test based on endpoint count 3 +Mar 7 03:48:20.184: INFO: Going to poll 10.233.132.126 on port 8081 at least 0 times, with a maximum of 39 tries before failing +Mar 7 03:48:20.187: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.233.132.126 8081 | grep -v '^\s*$'] Namespace:pod-network-test-3155 PodName:host-test-container-pod ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} +Mar 7 03:48:20.187: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 +Mar 7 03:48:20.188: INFO: ExecWithOptions: Clientset creation +Mar 7 03:48:20.188: INFO: ExecWithOptions: execute(POST https://10.96.0.1:443/api/v1/namespaces/pod-network-test-3155/pods/host-test-container-pod/exec?command=%2Fbin%2Fsh&command=-c&command=echo+hostName+%7C+nc+-w+1+-u+10.233.132.126+8081+%7C+grep+-v+%27%5E%5Cs%2A%24%27&container=agnhost-container&container=agnhost-container&stderr=true&stdout=true) +Mar 7 03:48:21.268: INFO: Found all 1 expected endpoints: [netserver-0] +Mar 7 03:48:21.268: INFO: Going to poll 10.233.84.159 on port 8081 at least 0 times, with a maximum of 39 tries before failing +Mar 7 03:48:21.275: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.233.84.159 8081 | grep -v '^\s*$'] Namespace:pod-network-test-3155 PodName:host-test-container-pod ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} +Mar 7 03:48:21.275: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 +Mar 7 03:48:21.276: INFO: ExecWithOptions: Clientset creation +Mar 7 03:48:21.276: INFO: ExecWithOptions: execute(POST https://10.96.0.1:443/api/v1/namespaces/pod-network-test-3155/pods/host-test-container-pod/exec?command=%2Fbin%2Fsh&command=-c&command=echo+hostName+%7C+nc+-w+1+-u+10.233.84.159+8081+%7C+grep+-v+%27%5E%5Cs%2A%24%27&container=agnhost-container&container=agnhost-container&stderr=true&stdout=true) +Mar 7 03:48:22.327: INFO: Found all 1 expected endpoints: [netserver-1] +Mar 7 03:48:22.327: INFO: Going to poll 10.233.247.47 on port 8081 at least 0 times, with a maximum of 39 tries before failing +Mar 7 03:48:22.330: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.233.247.47 8081 | grep -v '^\s*$'] Namespace:pod-network-test-3155 PodName:host-test-container-pod ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} +Mar 7 03:48:22.330: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 +Mar 7 03:48:22.330: INFO: ExecWithOptions: Clientset creation +Mar 7 03:48:22.330: INFO: ExecWithOptions: execute(POST https://10.96.0.1:443/api/v1/namespaces/pod-network-test-3155/pods/host-test-container-pod/exec?command=%2Fbin%2Fsh&command=-c&command=echo+hostName+%7C+nc+-w+1+-u+10.233.247.47+8081+%7C+grep+-v+%27%5E%5Cs%2A%24%27&container=agnhost-container&container=agnhost-container&stderr=true&stdout=true) +Mar 7 03:48:23.390: INFO: Found all 1 expected endpoints: [netserver-2] +[AfterEach] [sig-network] Networking + test/e2e/framework/framework.go:187 +Mar 7 03:48:23.390: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "pod-network-test-3155" for this suite. 
03/07/23 03:48:23.394 +{"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]","completed":270,"skipped":4739,"failed":0} +------------------------------ +• [SLOW TEST] [17.303 seconds] +[sig-network] Networking +test/e2e/common/network/framework.go:23 + Granular Checks: Pods + test/e2e/common/network/networking.go:32 + should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/common/network/networking.go:122 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-network] Networking + test/e2e/framework/framework.go:186 + STEP: Creating a kubernetes client 03/07/23 03:48:06.096 + Mar 7 03:48:06.096: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 + STEP: Building a namespace api object, basename pod-network-test 03/07/23 03:48:06.099 + STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 03:48:06.112 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 03:48:06.114 + [It] should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/common/network/networking.go:122 + STEP: Performing setup for networking test in namespace pod-network-test-3155 03/07/23 03:48:06.115 + STEP: creating a selector 03/07/23 03:48:06.115 + STEP: Creating the service pods in kubernetes 03/07/23 03:48:06.116 + Mar 7 03:48:06.116: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable + Mar 7 03:48:06.136: INFO: Waiting up to 5m0s for pod "netserver-0" in namespace "pod-network-test-3155" to be "running and ready" + Mar 7 03:48:06.146: INFO: Pod "netserver-0": Phase="Pending", Reason="", readiness=false. Elapsed: 9.988832ms + Mar 7 03:48:06.146: INFO: The phase of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) + Mar 7 03:48:08.150: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 2.014036228s + Mar 7 03:48:08.150: INFO: The phase of Pod netserver-0 is Running (Ready = false) + Mar 7 03:48:10.149: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 4.013015738s + Mar 7 03:48:10.149: INFO: The phase of Pod netserver-0 is Running (Ready = false) + Mar 7 03:48:12.151: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 6.014895901s + Mar 7 03:48:12.151: INFO: The phase of Pod netserver-0 is Running (Ready = false) + Mar 7 03:48:14.150: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 8.013539893s + Mar 7 03:48:14.150: INFO: The phase of Pod netserver-0 is Running (Ready = false) + Mar 7 03:48:16.150: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 10.013287547s + Mar 7 03:48:16.150: INFO: The phase of Pod netserver-0 is Running (Ready = false) + Mar 7 03:48:18.151: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=true. Elapsed: 12.014266082s + Mar 7 03:48:18.151: INFO: The phase of Pod netserver-0 is Running (Ready = true) + Mar 7 03:48:18.151: INFO: Pod "netserver-0" satisfied condition "running and ready" + Mar 7 03:48:18.153: INFO: Waiting up to 5m0s for pod "netserver-1" in namespace "pod-network-test-3155" to be "running and ready" + Mar 7 03:48:18.155: INFO: Pod "netserver-1": Phase="Running", Reason="", readiness=true. 
Elapsed: 2.052849ms + Mar 7 03:48:18.155: INFO: The phase of Pod netserver-1 is Running (Ready = true) + Mar 7 03:48:18.155: INFO: Pod "netserver-1" satisfied condition "running and ready" + Mar 7 03:48:18.157: INFO: Waiting up to 5m0s for pod "netserver-2" in namespace "pod-network-test-3155" to be "running and ready" + Mar 7 03:48:18.159: INFO: Pod "netserver-2": Phase="Running", Reason="", readiness=true. Elapsed: 1.986448ms + Mar 7 03:48:18.159: INFO: The phase of Pod netserver-2 is Running (Ready = true) + Mar 7 03:48:18.159: INFO: Pod "netserver-2" satisfied condition "running and ready" + STEP: Creating test pods 03/07/23 03:48:18.161 + Mar 7 03:48:18.171: INFO: Waiting up to 5m0s for pod "test-container-pod" in namespace "pod-network-test-3155" to be "running" + Mar 7 03:48:18.174: INFO: Pod "test-container-pod": Phase="Pending", Reason="", readiness=false. Elapsed: 2.913307ms + Mar 7 03:48:20.177: INFO: Pod "test-container-pod": Phase="Running", Reason="", readiness=true. Elapsed: 2.005753128s + Mar 7 03:48:20.177: INFO: Pod "test-container-pod" satisfied condition "running" + Mar 7 03:48:20.180: INFO: Waiting up to 5m0s for pod "host-test-container-pod" in namespace "pod-network-test-3155" to be "running" + Mar 7 03:48:20.182: INFO: Pod "host-test-container-pod": Phase="Running", Reason="", readiness=true. Elapsed: 2.318356ms + Mar 7 03:48:20.182: INFO: Pod "host-test-container-pod" satisfied condition "running" + Mar 7 03:48:20.184: INFO: Setting MaxTries for pod polling to 39 for networking test based on endpoint count 3 + Mar 7 03:48:20.184: INFO: Going to poll 10.233.132.126 on port 8081 at least 0 times, with a maximum of 39 tries before failing + Mar 7 03:48:20.187: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.233.132.126 8081 | grep -v '^\s*$'] Namespace:pod-network-test-3155 PodName:host-test-container-pod ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} + Mar 7 03:48:20.187: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 + Mar 7 03:48:20.188: INFO: ExecWithOptions: Clientset creation + Mar 7 03:48:20.188: INFO: ExecWithOptions: execute(POST https://10.96.0.1:443/api/v1/namespaces/pod-network-test-3155/pods/host-test-container-pod/exec?command=%2Fbin%2Fsh&command=-c&command=echo+hostName+%7C+nc+-w+1+-u+10.233.132.126+8081+%7C+grep+-v+%27%5E%5Cs%2A%24%27&container=agnhost-container&container=agnhost-container&stderr=true&stdout=true) + Mar 7 03:48:21.268: INFO: Found all 1 expected endpoints: [netserver-0] + Mar 7 03:48:21.268: INFO: Going to poll 10.233.84.159 on port 8081 at least 0 times, with a maximum of 39 tries before failing + Mar 7 03:48:21.275: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.233.84.159 8081 | grep -v '^\s*$'] Namespace:pod-network-test-3155 PodName:host-test-container-pod ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} + Mar 7 03:48:21.275: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 + Mar 7 03:48:21.276: INFO: ExecWithOptions: Clientset creation + Mar 7 03:48:21.276: INFO: ExecWithOptions: execute(POST https://10.96.0.1:443/api/v1/namespaces/pod-network-test-3155/pods/host-test-container-pod/exec?command=%2Fbin%2Fsh&command=-c&command=echo+hostName+%7C+nc+-w+1+-u+10.233.84.159+8081+%7C+grep+-v+%27%5E%5Cs%2A%24%27&container=agnhost-container&container=agnhost-container&stderr=true&stdout=true) + Mar 7 03:48:22.327: INFO: Found all 1 expected endpoints: 
[netserver-1] + Mar 7 03:48:22.327: INFO: Going to poll 10.233.247.47 on port 8081 at least 0 times, with a maximum of 39 tries before failing + Mar 7 03:48:22.330: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.233.247.47 8081 | grep -v '^\s*$'] Namespace:pod-network-test-3155 PodName:host-test-container-pod ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} + Mar 7 03:48:22.330: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 + Mar 7 03:48:22.330: INFO: ExecWithOptions: Clientset creation + Mar 7 03:48:22.330: INFO: ExecWithOptions: execute(POST https://10.96.0.1:443/api/v1/namespaces/pod-network-test-3155/pods/host-test-container-pod/exec?command=%2Fbin%2Fsh&command=-c&command=echo+hostName+%7C+nc+-w+1+-u+10.233.247.47+8081+%7C+grep+-v+%27%5E%5Cs%2A%24%27&container=agnhost-container&container=agnhost-container&stderr=true&stdout=true) + Mar 7 03:48:23.390: INFO: Found all 1 expected endpoints: [netserver-2] + [AfterEach] [sig-network] Networking + test/e2e/framework/framework.go:187 + Mar 7 03:48:23.390: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + STEP: Destroying namespace "pod-network-test-3155" for this suite. 03/07/23 03:48:23.394 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-apps] ControllerRevision [Serial] + should manage the lifecycle of a ControllerRevision [Conformance] + test/e2e/apps/controller_revision.go:124 +[BeforeEach] [sig-apps] ControllerRevision [Serial] + test/e2e/framework/framework.go:186 +STEP: Creating a kubernetes client 03/07/23 03:48:23.402 +Mar 7 03:48:23.402: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 +STEP: Building a namespace api object, basename controllerrevisions 03/07/23 03:48:23.403 +STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 03:48:23.418 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 03:48:23.42 +[BeforeEach] [sig-apps] ControllerRevision [Serial] + test/e2e/apps/controller_revision.go:93 +[It] should manage the lifecycle of a ControllerRevision [Conformance] + test/e2e/apps/controller_revision.go:124 +STEP: Creating DaemonSet "e2e-ws7p9-daemon-set" 03/07/23 03:48:23.441 +STEP: Check that daemon pods launch on every node of the cluster. 
03/07/23 03:48:23.445 +Mar 7 03:48:23.449: INFO: Number of nodes with available pods controlled by daemonset e2e-ws7p9-daemon-set: 0 +Mar 7 03:48:23.449: INFO: Node bootstrap is running 0 daemon pod, expected 1 +Mar 7 03:48:24.456: INFO: Number of nodes with available pods controlled by daemonset e2e-ws7p9-daemon-set: 0 +Mar 7 03:48:24.456: INFO: Node bootstrap is running 0 daemon pod, expected 1 +Mar 7 03:48:25.456: INFO: Number of nodes with available pods controlled by daemonset e2e-ws7p9-daemon-set: 3 +Mar 7 03:48:25.456: INFO: Number of running nodes: 3, number of available pods: 3 in daemonset e2e-ws7p9-daemon-set +STEP: Confirm DaemonSet "e2e-ws7p9-daemon-set" successfully created with "daemonset-name=e2e-ws7p9-daemon-set" label 03/07/23 03:48:25.458 +STEP: Listing all ControllerRevisions with label "daemonset-name=e2e-ws7p9-daemon-set" 03/07/23 03:48:25.465 +Mar 7 03:48:25.469: INFO: Located ControllerRevision: "e2e-ws7p9-daemon-set-656dd7cc7f" +STEP: Patching ControllerRevision "e2e-ws7p9-daemon-set-656dd7cc7f" 03/07/23 03:48:25.471 +Mar 7 03:48:25.476: INFO: e2e-ws7p9-daemon-set-656dd7cc7f has been patched +STEP: Create a new ControllerRevision 03/07/23 03:48:25.476 +Mar 7 03:48:25.480: INFO: Created ControllerRevision: e2e-ws7p9-daemon-set-65b645c7f +STEP: Confirm that there are two ControllerRevisions 03/07/23 03:48:25.48 +Mar 7 03:48:25.480: INFO: Requesting list of ControllerRevisions to confirm quantity +Mar 7 03:48:25.485: INFO: Found 2 ControllerRevisions +STEP: Deleting ControllerRevision "e2e-ws7p9-daemon-set-656dd7cc7f" 03/07/23 03:48:25.485 +STEP: Confirm that there is only one ControllerRevision 03/07/23 03:48:25.489 +Mar 7 03:48:25.489: INFO: Requesting list of ControllerRevisions to confirm quantity +Mar 7 03:48:25.491: INFO: Found 1 ControllerRevisions +STEP: Updating ControllerRevision "e2e-ws7p9-daemon-set-65b645c7f" 03/07/23 03:48:25.492 +Mar 7 03:48:25.498: INFO: e2e-ws7p9-daemon-set-65b645c7f has been updated +STEP: Generate another ControllerRevision by patching the Daemonset 03/07/23 03:48:25.498 +W0307 03:48:25.506934 22 warnings.go:70] unknown field "updateStrategy" +STEP: Confirm that there are two ControllerRevisions 03/07/23 03:48:25.506 +Mar 7 03:48:25.507: INFO: Requesting list of ControllerRevisions to confirm quantity +Mar 7 03:48:26.509: INFO: Requesting list of ControllerRevisions to confirm quantity +Mar 7 03:48:26.512: INFO: Found 2 ControllerRevisions +STEP: Removing a ControllerRevision via 'DeleteCollection' with labelSelector: "e2e-ws7p9-daemon-set-65b645c7f=updated" 03/07/23 03:48:26.512 +STEP: Confirm that there is only one ControllerRevision 03/07/23 03:48:26.518 +Mar 7 03:48:26.518: INFO: Requesting list of ControllerRevisions to confirm quantity +Mar 7 03:48:26.520: INFO: Found 1 ControllerRevisions +Mar 7 03:48:26.522: INFO: ControllerRevision "e2e-ws7p9-daemon-set-557d4f8854" has revision 3 +[AfterEach] [sig-apps] ControllerRevision [Serial] + test/e2e/apps/controller_revision.go:58 +STEP: Deleting DaemonSet "e2e-ws7p9-daemon-set" 03/07/23 03:48:26.524 +STEP: deleting DaemonSet.extensions e2e-ws7p9-daemon-set in namespace controllerrevisions-5141, will wait for the garbage collector to delete the pods 03/07/23 03:48:26.524 +Mar 7 03:48:26.581: INFO: Deleting DaemonSet.extensions e2e-ws7p9-daemon-set took: 4.767721ms +Mar 7 03:48:26.682: INFO: Terminating DaemonSet.extensions e2e-ws7p9-daemon-set pods took: 101.178617ms +Mar 7 03:48:27.887: INFO: Number of nodes with available pods controlled by daemonset e2e-ws7p9-daemon-set: 0 +Mar 7 
03:48:27.887: INFO: Number of running nodes: 0, number of available pods: 0 in daemonset e2e-ws7p9-daemon-set +Mar 7 03:48:27.890: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"resourceVersion":"71758"},"items":null} + +Mar 7 03:48:27.895: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"71758"},"items":null} + +[AfterEach] [sig-apps] ControllerRevision [Serial] + test/e2e/framework/framework.go:187 +Mar 7 03:48:27.906: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "controllerrevisions-5141" for this suite. 03/07/23 03:48:27.91 +{"msg":"PASSED [sig-apps] ControllerRevision [Serial] should manage the lifecycle of a ControllerRevision [Conformance]","completed":271,"skipped":4798,"failed":0} +------------------------------ +• [4.512 seconds] +[sig-apps] ControllerRevision [Serial] +test/e2e/apps/framework.go:23 + should manage the lifecycle of a ControllerRevision [Conformance] + test/e2e/apps/controller_revision.go:124 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-apps] ControllerRevision [Serial] + test/e2e/framework/framework.go:186 + STEP: Creating a kubernetes client 03/07/23 03:48:23.402 + Mar 7 03:48:23.402: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 + STEP: Building a namespace api object, basename controllerrevisions 03/07/23 03:48:23.403 + STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 03:48:23.418 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 03:48:23.42 + [BeforeEach] [sig-apps] ControllerRevision [Serial] + test/e2e/apps/controller_revision.go:93 + [It] should manage the lifecycle of a ControllerRevision [Conformance] + test/e2e/apps/controller_revision.go:124 + STEP: Creating DaemonSet "e2e-ws7p9-daemon-set" 03/07/23 03:48:23.441 + STEP: Check that daemon pods launch on every node of the cluster. 
03/07/23 03:48:23.445 + Mar 7 03:48:23.449: INFO: Number of nodes with available pods controlled by daemonset e2e-ws7p9-daemon-set: 0 + Mar 7 03:48:23.449: INFO: Node bootstrap is running 0 daemon pod, expected 1 + Mar 7 03:48:24.456: INFO: Number of nodes with available pods controlled by daemonset e2e-ws7p9-daemon-set: 0 + Mar 7 03:48:24.456: INFO: Node bootstrap is running 0 daemon pod, expected 1 + Mar 7 03:48:25.456: INFO: Number of nodes with available pods controlled by daemonset e2e-ws7p9-daemon-set: 3 + Mar 7 03:48:25.456: INFO: Number of running nodes: 3, number of available pods: 3 in daemonset e2e-ws7p9-daemon-set + STEP: Confirm DaemonSet "e2e-ws7p9-daemon-set" successfully created with "daemonset-name=e2e-ws7p9-daemon-set" label 03/07/23 03:48:25.458 + STEP: Listing all ControllerRevisions with label "daemonset-name=e2e-ws7p9-daemon-set" 03/07/23 03:48:25.465 + Mar 7 03:48:25.469: INFO: Located ControllerRevision: "e2e-ws7p9-daemon-set-656dd7cc7f" + STEP: Patching ControllerRevision "e2e-ws7p9-daemon-set-656dd7cc7f" 03/07/23 03:48:25.471 + Mar 7 03:48:25.476: INFO: e2e-ws7p9-daemon-set-656dd7cc7f has been patched + STEP: Create a new ControllerRevision 03/07/23 03:48:25.476 + Mar 7 03:48:25.480: INFO: Created ControllerRevision: e2e-ws7p9-daemon-set-65b645c7f + STEP: Confirm that there are two ControllerRevisions 03/07/23 03:48:25.48 + Mar 7 03:48:25.480: INFO: Requesting list of ControllerRevisions to confirm quantity + Mar 7 03:48:25.485: INFO: Found 2 ControllerRevisions + STEP: Deleting ControllerRevision "e2e-ws7p9-daemon-set-656dd7cc7f" 03/07/23 03:48:25.485 + STEP: Confirm that there is only one ControllerRevision 03/07/23 03:48:25.489 + Mar 7 03:48:25.489: INFO: Requesting list of ControllerRevisions to confirm quantity + Mar 7 03:48:25.491: INFO: Found 1 ControllerRevisions + STEP: Updating ControllerRevision "e2e-ws7p9-daemon-set-65b645c7f" 03/07/23 03:48:25.492 + Mar 7 03:48:25.498: INFO: e2e-ws7p9-daemon-set-65b645c7f has been updated + STEP: Generate another ControllerRevision by patching the Daemonset 03/07/23 03:48:25.498 + W0307 03:48:25.506934 22 warnings.go:70] unknown field "updateStrategy" + STEP: Confirm that there are two ControllerRevisions 03/07/23 03:48:25.506 + Mar 7 03:48:25.507: INFO: Requesting list of ControllerRevisions to confirm quantity + Mar 7 03:48:26.509: INFO: Requesting list of ControllerRevisions to confirm quantity + Mar 7 03:48:26.512: INFO: Found 2 ControllerRevisions + STEP: Removing a ControllerRevision via 'DeleteCollection' with labelSelector: "e2e-ws7p9-daemon-set-65b645c7f=updated" 03/07/23 03:48:26.512 + STEP: Confirm that there is only one ControllerRevision 03/07/23 03:48:26.518 + Mar 7 03:48:26.518: INFO: Requesting list of ControllerRevisions to confirm quantity + Mar 7 03:48:26.520: INFO: Found 1 ControllerRevisions + Mar 7 03:48:26.522: INFO: ControllerRevision "e2e-ws7p9-daemon-set-557d4f8854" has revision 3 + [AfterEach] [sig-apps] ControllerRevision [Serial] + test/e2e/apps/controller_revision.go:58 + STEP: Deleting DaemonSet "e2e-ws7p9-daemon-set" 03/07/23 03:48:26.524 + STEP: deleting DaemonSet.extensions e2e-ws7p9-daemon-set in namespace controllerrevisions-5141, will wait for the garbage collector to delete the pods 03/07/23 03:48:26.524 + Mar 7 03:48:26.581: INFO: Deleting DaemonSet.extensions e2e-ws7p9-daemon-set took: 4.767721ms + Mar 7 03:48:26.682: INFO: Terminating DaemonSet.extensions e2e-ws7p9-daemon-set pods took: 101.178617ms + Mar 7 03:48:27.887: INFO: Number of nodes with available pods controlled by 
daemonset e2e-ws7p9-daemon-set: 0 + Mar 7 03:48:27.887: INFO: Number of running nodes: 0, number of available pods: 0 in daemonset e2e-ws7p9-daemon-set + Mar 7 03:48:27.890: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"resourceVersion":"71758"},"items":null} + + Mar 7 03:48:27.895: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"71758"},"items":null} + + [AfterEach] [sig-apps] ControllerRevision [Serial] + test/e2e/framework/framework.go:187 + Mar 7 03:48:27.906: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + STEP: Destroying namespace "controllerrevisions-5141" for this suite. 03/07/23 03:48:27.91 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-node] Secrets + should fail to create secret due to empty secret key [Conformance] + test/e2e/common/node/secrets.go:139 +[BeforeEach] [sig-node] Secrets + test/e2e/framework/framework.go:186 +STEP: Creating a kubernetes client 03/07/23 03:48:27.916 +Mar 7 03:48:27.916: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 +STEP: Building a namespace api object, basename secrets 03/07/23 03:48:27.917 +STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 03:48:27.93 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 03:48:27.932 +[It] should fail to create secret due to empty secret key [Conformance] + test/e2e/common/node/secrets.go:139 +STEP: Creating projection with secret that has name secret-emptykey-test-f64ffefc-2d82-4ec5-b780-eae56cfd375a 03/07/23 03:48:27.934 +[AfterEach] [sig-node] Secrets + test/e2e/framework/framework.go:187 +Mar 7 03:48:27.935: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "secrets-4556" for this suite. 03/07/23 03:48:27.938 +{"msg":"PASSED [sig-node] Secrets should fail to create secret due to empty secret key [Conformance]","completed":272,"skipped":4859,"failed":0} +------------------------------ +• [0.027 seconds] +[sig-node] Secrets +test/e2e/common/node/framework.go:23 + should fail to create secret due to empty secret key [Conformance] + test/e2e/common/node/secrets.go:139 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-node] Secrets + test/e2e/framework/framework.go:186 + STEP: Creating a kubernetes client 03/07/23 03:48:27.916 + Mar 7 03:48:27.916: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 + STEP: Building a namespace api object, basename secrets 03/07/23 03:48:27.917 + STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 03:48:27.93 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 03:48:27.932 + [It] should fail to create secret due to empty secret key [Conformance] + test/e2e/common/node/secrets.go:139 + STEP: Creating projection with secret that has name secret-emptykey-test-f64ffefc-2d82-4ec5-b780-eae56cfd375a 03/07/23 03:48:27.934 + [AfterEach] [sig-node] Secrets + test/e2e/framework/framework.go:187 + Mar 7 03:48:27.935: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + STEP: Destroying namespace "secrets-4556" for this suite. 
03/07/23 03:48:27.938 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSS +------------------------------ +[sig-node] Container Lifecycle Hook when create a pod with lifecycle hook + should execute poststart exec hook properly [NodeConformance] [Conformance] + test/e2e/common/node/lifecycle_hook.go:97 +[BeforeEach] [sig-node] Container Lifecycle Hook + test/e2e/framework/framework.go:186 +STEP: Creating a kubernetes client 03/07/23 03:48:27.943 +Mar 7 03:48:27.944: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 +STEP: Building a namespace api object, basename container-lifecycle-hook 03/07/23 03:48:27.944 +STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 03:48:27.959 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 03:48:27.961 +[BeforeEach] when create a pod with lifecycle hook + test/e2e/common/node/lifecycle_hook.go:55 +STEP: create the container to handle the HTTPGet hook request. 03/07/23 03:48:27.966 +Mar 7 03:48:27.972: INFO: Waiting up to 5m0s for pod "pod-handle-http-request" in namespace "container-lifecycle-hook-2701" to be "running and ready" +Mar 7 03:48:27.974: INFO: Pod "pod-handle-http-request": Phase="Pending", Reason="", readiness=false. Elapsed: 2.37673ms +Mar 7 03:48:27.974: INFO: The phase of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true) +Mar 7 03:48:29.977: INFO: Pod "pod-handle-http-request": Phase="Running", Reason="", readiness=true. Elapsed: 2.005097689s +Mar 7 03:48:29.977: INFO: The phase of Pod pod-handle-http-request is Running (Ready = true) +Mar 7 03:48:29.977: INFO: Pod "pod-handle-http-request" satisfied condition "running and ready" +[It] should execute poststart exec hook properly [NodeConformance] [Conformance] + test/e2e/common/node/lifecycle_hook.go:97 +STEP: create the pod with lifecycle hook 03/07/23 03:48:29.979 +Mar 7 03:48:29.982: INFO: Waiting up to 5m0s for pod "pod-with-poststart-exec-hook" in namespace "container-lifecycle-hook-2701" to be "running and ready" +Mar 7 03:48:29.985: INFO: Pod "pod-with-poststart-exec-hook": Phase="Pending", Reason="", readiness=false. Elapsed: 2.142538ms +Mar 7 03:48:29.985: INFO: The phase of Pod pod-with-poststart-exec-hook is Pending, waiting for it to be Running (with Ready = true) +Mar 7 03:48:32.003: INFO: Pod "pod-with-poststart-exec-hook": Phase="Running", Reason="", readiness=true. Elapsed: 2.020691753s +Mar 7 03:48:32.003: INFO: The phase of Pod pod-with-poststart-exec-hook is Running (Ready = true) +Mar 7 03:48:32.003: INFO: Pod "pod-with-poststart-exec-hook" satisfied condition "running and ready" +STEP: check poststart hook 03/07/23 03:48:32.025 +STEP: delete the pod with lifecycle hook 03/07/23 03:48:32.071 +Mar 7 03:48:32.080: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear +Mar 7 03:48:32.083: INFO: Pod pod-with-poststart-exec-hook still exists +Mar 7 03:48:34.086: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear +Mar 7 03:48:34.089: INFO: Pod pod-with-poststart-exec-hook no longer exists +[AfterEach] [sig-node] Container Lifecycle Hook + test/e2e/framework/framework.go:187 +Mar 7 03:48:34.089: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "container-lifecycle-hook-2701" for this suite. 
03/07/23 03:48:34.093 +{"msg":"PASSED [sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance]","completed":273,"skipped":4868,"failed":0} +------------------------------ +• [SLOW TEST] [6.154 seconds] +[sig-node] Container Lifecycle Hook +test/e2e/common/node/framework.go:23 + when create a pod with lifecycle hook + test/e2e/common/node/lifecycle_hook.go:46 + should execute poststart exec hook properly [NodeConformance] [Conformance] + test/e2e/common/node/lifecycle_hook.go:97 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-node] Container Lifecycle Hook + test/e2e/framework/framework.go:186 + STEP: Creating a kubernetes client 03/07/23 03:48:27.943 + Mar 7 03:48:27.944: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 + STEP: Building a namespace api object, basename container-lifecycle-hook 03/07/23 03:48:27.944 + STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 03:48:27.959 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 03:48:27.961 + [BeforeEach] when create a pod with lifecycle hook + test/e2e/common/node/lifecycle_hook.go:55 + STEP: create the container to handle the HTTPGet hook request. 03/07/23 03:48:27.966 + Mar 7 03:48:27.972: INFO: Waiting up to 5m0s for pod "pod-handle-http-request" in namespace "container-lifecycle-hook-2701" to be "running and ready" + Mar 7 03:48:27.974: INFO: Pod "pod-handle-http-request": Phase="Pending", Reason="", readiness=false. Elapsed: 2.37673ms + Mar 7 03:48:27.974: INFO: The phase of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true) + Mar 7 03:48:29.977: INFO: Pod "pod-handle-http-request": Phase="Running", Reason="", readiness=true. Elapsed: 2.005097689s + Mar 7 03:48:29.977: INFO: The phase of Pod pod-handle-http-request is Running (Ready = true) + Mar 7 03:48:29.977: INFO: Pod "pod-handle-http-request" satisfied condition "running and ready" + [It] should execute poststart exec hook properly [NodeConformance] [Conformance] + test/e2e/common/node/lifecycle_hook.go:97 + STEP: create the pod with lifecycle hook 03/07/23 03:48:29.979 + Mar 7 03:48:29.982: INFO: Waiting up to 5m0s for pod "pod-with-poststart-exec-hook" in namespace "container-lifecycle-hook-2701" to be "running and ready" + Mar 7 03:48:29.985: INFO: Pod "pod-with-poststart-exec-hook": Phase="Pending", Reason="", readiness=false. Elapsed: 2.142538ms + Mar 7 03:48:29.985: INFO: The phase of Pod pod-with-poststart-exec-hook is Pending, waiting for it to be Running (with Ready = true) + Mar 7 03:48:32.003: INFO: Pod "pod-with-poststart-exec-hook": Phase="Running", Reason="", readiness=true. 
Elapsed: 2.020691753s + Mar 7 03:48:32.003: INFO: The phase of Pod pod-with-poststart-exec-hook is Running (Ready = true) + Mar 7 03:48:32.003: INFO: Pod "pod-with-poststart-exec-hook" satisfied condition "running and ready" + STEP: check poststart hook 03/07/23 03:48:32.025 + STEP: delete the pod with lifecycle hook 03/07/23 03:48:32.071 + Mar 7 03:48:32.080: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear + Mar 7 03:48:32.083: INFO: Pod pod-with-poststart-exec-hook still exists + Mar 7 03:48:34.086: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear + Mar 7 03:48:34.089: INFO: Pod pod-with-poststart-exec-hook no longer exists + [AfterEach] [sig-node] Container Lifecycle Hook + test/e2e/framework/framework.go:187 + Mar 7 03:48:34.089: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + STEP: Destroying namespace "container-lifecycle-hook-2701" for this suite. 03/07/23 03:48:34.093 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSSSSSSSSS +------------------------------ +[sig-storage] ConfigMap + should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/common/storage/configmap_volume.go:98 +[BeforeEach] [sig-storage] ConfigMap + test/e2e/framework/framework.go:186 +STEP: Creating a kubernetes client 03/07/23 03:48:34.098 +Mar 7 03:48:34.098: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 +STEP: Building a namespace api object, basename configmap 03/07/23 03:48:34.099 +STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 03:48:34.111 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 03:48:34.114 +[It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/common/storage/configmap_volume.go:98 +STEP: Creating configMap with name configmap-test-volume-map-611514ec-43fd-47cf-8319-f25a69c8f033 03/07/23 03:48:34.116 +STEP: Creating a pod to test consume configMaps 03/07/23 03:48:34.12 +Mar 7 03:48:34.127: INFO: Waiting up to 5m0s for pod "pod-configmaps-af85e2d3-6474-4cea-b4e0-71c259102698" in namespace "configmap-9715" to be "Succeeded or Failed" +Mar 7 03:48:34.130: INFO: Pod "pod-configmaps-af85e2d3-6474-4cea-b4e0-71c259102698": Phase="Pending", Reason="", readiness=false. Elapsed: 2.796293ms +Mar 7 03:48:36.134: INFO: Pod "pod-configmaps-af85e2d3-6474-4cea-b4e0-71c259102698": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006944759s +Mar 7 03:48:38.134: INFO: Pod "pod-configmaps-af85e2d3-6474-4cea-b4e0-71c259102698": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.006600517s +STEP: Saw pod success 03/07/23 03:48:38.134 +Mar 7 03:48:38.134: INFO: Pod "pod-configmaps-af85e2d3-6474-4cea-b4e0-71c259102698" satisfied condition "Succeeded or Failed" +Mar 7 03:48:38.136: INFO: Trying to get logs from node node-2 pod pod-configmaps-af85e2d3-6474-4cea-b4e0-71c259102698 container agnhost-container: +STEP: delete the pod 03/07/23 03:48:38.141 +Mar 7 03:48:38.150: INFO: Waiting for pod pod-configmaps-af85e2d3-6474-4cea-b4e0-71c259102698 to disappear +Mar 7 03:48:38.152: INFO: Pod pod-configmaps-af85e2d3-6474-4cea-b4e0-71c259102698 no longer exists +[AfterEach] [sig-storage] ConfigMap + test/e2e/framework/framework.go:187 +Mar 7 03:48:38.152: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "configmap-9715" for this suite. 
03/07/23 03:48:38.155 +{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","completed":274,"skipped":4886,"failed":0} +------------------------------ +• [4.062 seconds] +[sig-storage] ConfigMap +test/e2e/common/storage/framework.go:23 + should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/common/storage/configmap_volume.go:98 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-storage] ConfigMap + test/e2e/framework/framework.go:186 + STEP: Creating a kubernetes client 03/07/23 03:48:34.098 + Mar 7 03:48:34.098: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 + STEP: Building a namespace api object, basename configmap 03/07/23 03:48:34.099 + STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 03:48:34.111 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 03:48:34.114 + [It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/common/storage/configmap_volume.go:98 + STEP: Creating configMap with name configmap-test-volume-map-611514ec-43fd-47cf-8319-f25a69c8f033 03/07/23 03:48:34.116 + STEP: Creating a pod to test consume configMaps 03/07/23 03:48:34.12 + Mar 7 03:48:34.127: INFO: Waiting up to 5m0s for pod "pod-configmaps-af85e2d3-6474-4cea-b4e0-71c259102698" in namespace "configmap-9715" to be "Succeeded or Failed" + Mar 7 03:48:34.130: INFO: Pod "pod-configmaps-af85e2d3-6474-4cea-b4e0-71c259102698": Phase="Pending", Reason="", readiness=false. Elapsed: 2.796293ms + Mar 7 03:48:36.134: INFO: Pod "pod-configmaps-af85e2d3-6474-4cea-b4e0-71c259102698": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006944759s + Mar 7 03:48:38.134: INFO: Pod "pod-configmaps-af85e2d3-6474-4cea-b4e0-71c259102698": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.006600517s + STEP: Saw pod success 03/07/23 03:48:38.134 + Mar 7 03:48:38.134: INFO: Pod "pod-configmaps-af85e2d3-6474-4cea-b4e0-71c259102698" satisfied condition "Succeeded or Failed" + Mar 7 03:48:38.136: INFO: Trying to get logs from node node-2 pod pod-configmaps-af85e2d3-6474-4cea-b4e0-71c259102698 container agnhost-container: + STEP: delete the pod 03/07/23 03:48:38.141 + Mar 7 03:48:38.150: INFO: Waiting for pod pod-configmaps-af85e2d3-6474-4cea-b4e0-71c259102698 to disappear + Mar 7 03:48:38.152: INFO: Pod pod-configmaps-af85e2d3-6474-4cea-b4e0-71c259102698 no longer exists + [AfterEach] [sig-storage] ConfigMap + test/e2e/framework/framework.go:187 + Mar 7 03:48:38.152: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + STEP: Destroying namespace "configmap-9715" for this suite. 
03/07/23 03:48:38.155 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSSSSSSSSS +------------------------------ +[sig-node] Security Context + should support pod.Spec.SecurityContext.RunAsUser And pod.Spec.SecurityContext.RunAsGroup [LinuxOnly] [Conformance] + test/e2e/node/security_context.go:97 +[BeforeEach] [sig-node] Security Context + test/e2e/framework/framework.go:186 +STEP: Creating a kubernetes client 03/07/23 03:48:38.161 +Mar 7 03:48:38.162: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 +STEP: Building a namespace api object, basename security-context 03/07/23 03:48:38.163 +STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 03:48:38.177 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 03:48:38.178 +[It] should support pod.Spec.SecurityContext.RunAsUser And pod.Spec.SecurityContext.RunAsGroup [LinuxOnly] [Conformance] + test/e2e/node/security_context.go:97 +STEP: Creating a pod to test pod.Spec.SecurityContext.RunAsUser 03/07/23 03:48:38.18 +Mar 7 03:48:38.186: INFO: Waiting up to 5m0s for pod "security-context-1648f766-c04f-4905-8f99-1d30ee9564c1" in namespace "security-context-9064" to be "Succeeded or Failed" +Mar 7 03:48:38.189: INFO: Pod "security-context-1648f766-c04f-4905-8f99-1d30ee9564c1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.855816ms +Mar 7 03:48:40.192: INFO: Pod "security-context-1648f766-c04f-4905-8f99-1d30ee9564c1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006347654s +Mar 7 03:48:42.193: INFO: Pod "security-context-1648f766-c04f-4905-8f99-1d30ee9564c1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.006587891s +STEP: Saw pod success 03/07/23 03:48:42.193 +Mar 7 03:48:42.193: INFO: Pod "security-context-1648f766-c04f-4905-8f99-1d30ee9564c1" satisfied condition "Succeeded or Failed" +Mar 7 03:48:42.196: INFO: Trying to get logs from node node-2 pod security-context-1648f766-c04f-4905-8f99-1d30ee9564c1 container test-container: +STEP: delete the pod 03/07/23 03:48:42.201 +Mar 7 03:48:42.208: INFO: Waiting for pod security-context-1648f766-c04f-4905-8f99-1d30ee9564c1 to disappear +Mar 7 03:48:42.210: INFO: Pod security-context-1648f766-c04f-4905-8f99-1d30ee9564c1 no longer exists +[AfterEach] [sig-node] Security Context + test/e2e/framework/framework.go:187 +Mar 7 03:48:42.210: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "security-context-9064" for this suite. 
03/07/23 03:48:42.214 +{"msg":"PASSED [sig-node] Security Context should support pod.Spec.SecurityContext.RunAsUser And pod.Spec.SecurityContext.RunAsGroup [LinuxOnly] [Conformance]","completed":275,"skipped":4904,"failed":0} +------------------------------ +• [4.057 seconds] +[sig-node] Security Context +test/e2e/node/framework.go:23 + should support pod.Spec.SecurityContext.RunAsUser And pod.Spec.SecurityContext.RunAsGroup [LinuxOnly] [Conformance] + test/e2e/node/security_context.go:97 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-node] Security Context + test/e2e/framework/framework.go:186 + STEP: Creating a kubernetes client 03/07/23 03:48:38.161 + Mar 7 03:48:38.162: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 + STEP: Building a namespace api object, basename security-context 03/07/23 03:48:38.163 + STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 03:48:38.177 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 03:48:38.178 + [It] should support pod.Spec.SecurityContext.RunAsUser And pod.Spec.SecurityContext.RunAsGroup [LinuxOnly] [Conformance] + test/e2e/node/security_context.go:97 + STEP: Creating a pod to test pod.Spec.SecurityContext.RunAsUser 03/07/23 03:48:38.18 + Mar 7 03:48:38.186: INFO: Waiting up to 5m0s for pod "security-context-1648f766-c04f-4905-8f99-1d30ee9564c1" in namespace "security-context-9064" to be "Succeeded or Failed" + Mar 7 03:48:38.189: INFO: Pod "security-context-1648f766-c04f-4905-8f99-1d30ee9564c1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.855816ms + Mar 7 03:48:40.192: INFO: Pod "security-context-1648f766-c04f-4905-8f99-1d30ee9564c1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006347654s + Mar 7 03:48:42.193: INFO: Pod "security-context-1648f766-c04f-4905-8f99-1d30ee9564c1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.006587891s + STEP: Saw pod success 03/07/23 03:48:42.193 + Mar 7 03:48:42.193: INFO: Pod "security-context-1648f766-c04f-4905-8f99-1d30ee9564c1" satisfied condition "Succeeded or Failed" + Mar 7 03:48:42.196: INFO: Trying to get logs from node node-2 pod security-context-1648f766-c04f-4905-8f99-1d30ee9564c1 container test-container: + STEP: delete the pod 03/07/23 03:48:42.201 + Mar 7 03:48:42.208: INFO: Waiting for pod security-context-1648f766-c04f-4905-8f99-1d30ee9564c1 to disappear + Mar 7 03:48:42.210: INFO: Pod security-context-1648f766-c04f-4905-8f99-1d30ee9564c1 no longer exists + [AfterEach] [sig-node] Security Context + test/e2e/framework/framework.go:187 + Mar 7 03:48:42.210: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + STEP: Destroying namespace "security-context-9064" for this suite. 
03/07/23 03:48:42.214 + << End Captured GinkgoWriter Output +------------------------------ +SSSSS +------------------------------ +[sig-node] Probing container + with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] + test/e2e/common/node/container_probe.go:68 +[BeforeEach] [sig-node] Probing container + test/e2e/framework/framework.go:186 +STEP: Creating a kubernetes client 03/07/23 03:48:42.219 +Mar 7 03:48:42.219: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 +STEP: Building a namespace api object, basename container-probe 03/07/23 03:48:42.22 +STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 03:48:42.232 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 03:48:42.234 +[BeforeEach] [sig-node] Probing container + test/e2e/common/node/container_probe.go:59 +[It] with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] + test/e2e/common/node/container_probe.go:68 +Mar 7 03:48:42.241: INFO: Waiting up to 5m0s for pod "test-webserver-eb3d281b-c23c-4813-aba7-2a334f47a901" in namespace "container-probe-9624" to be "running and ready" +Mar 7 03:48:42.244: INFO: Pod "test-webserver-eb3d281b-c23c-4813-aba7-2a334f47a901": Phase="Pending", Reason="", readiness=false. Elapsed: 2.467968ms +Mar 7 03:48:42.244: INFO: The phase of Pod test-webserver-eb3d281b-c23c-4813-aba7-2a334f47a901 is Pending, waiting for it to be Running (with Ready = true) +Mar 7 03:48:44.248: INFO: Pod "test-webserver-eb3d281b-c23c-4813-aba7-2a334f47a901": Phase="Running", Reason="", readiness=false. Elapsed: 2.006469704s +Mar 7 03:48:44.248: INFO: The phase of Pod test-webserver-eb3d281b-c23c-4813-aba7-2a334f47a901 is Running (Ready = false) +Mar 7 03:48:46.249: INFO: Pod "test-webserver-eb3d281b-c23c-4813-aba7-2a334f47a901": Phase="Running", Reason="", readiness=false. Elapsed: 4.007310511s +Mar 7 03:48:46.249: INFO: The phase of Pod test-webserver-eb3d281b-c23c-4813-aba7-2a334f47a901 is Running (Ready = false) +Mar 7 03:48:48.248: INFO: Pod "test-webserver-eb3d281b-c23c-4813-aba7-2a334f47a901": Phase="Running", Reason="", readiness=false. Elapsed: 6.006733353s +Mar 7 03:48:48.248: INFO: The phase of Pod test-webserver-eb3d281b-c23c-4813-aba7-2a334f47a901 is Running (Ready = false) +Mar 7 03:48:50.248: INFO: Pod "test-webserver-eb3d281b-c23c-4813-aba7-2a334f47a901": Phase="Running", Reason="", readiness=false. Elapsed: 8.006486604s +Mar 7 03:48:50.248: INFO: The phase of Pod test-webserver-eb3d281b-c23c-4813-aba7-2a334f47a901 is Running (Ready = false) +Mar 7 03:48:52.247: INFO: Pod "test-webserver-eb3d281b-c23c-4813-aba7-2a334f47a901": Phase="Running", Reason="", readiness=false. Elapsed: 10.005288491s +Mar 7 03:48:52.247: INFO: The phase of Pod test-webserver-eb3d281b-c23c-4813-aba7-2a334f47a901 is Running (Ready = false) +Mar 7 03:48:54.247: INFO: Pod "test-webserver-eb3d281b-c23c-4813-aba7-2a334f47a901": Phase="Running", Reason="", readiness=false. Elapsed: 12.005863996s +Mar 7 03:48:54.247: INFO: The phase of Pod test-webserver-eb3d281b-c23c-4813-aba7-2a334f47a901 is Running (Ready = false) +Mar 7 03:48:56.248: INFO: Pod "test-webserver-eb3d281b-c23c-4813-aba7-2a334f47a901": Phase="Running", Reason="", readiness=false. 
Elapsed: 14.006173645s +Mar 7 03:48:56.248: INFO: The phase of Pod test-webserver-eb3d281b-c23c-4813-aba7-2a334f47a901 is Running (Ready = false) +Mar 7 03:48:58.249: INFO: Pod "test-webserver-eb3d281b-c23c-4813-aba7-2a334f47a901": Phase="Running", Reason="", readiness=false. Elapsed: 16.007202511s +Mar 7 03:48:58.249: INFO: The phase of Pod test-webserver-eb3d281b-c23c-4813-aba7-2a334f47a901 is Running (Ready = false) +Mar 7 03:49:00.248: INFO: Pod "test-webserver-eb3d281b-c23c-4813-aba7-2a334f47a901": Phase="Running", Reason="", readiness=false. Elapsed: 18.006277577s +Mar 7 03:49:00.248: INFO: The phase of Pod test-webserver-eb3d281b-c23c-4813-aba7-2a334f47a901 is Running (Ready = false) +Mar 7 03:49:02.250: INFO: Pod "test-webserver-eb3d281b-c23c-4813-aba7-2a334f47a901": Phase="Running", Reason="", readiness=false. Elapsed: 20.008553623s +Mar 7 03:49:02.250: INFO: The phase of Pod test-webserver-eb3d281b-c23c-4813-aba7-2a334f47a901 is Running (Ready = false) +Mar 7 03:49:04.248: INFO: Pod "test-webserver-eb3d281b-c23c-4813-aba7-2a334f47a901": Phase="Running", Reason="", readiness=true. Elapsed: 22.006140323s +Mar 7 03:49:04.248: INFO: The phase of Pod test-webserver-eb3d281b-c23c-4813-aba7-2a334f47a901 is Running (Ready = true) +Mar 7 03:49:04.248: INFO: Pod "test-webserver-eb3d281b-c23c-4813-aba7-2a334f47a901" satisfied condition "running and ready" +Mar 7 03:49:04.250: INFO: Container started at 2023-03-07 03:48:43 +0000 UTC, pod became ready at 2023-03-07 03:49:02 +0000 UTC +[AfterEach] [sig-node] Probing container + test/e2e/framework/framework.go:187 +Mar 7 03:49:04.250: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "container-probe-9624" for this suite. 03/07/23 03:49:04.253 +{"msg":"PASSED [sig-node] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]","completed":276,"skipped":4909,"failed":0} +------------------------------ +• [SLOW TEST] [22.038 seconds] +[sig-node] Probing container +test/e2e/common/node/framework.go:23 + with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] + test/e2e/common/node/container_probe.go:68 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-node] Probing container + test/e2e/framework/framework.go:186 + STEP: Creating a kubernetes client 03/07/23 03:48:42.219 + Mar 7 03:48:42.219: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 + STEP: Building a namespace api object, basename container-probe 03/07/23 03:48:42.22 + STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 03:48:42.232 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 03:48:42.234 + [BeforeEach] [sig-node] Probing container + test/e2e/common/node/container_probe.go:59 + [It] with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] + test/e2e/common/node/container_probe.go:68 + Mar 7 03:48:42.241: INFO: Waiting up to 5m0s for pod "test-webserver-eb3d281b-c23c-4813-aba7-2a334f47a901" in namespace "container-probe-9624" to be "running and ready" + Mar 7 03:48:42.244: INFO: Pod "test-webserver-eb3d281b-c23c-4813-aba7-2a334f47a901": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.467968ms + Mar 7 03:48:42.244: INFO: The phase of Pod test-webserver-eb3d281b-c23c-4813-aba7-2a334f47a901 is Pending, waiting for it to be Running (with Ready = true) + Mar 7 03:48:44.248: INFO: Pod "test-webserver-eb3d281b-c23c-4813-aba7-2a334f47a901": Phase="Running", Reason="", readiness=false. Elapsed: 2.006469704s + Mar 7 03:48:44.248: INFO: The phase of Pod test-webserver-eb3d281b-c23c-4813-aba7-2a334f47a901 is Running (Ready = false) + Mar 7 03:48:46.249: INFO: Pod "test-webserver-eb3d281b-c23c-4813-aba7-2a334f47a901": Phase="Running", Reason="", readiness=false. Elapsed: 4.007310511s + Mar 7 03:48:46.249: INFO: The phase of Pod test-webserver-eb3d281b-c23c-4813-aba7-2a334f47a901 is Running (Ready = false) + Mar 7 03:48:48.248: INFO: Pod "test-webserver-eb3d281b-c23c-4813-aba7-2a334f47a901": Phase="Running", Reason="", readiness=false. Elapsed: 6.006733353s + Mar 7 03:48:48.248: INFO: The phase of Pod test-webserver-eb3d281b-c23c-4813-aba7-2a334f47a901 is Running (Ready = false) + Mar 7 03:48:50.248: INFO: Pod "test-webserver-eb3d281b-c23c-4813-aba7-2a334f47a901": Phase="Running", Reason="", readiness=false. Elapsed: 8.006486604s + Mar 7 03:48:50.248: INFO: The phase of Pod test-webserver-eb3d281b-c23c-4813-aba7-2a334f47a901 is Running (Ready = false) + Mar 7 03:48:52.247: INFO: Pod "test-webserver-eb3d281b-c23c-4813-aba7-2a334f47a901": Phase="Running", Reason="", readiness=false. Elapsed: 10.005288491s + Mar 7 03:48:52.247: INFO: The phase of Pod test-webserver-eb3d281b-c23c-4813-aba7-2a334f47a901 is Running (Ready = false) + Mar 7 03:48:54.247: INFO: Pod "test-webserver-eb3d281b-c23c-4813-aba7-2a334f47a901": Phase="Running", Reason="", readiness=false. Elapsed: 12.005863996s + Mar 7 03:48:54.247: INFO: The phase of Pod test-webserver-eb3d281b-c23c-4813-aba7-2a334f47a901 is Running (Ready = false) + Mar 7 03:48:56.248: INFO: Pod "test-webserver-eb3d281b-c23c-4813-aba7-2a334f47a901": Phase="Running", Reason="", readiness=false. Elapsed: 14.006173645s + Mar 7 03:48:56.248: INFO: The phase of Pod test-webserver-eb3d281b-c23c-4813-aba7-2a334f47a901 is Running (Ready = false) + Mar 7 03:48:58.249: INFO: Pod "test-webserver-eb3d281b-c23c-4813-aba7-2a334f47a901": Phase="Running", Reason="", readiness=false. Elapsed: 16.007202511s + Mar 7 03:48:58.249: INFO: The phase of Pod test-webserver-eb3d281b-c23c-4813-aba7-2a334f47a901 is Running (Ready = false) + Mar 7 03:49:00.248: INFO: Pod "test-webserver-eb3d281b-c23c-4813-aba7-2a334f47a901": Phase="Running", Reason="", readiness=false. Elapsed: 18.006277577s + Mar 7 03:49:00.248: INFO: The phase of Pod test-webserver-eb3d281b-c23c-4813-aba7-2a334f47a901 is Running (Ready = false) + Mar 7 03:49:02.250: INFO: Pod "test-webserver-eb3d281b-c23c-4813-aba7-2a334f47a901": Phase="Running", Reason="", readiness=false. Elapsed: 20.008553623s + Mar 7 03:49:02.250: INFO: The phase of Pod test-webserver-eb3d281b-c23c-4813-aba7-2a334f47a901 is Running (Ready = false) + Mar 7 03:49:04.248: INFO: Pod "test-webserver-eb3d281b-c23c-4813-aba7-2a334f47a901": Phase="Running", Reason="", readiness=true. 
Elapsed: 22.006140323s + Mar 7 03:49:04.248: INFO: The phase of Pod test-webserver-eb3d281b-c23c-4813-aba7-2a334f47a901 is Running (Ready = true) + Mar 7 03:49:04.248: INFO: Pod "test-webserver-eb3d281b-c23c-4813-aba7-2a334f47a901" satisfied condition "running and ready" + Mar 7 03:49:04.250: INFO: Container started at 2023-03-07 03:48:43 +0000 UTC, pod became ready at 2023-03-07 03:49:02 +0000 UTC + [AfterEach] [sig-node] Probing container + test/e2e/framework/framework.go:187 + Mar 7 03:49:04.250: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + STEP: Destroying namespace "container-probe-9624" for this suite. 03/07/23 03:49:04.253 + << End Captured GinkgoWriter Output +------------------------------ +[sig-apps] ReplicationController + should adopt matching pods on creation [Conformance] + test/e2e/apps/rc.go:91 +[BeforeEach] [sig-apps] ReplicationController + test/e2e/framework/framework.go:186 +STEP: Creating a kubernetes client 03/07/23 03:49:04.258 +Mar 7 03:49:04.259: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 +STEP: Building a namespace api object, basename replication-controller 03/07/23 03:49:04.259 +STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 03:49:04.273 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 03:49:04.275 +[BeforeEach] [sig-apps] ReplicationController + test/e2e/apps/rc.go:56 +[It] should adopt matching pods on creation [Conformance] + test/e2e/apps/rc.go:91 +STEP: Given a Pod with a 'name' label pod-adoption is created 03/07/23 03:49:04.276 +Mar 7 03:49:04.282: INFO: Waiting up to 5m0s for pod "pod-adoption" in namespace "replication-controller-7222" to be "running and ready" +Mar 7 03:49:04.286: INFO: Pod "pod-adoption": Phase="Pending", Reason="", readiness=false. Elapsed: 3.727042ms +Mar 7 03:49:04.286: INFO: The phase of Pod pod-adoption is Pending, waiting for it to be Running (with Ready = true) +Mar 7 03:49:06.290: INFO: Pod "pod-adoption": Phase="Running", Reason="", readiness=true. Elapsed: 2.007825639s +Mar 7 03:49:06.290: INFO: The phase of Pod pod-adoption is Running (Ready = true) +Mar 7 03:49:06.290: INFO: Pod "pod-adoption" satisfied condition "running and ready" +STEP: When a replication controller with a matching selector is created 03/07/23 03:49:06.292 +STEP: Then the orphan pod is adopted 03/07/23 03:49:06.296 +[AfterEach] [sig-apps] ReplicationController + test/e2e/framework/framework.go:187 +Mar 7 03:49:07.303: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "replication-controller-7222" for this suite. 
03/07/23 03:49:07.306 +{"msg":"PASSED [sig-apps] ReplicationController should adopt matching pods on creation [Conformance]","completed":277,"skipped":4909,"failed":0} +------------------------------ +• [3.053 seconds] +[sig-apps] ReplicationController +test/e2e/apps/framework.go:23 + should adopt matching pods on creation [Conformance] + test/e2e/apps/rc.go:91 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-apps] ReplicationController + test/e2e/framework/framework.go:186 + STEP: Creating a kubernetes client 03/07/23 03:49:04.258 + Mar 7 03:49:04.259: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 + STEP: Building a namespace api object, basename replication-controller 03/07/23 03:49:04.259 + STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 03:49:04.273 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 03:49:04.275 + [BeforeEach] [sig-apps] ReplicationController + test/e2e/apps/rc.go:56 + [It] should adopt matching pods on creation [Conformance] + test/e2e/apps/rc.go:91 + STEP: Given a Pod with a 'name' label pod-adoption is created 03/07/23 03:49:04.276 + Mar 7 03:49:04.282: INFO: Waiting up to 5m0s for pod "pod-adoption" in namespace "replication-controller-7222" to be "running and ready" + Mar 7 03:49:04.286: INFO: Pod "pod-adoption": Phase="Pending", Reason="", readiness=false. Elapsed: 3.727042ms + Mar 7 03:49:04.286: INFO: The phase of Pod pod-adoption is Pending, waiting for it to be Running (with Ready = true) + Mar 7 03:49:06.290: INFO: Pod "pod-adoption": Phase="Running", Reason="", readiness=true. Elapsed: 2.007825639s + Mar 7 03:49:06.290: INFO: The phase of Pod pod-adoption is Running (Ready = true) + Mar 7 03:49:06.290: INFO: Pod "pod-adoption" satisfied condition "running and ready" + STEP: When a replication controller with a matching selector is created 03/07/23 03:49:06.292 + STEP: Then the orphan pod is adopted 03/07/23 03:49:06.296 + [AfterEach] [sig-apps] ReplicationController + test/e2e/framework/framework.go:187 + Mar 7 03:49:07.303: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + STEP: Destroying namespace "replication-controller-7222" for this suite. 
03/07/23 03:49:07.306 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSS +------------------------------ +[sig-auth] ServiceAccounts + should guarantee kube-root-ca.crt exist in any namespace [Conformance] + test/e2e/auth/service_accounts.go:739 +[BeforeEach] [sig-auth] ServiceAccounts + test/e2e/framework/framework.go:186 +STEP: Creating a kubernetes client 03/07/23 03:49:07.311 +Mar 7 03:49:07.311: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 +STEP: Building a namespace api object, basename svcaccounts 03/07/23 03:49:07.312 +STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 03:49:07.325 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 03:49:07.327 +[It] should guarantee kube-root-ca.crt exist in any namespace [Conformance] + test/e2e/auth/service_accounts.go:739 +Mar 7 03:49:07.332: INFO: Got root ca configmap in namespace "svcaccounts-7718" +Mar 7 03:49:07.335: INFO: Deleted root ca configmap in namespace "svcaccounts-7718" +STEP: waiting for a new root ca configmap created 03/07/23 03:49:07.836 +Mar 7 03:49:07.838: INFO: Recreated root ca configmap in namespace "svcaccounts-7718" +Mar 7 03:49:07.842: INFO: Updated root ca configmap in namespace "svcaccounts-7718" +STEP: waiting for the root ca configmap reconciled 03/07/23 03:49:08.342 +Mar 7 03:49:08.346: INFO: Reconciled root ca configmap in namespace "svcaccounts-7718" +[AfterEach] [sig-auth] ServiceAccounts + test/e2e/framework/framework.go:187 +Mar 7 03:49:08.346: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "svcaccounts-7718" for this suite. 03/07/23 03:49:08.349 +{"msg":"PASSED [sig-auth] ServiceAccounts should guarantee kube-root-ca.crt exist in any namespace [Conformance]","completed":278,"skipped":4919,"failed":0} +------------------------------ +• [1.063 seconds] +[sig-auth] ServiceAccounts +test/e2e/auth/framework.go:23 + should guarantee kube-root-ca.crt exist in any namespace [Conformance] + test/e2e/auth/service_accounts.go:739 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-auth] ServiceAccounts + test/e2e/framework/framework.go:186 + STEP: Creating a kubernetes client 03/07/23 03:49:07.311 + Mar 7 03:49:07.311: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 + STEP: Building a namespace api object, basename svcaccounts 03/07/23 03:49:07.312 + STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 03:49:07.325 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 03:49:07.327 + [It] should guarantee kube-root-ca.crt exist in any namespace [Conformance] + test/e2e/auth/service_accounts.go:739 + Mar 7 03:49:07.332: INFO: Got root ca configmap in namespace "svcaccounts-7718" + Mar 7 03:49:07.335: INFO: Deleted root ca configmap in namespace "svcaccounts-7718" + STEP: waiting for a new root ca configmap created 03/07/23 03:49:07.836 + Mar 7 03:49:07.838: INFO: Recreated root ca configmap in namespace "svcaccounts-7718" + Mar 7 03:49:07.842: INFO: Updated root ca configmap in namespace "svcaccounts-7718" + STEP: waiting for the root ca configmap reconciled 03/07/23 03:49:08.342 + Mar 7 03:49:08.346: INFO: Reconciled root ca configmap in namespace "svcaccounts-7718" + [AfterEach] [sig-auth] ServiceAccounts + test/e2e/framework/framework.go:187 + Mar 7 03:49:08.346: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + STEP: Destroying namespace "svcaccounts-7718" for this suite. 
03/07/23 03:49:08.349 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSS +------------------------------ +[sig-api-machinery] ResourceQuota + should create a ResourceQuota and capture the life of a replication controller. [Conformance] + test/e2e/apimachinery/resource_quota.go:382 +[BeforeEach] [sig-api-machinery] ResourceQuota + test/e2e/framework/framework.go:186 +STEP: Creating a kubernetes client 03/07/23 03:49:08.375 +Mar 7 03:49:08.375: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 +STEP: Building a namespace api object, basename resourcequota 03/07/23 03:49:08.376 +STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 03:49:08.39 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 03:49:08.393 +[It] should create a ResourceQuota and capture the life of a replication controller. [Conformance] + test/e2e/apimachinery/resource_quota.go:382 +STEP: Counting existing ResourceQuota 03/07/23 03:49:08.395 +STEP: Creating a ResourceQuota 03/07/23 03:49:13.4 +STEP: Ensuring resource quota status is calculated 03/07/23 03:49:13.413 +STEP: Creating a ReplicationController 03/07/23 03:49:15.416 +STEP: Ensuring resource quota status captures replication controller creation 03/07/23 03:49:15.425 +STEP: Deleting a ReplicationController 03/07/23 03:49:17.428 +STEP: Ensuring resource quota status released usage 03/07/23 03:49:17.433 +[AfterEach] [sig-api-machinery] ResourceQuota + test/e2e/framework/framework.go:187 +Mar 7 03:49:19.436: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "resourcequota-5666" for this suite. 03/07/23 03:49:19.439 +{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replication controller. [Conformance]","completed":279,"skipped":4930,"failed":0} +------------------------------ +• [SLOW TEST] [11.068 seconds] +[sig-api-machinery] ResourceQuota +test/e2e/apimachinery/framework.go:23 + should create a ResourceQuota and capture the life of a replication controller. [Conformance] + test/e2e/apimachinery/resource_quota.go:382 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-api-machinery] ResourceQuota + test/e2e/framework/framework.go:186 + STEP: Creating a kubernetes client 03/07/23 03:49:08.375 + Mar 7 03:49:08.375: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 + STEP: Building a namespace api object, basename resourcequota 03/07/23 03:49:08.376 + STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 03:49:08.39 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 03:49:08.393 + [It] should create a ResourceQuota and capture the life of a replication controller. 
[Conformance] + test/e2e/apimachinery/resource_quota.go:382 + STEP: Counting existing ResourceQuota 03/07/23 03:49:08.395 + STEP: Creating a ResourceQuota 03/07/23 03:49:13.4 + STEP: Ensuring resource quota status is calculated 03/07/23 03:49:13.413 + STEP: Creating a ReplicationController 03/07/23 03:49:15.416 + STEP: Ensuring resource quota status captures replication controller creation 03/07/23 03:49:15.425 + STEP: Deleting a ReplicationController 03/07/23 03:49:17.428 + STEP: Ensuring resource quota status released usage 03/07/23 03:49:17.433 + [AfterEach] [sig-api-machinery] ResourceQuota + test/e2e/framework/framework.go:187 + Mar 7 03:49:19.436: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + STEP: Destroying namespace "resourcequota-5666" for this suite. 03/07/23 03:49:19.439 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-apps] DisruptionController + should block an eviction until the PDB is updated to allow it [Conformance] + test/e2e/apps/disruption.go:346 +[BeforeEach] [sig-apps] DisruptionController + test/e2e/framework/framework.go:186 +STEP: Creating a kubernetes client 03/07/23 03:49:19.445 +Mar 7 03:49:19.445: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 +STEP: Building a namespace api object, basename disruption 03/07/23 03:49:19.445 +STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 03:49:19.46 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 03:49:19.462 +[BeforeEach] [sig-apps] DisruptionController + test/e2e/apps/disruption.go:71 +[It] should block an eviction until the PDB is updated to allow it [Conformance] + test/e2e/apps/disruption.go:346 +STEP: Creating a pdb that targets all three pods in a test replica set 03/07/23 03:49:19.464 +STEP: Waiting for the pdb to be processed 03/07/23 03:49:19.468 +STEP: First trying to evict a pod which shouldn't be evictable 03/07/23 03:49:21.476 +STEP: Waiting for all pods to be running 03/07/23 03:49:21.476 +Mar 7 03:49:21.478: INFO: pods: 0 < 3 +STEP: locating a running pod 03/07/23 03:49:23.481 +STEP: Updating the pdb to allow a pod to be evicted 03/07/23 03:49:23.488 +STEP: Waiting for the pdb to be processed 03/07/23 03:49:23.493 +STEP: Trying to evict the same pod we tried earlier which should now be evictable 03/07/23 03:49:25.499 +STEP: Waiting for all pods to be running 03/07/23 03:49:25.499 +STEP: Waiting for the pdb to observed all healthy pods 03/07/23 03:49:25.501 +STEP: Patching the pdb to disallow a pod to be evicted 03/07/23 03:49:25.515 +STEP: Waiting for the pdb to be processed 03/07/23 03:49:25.54 +STEP: Waiting for all pods to be running 03/07/23 03:49:27.547 +STEP: locating a running pod 03/07/23 03:49:27.55 +STEP: Deleting the pdb to allow a pod to be evicted 03/07/23 03:49:27.557 +STEP: Waiting for the pdb to be deleted 03/07/23 03:49:27.561 +STEP: Trying to evict the same pod we tried earlier which should now be evictable 03/07/23 03:49:27.563 +STEP: Waiting for all pods to be running 03/07/23 03:49:27.563 +[AfterEach] [sig-apps] DisruptionController + test/e2e/framework/framework.go:187 +Mar 7 03:49:27.574: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "disruption-9041" for this suite. 
03/07/23 03:49:27.58 +{"msg":"PASSED [sig-apps] DisruptionController should block an eviction until the PDB is updated to allow it [Conformance]","completed":280,"skipped":4959,"failed":0} +------------------------------ +• [SLOW TEST] [8.143 seconds] +[sig-apps] DisruptionController +test/e2e/apps/framework.go:23 + should block an eviction until the PDB is updated to allow it [Conformance] + test/e2e/apps/disruption.go:346 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-apps] DisruptionController + test/e2e/framework/framework.go:186 + STEP: Creating a kubernetes client 03/07/23 03:49:19.445 + Mar 7 03:49:19.445: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 + STEP: Building a namespace api object, basename disruption 03/07/23 03:49:19.445 + STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 03:49:19.46 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 03:49:19.462 + [BeforeEach] [sig-apps] DisruptionController + test/e2e/apps/disruption.go:71 + [It] should block an eviction until the PDB is updated to allow it [Conformance] + test/e2e/apps/disruption.go:346 + STEP: Creating a pdb that targets all three pods in a test replica set 03/07/23 03:49:19.464 + STEP: Waiting for the pdb to be processed 03/07/23 03:49:19.468 + STEP: First trying to evict a pod which shouldn't be evictable 03/07/23 03:49:21.476 + STEP: Waiting for all pods to be running 03/07/23 03:49:21.476 + Mar 7 03:49:21.478: INFO: pods: 0 < 3 + STEP: locating a running pod 03/07/23 03:49:23.481 + STEP: Updating the pdb to allow a pod to be evicted 03/07/23 03:49:23.488 + STEP: Waiting for the pdb to be processed 03/07/23 03:49:23.493 + STEP: Trying to evict the same pod we tried earlier which should now be evictable 03/07/23 03:49:25.499 + STEP: Waiting for all pods to be running 03/07/23 03:49:25.499 + STEP: Waiting for the pdb to observed all healthy pods 03/07/23 03:49:25.501 + STEP: Patching the pdb to disallow a pod to be evicted 03/07/23 03:49:25.515 + STEP: Waiting for the pdb to be processed 03/07/23 03:49:25.54 + STEP: Waiting for all pods to be running 03/07/23 03:49:27.547 + STEP: locating a running pod 03/07/23 03:49:27.55 + STEP: Deleting the pdb to allow a pod to be evicted 03/07/23 03:49:27.557 + STEP: Waiting for the pdb to be deleted 03/07/23 03:49:27.561 + STEP: Trying to evict the same pod we tried earlier which should now be evictable 03/07/23 03:49:27.563 + STEP: Waiting for all pods to be running 03/07/23 03:49:27.563 + [AfterEach] [sig-apps] DisruptionController + test/e2e/framework/framework.go:187 + Mar 7 03:49:27.574: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + STEP: Destroying namespace "disruption-9041" for this suite. 
03/07/23 03:49:27.58 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSS +------------------------------ +[sig-network] EndpointSlice + should create and delete Endpoints and EndpointSlices for a Service with a selector specified [Conformance] + test/e2e/network/endpointslice.go:101 +[BeforeEach] [sig-network] EndpointSlice + test/e2e/framework/framework.go:186 +STEP: Creating a kubernetes client 03/07/23 03:49:27.589 +Mar 7 03:49:27.589: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 +STEP: Building a namespace api object, basename endpointslice 03/07/23 03:49:27.59 +STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 03:49:27.611 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 03:49:27.614 +[BeforeEach] [sig-network] EndpointSlice + test/e2e/network/endpointslice.go:51 +[It] should create and delete Endpoints and EndpointSlices for a Service with a selector specified [Conformance] + test/e2e/network/endpointslice.go:101 +[AfterEach] [sig-network] EndpointSlice + test/e2e/framework/framework.go:187 +Mar 7 03:49:31.674: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "endpointslice-9682" for this suite. 03/07/23 03:49:31.677 +{"msg":"PASSED [sig-network] EndpointSlice should create and delete Endpoints and EndpointSlices for a Service with a selector specified [Conformance]","completed":281,"skipped":4966,"failed":0} +------------------------------ +• [4.093 seconds] +[sig-network] EndpointSlice +test/e2e/network/common/framework.go:23 + should create and delete Endpoints and EndpointSlices for a Service with a selector specified [Conformance] + test/e2e/network/endpointslice.go:101 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-network] EndpointSlice + test/e2e/framework/framework.go:186 + STEP: Creating a kubernetes client 03/07/23 03:49:27.589 + Mar 7 03:49:27.589: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 + STEP: Building a namespace api object, basename endpointslice 03/07/23 03:49:27.59 + STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 03:49:27.611 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 03:49:27.614 + [BeforeEach] [sig-network] EndpointSlice + test/e2e/network/endpointslice.go:51 + [It] should create and delete Endpoints and EndpointSlices for a Service with a selector specified [Conformance] + test/e2e/network/endpointslice.go:101 + [AfterEach] [sig-network] EndpointSlice + test/e2e/framework/framework.go:187 + Mar 7 03:49:31.674: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + STEP: Destroying namespace "endpointslice-9682" for this suite. 
03/07/23 03:49:31.677 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSSSSS +------------------------------ +[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + should unconditionally reject operations on fail closed webhook [Conformance] + test/e2e/apimachinery/webhook.go:238 +[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/framework/framework.go:186 +STEP: Creating a kubernetes client 03/07/23 03:49:31.683 +Mar 7 03:49:31.683: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 +STEP: Building a namespace api object, basename webhook 03/07/23 03:49:31.683 +STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 03:49:31.695 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 03:49:31.697 +[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/apimachinery/webhook.go:89 +STEP: Setting up server cert 03/07/23 03:49:31.708 +STEP: Create role binding to let webhook read extension-apiserver-authentication 03/07/23 03:49:32.334 +STEP: Deploying the webhook pod 03/07/23 03:49:32.34 +STEP: Wait for the deployment to be ready 03/07/23 03:49:32.35 +Mar 7 03:49:32.359: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set +STEP: Deploying the webhook service 03/07/23 03:49:34.367 +STEP: Verifying the service has paired with the endpoint 03/07/23 03:49:34.413 +Mar 7 03:49:35.414: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 +[It] should unconditionally reject operations on fail closed webhook [Conformance] + test/e2e/apimachinery/webhook.go:238 +STEP: Registering a webhook that server cannot talk to, with fail closed policy, via the AdmissionRegistration API 03/07/23 03:49:35.416 +STEP: create a namespace for the webhook 03/07/23 03:49:35.427 +STEP: create a configmap should be unconditionally rejected by the webhook 03/07/23 03:49:35.432 +[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/framework/framework.go:187 +Mar 7 03:49:35.462: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "webhook-6452" for this suite. 03/07/23 03:49:35.465 +STEP: Destroying namespace "webhook-6452-markers" for this suite. 
03/07/23 03:49:35.47 +[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/apimachinery/webhook.go:104 +{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","completed":282,"skipped":4980,"failed":0} +------------------------------ +• [3.830 seconds] +[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] +test/e2e/apimachinery/framework.go:23 + should unconditionally reject operations on fail closed webhook [Conformance] + test/e2e/apimachinery/webhook.go:238 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/framework/framework.go:186 + STEP: Creating a kubernetes client 03/07/23 03:49:31.683 + Mar 7 03:49:31.683: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 + STEP: Building a namespace api object, basename webhook 03/07/23 03:49:31.683 + STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 03:49:31.695 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 03:49:31.697 + [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/apimachinery/webhook.go:89 + STEP: Setting up server cert 03/07/23 03:49:31.708 + STEP: Create role binding to let webhook read extension-apiserver-authentication 03/07/23 03:49:32.334 + STEP: Deploying the webhook pod 03/07/23 03:49:32.34 + STEP: Wait for the deployment to be ready 03/07/23 03:49:32.35 + Mar 7 03:49:32.359: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set + STEP: Deploying the webhook service 03/07/23 03:49:34.367 + STEP: Verifying the service has paired with the endpoint 03/07/23 03:49:34.413 + Mar 7 03:49:35.414: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 + [It] should unconditionally reject operations on fail closed webhook [Conformance] + test/e2e/apimachinery/webhook.go:238 + STEP: Registering a webhook that server cannot talk to, with fail closed policy, via the AdmissionRegistration API 03/07/23 03:49:35.416 + STEP: create a namespace for the webhook 03/07/23 03:49:35.427 + STEP: create a configmap should be unconditionally rejected by the webhook 03/07/23 03:49:35.432 + [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/framework/framework.go:187 + Mar 7 03:49:35.462: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + STEP: Destroying namespace "webhook-6452" for this suite. 03/07/23 03:49:35.465 + STEP: Destroying namespace "webhook-6452-markers" for this suite. 
03/07/23 03:49:35.47 + [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/apimachinery/webhook.go:104 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-storage] Projected configMap + should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] + test/e2e/common/storage/projected_configmap.go:374 +[BeforeEach] [sig-storage] Projected configMap + test/e2e/framework/framework.go:186 +STEP: Creating a kubernetes client 03/07/23 03:49:35.515 +Mar 7 03:49:35.515: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 +STEP: Building a namespace api object, basename projected 03/07/23 03:49:35.516 +STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 03:49:35.536 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 03:49:35.54 +[It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] + test/e2e/common/storage/projected_configmap.go:374 +STEP: Creating configMap with name projected-configmap-test-volume-e893a91c-9216-4a79-9673-9cfabf36ff59 03/07/23 03:49:35.545 +STEP: Creating a pod to test consume configMaps 03/07/23 03:49:35.557 +Mar 7 03:49:35.567: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-5765e3db-12f8-4313-a651-21c308cb3748" in namespace "projected-475" to be "Succeeded or Failed" +Mar 7 03:49:35.572: INFO: Pod "pod-projected-configmaps-5765e3db-12f8-4313-a651-21c308cb3748": Phase="Pending", Reason="", readiness=false. Elapsed: 4.809346ms +Mar 7 03:49:37.575: INFO: Pod "pod-projected-configmaps-5765e3db-12f8-4313-a651-21c308cb3748": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008339328s +Mar 7 03:49:39.575: INFO: Pod "pod-projected-configmaps-5765e3db-12f8-4313-a651-21c308cb3748": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.008253861s +STEP: Saw pod success 03/07/23 03:49:39.575 +Mar 7 03:49:39.575: INFO: Pod "pod-projected-configmaps-5765e3db-12f8-4313-a651-21c308cb3748" satisfied condition "Succeeded or Failed" +Mar 7 03:49:39.577: INFO: Trying to get logs from node node-2 pod pod-projected-configmaps-5765e3db-12f8-4313-a651-21c308cb3748 container projected-configmap-volume-test: +STEP: delete the pod 03/07/23 03:49:39.583 +Mar 7 03:49:39.592: INFO: Waiting for pod pod-projected-configmaps-5765e3db-12f8-4313-a651-21c308cb3748 to disappear +Mar 7 03:49:39.594: INFO: Pod pod-projected-configmaps-5765e3db-12f8-4313-a651-21c308cb3748 no longer exists +[AfterEach] [sig-storage] Projected configMap + test/e2e/framework/framework.go:187 +Mar 7 03:49:39.594: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "projected-475" for this suite. 
03/07/23 03:49:39.597 +{"msg":"PASSED [sig-storage] Projected configMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","completed":283,"skipped":5062,"failed":0} +------------------------------ +• [4.086 seconds] +[sig-storage] Projected configMap +test/e2e/common/storage/framework.go:23 + should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] + test/e2e/common/storage/projected_configmap.go:374 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-storage] Projected configMap + test/e2e/framework/framework.go:186 + STEP: Creating a kubernetes client 03/07/23 03:49:35.515 + Mar 7 03:49:35.515: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 + STEP: Building a namespace api object, basename projected 03/07/23 03:49:35.516 + STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 03:49:35.536 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 03:49:35.54 + [It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] + test/e2e/common/storage/projected_configmap.go:374 + STEP: Creating configMap with name projected-configmap-test-volume-e893a91c-9216-4a79-9673-9cfabf36ff59 03/07/23 03:49:35.545 + STEP: Creating a pod to test consume configMaps 03/07/23 03:49:35.557 + Mar 7 03:49:35.567: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-5765e3db-12f8-4313-a651-21c308cb3748" in namespace "projected-475" to be "Succeeded or Failed" + Mar 7 03:49:35.572: INFO: Pod "pod-projected-configmaps-5765e3db-12f8-4313-a651-21c308cb3748": Phase="Pending", Reason="", readiness=false. Elapsed: 4.809346ms + Mar 7 03:49:37.575: INFO: Pod "pod-projected-configmaps-5765e3db-12f8-4313-a651-21c308cb3748": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008339328s + Mar 7 03:49:39.575: INFO: Pod "pod-projected-configmaps-5765e3db-12f8-4313-a651-21c308cb3748": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.008253861s + STEP: Saw pod success 03/07/23 03:49:39.575 + Mar 7 03:49:39.575: INFO: Pod "pod-projected-configmaps-5765e3db-12f8-4313-a651-21c308cb3748" satisfied condition "Succeeded or Failed" + Mar 7 03:49:39.577: INFO: Trying to get logs from node node-2 pod pod-projected-configmaps-5765e3db-12f8-4313-a651-21c308cb3748 container projected-configmap-volume-test: + STEP: delete the pod 03/07/23 03:49:39.583 + Mar 7 03:49:39.592: INFO: Waiting for pod pod-projected-configmaps-5765e3db-12f8-4313-a651-21c308cb3748 to disappear + Mar 7 03:49:39.594: INFO: Pod pod-projected-configmaps-5765e3db-12f8-4313-a651-21c308cb3748 no longer exists + [AfterEach] [sig-storage] Projected configMap + test/e2e/framework/framework.go:187 + Mar 7 03:49:39.594: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + STEP: Destroying namespace "projected-475" for this suite. 
03/07/23 03:49:39.597 + << End Captured GinkgoWriter Output +------------------------------ +SSSSS +------------------------------ +[sig-node] Containers + should be able to override the image's default arguments (container cmd) [NodeConformance] [Conformance] + test/e2e/common/node/containers.go:58 +[BeforeEach] [sig-node] Containers + test/e2e/framework/framework.go:186 +STEP: Creating a kubernetes client 03/07/23 03:49:39.602 +Mar 7 03:49:39.602: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 +STEP: Building a namespace api object, basename containers 03/07/23 03:49:39.603 +STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 03:49:39.615 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 03:49:39.617 +[It] should be able to override the image's default arguments (container cmd) [NodeConformance] [Conformance] + test/e2e/common/node/containers.go:58 +STEP: Creating a pod to test override arguments 03/07/23 03:49:39.619 +Mar 7 03:49:39.625: INFO: Waiting up to 5m0s for pod "client-containers-2907881f-483d-4702-9278-fb453fa2175c" in namespace "containers-896" to be "Succeeded or Failed" +Mar 7 03:49:39.628: INFO: Pod "client-containers-2907881f-483d-4702-9278-fb453fa2175c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.56388ms +Mar 7 03:49:41.632: INFO: Pod "client-containers-2907881f-483d-4702-9278-fb453fa2175c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006566764s +Mar 7 03:49:43.633: INFO: Pod "client-containers-2907881f-483d-4702-9278-fb453fa2175c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.007315302s +STEP: Saw pod success 03/07/23 03:49:43.633 +Mar 7 03:49:43.633: INFO: Pod "client-containers-2907881f-483d-4702-9278-fb453fa2175c" satisfied condition "Succeeded or Failed" +Mar 7 03:49:43.637: INFO: Trying to get logs from node node-2 pod client-containers-2907881f-483d-4702-9278-fb453fa2175c container agnhost-container: +STEP: delete the pod 03/07/23 03:49:43.643 +Mar 7 03:49:43.660: INFO: Waiting for pod client-containers-2907881f-483d-4702-9278-fb453fa2175c to disappear +Mar 7 03:49:43.663: INFO: Pod client-containers-2907881f-483d-4702-9278-fb453fa2175c no longer exists +[AfterEach] [sig-node] Containers + test/e2e/framework/framework.go:187 +Mar 7 03:49:43.663: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "containers-896" for this suite. 
03/07/23 03:49:43.668 +{"msg":"PASSED [sig-node] Containers should be able to override the image's default arguments (container cmd) [NodeConformance] [Conformance]","completed":284,"skipped":5067,"failed":0} +------------------------------ +• [4.073 seconds] +[sig-node] Containers +test/e2e/common/node/framework.go:23 + should be able to override the image's default arguments (container cmd) [NodeConformance] [Conformance] + test/e2e/common/node/containers.go:58 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-node] Containers + test/e2e/framework/framework.go:186 + STEP: Creating a kubernetes client 03/07/23 03:49:39.602 + Mar 7 03:49:39.602: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 + STEP: Building a namespace api object, basename containers 03/07/23 03:49:39.603 + STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 03:49:39.615 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 03:49:39.617 + [It] should be able to override the image's default arguments (container cmd) [NodeConformance] [Conformance] + test/e2e/common/node/containers.go:58 + STEP: Creating a pod to test override arguments 03/07/23 03:49:39.619 + Mar 7 03:49:39.625: INFO: Waiting up to 5m0s for pod "client-containers-2907881f-483d-4702-9278-fb453fa2175c" in namespace "containers-896" to be "Succeeded or Failed" + Mar 7 03:49:39.628: INFO: Pod "client-containers-2907881f-483d-4702-9278-fb453fa2175c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.56388ms + Mar 7 03:49:41.632: INFO: Pod "client-containers-2907881f-483d-4702-9278-fb453fa2175c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006566764s + Mar 7 03:49:43.633: INFO: Pod "client-containers-2907881f-483d-4702-9278-fb453fa2175c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.007315302s + STEP: Saw pod success 03/07/23 03:49:43.633 + Mar 7 03:49:43.633: INFO: Pod "client-containers-2907881f-483d-4702-9278-fb453fa2175c" satisfied condition "Succeeded or Failed" + Mar 7 03:49:43.637: INFO: Trying to get logs from node node-2 pod client-containers-2907881f-483d-4702-9278-fb453fa2175c container agnhost-container: + STEP: delete the pod 03/07/23 03:49:43.643 + Mar 7 03:49:43.660: INFO: Waiting for pod client-containers-2907881f-483d-4702-9278-fb453fa2175c to disappear + Mar 7 03:49:43.663: INFO: Pod client-containers-2907881f-483d-4702-9278-fb453fa2175c no longer exists + [AfterEach] [sig-node] Containers + test/e2e/framework/framework.go:187 + Mar 7 03:49:43.663: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + STEP: Destroying namespace "containers-896" for this suite. 
03/07/23 03:49:43.668 + << End Captured GinkgoWriter Output +------------------------------ +[sig-node] Downward API + should provide pod UID as env vars [NodeConformance] [Conformance] + test/e2e/common/node/downwardapi.go:266 +[BeforeEach] [sig-node] Downward API + test/e2e/framework/framework.go:186 +STEP: Creating a kubernetes client 03/07/23 03:49:43.675 +Mar 7 03:49:43.675: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 +STEP: Building a namespace api object, basename downward-api 03/07/23 03:49:43.677 +STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 03:49:43.697 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 03:49:43.699 +[It] should provide pod UID as env vars [NodeConformance] [Conformance] + test/e2e/common/node/downwardapi.go:266 +STEP: Creating a pod to test downward api env vars 03/07/23 03:49:43.701 +Mar 7 03:49:43.706: INFO: Waiting up to 5m0s for pod "downward-api-7e7104cc-151d-484c-928a-7eaa5b4770fb" in namespace "downward-api-9157" to be "Succeeded or Failed" +Mar 7 03:49:43.711: INFO: Pod "downward-api-7e7104cc-151d-484c-928a-7eaa5b4770fb": Phase="Pending", Reason="", readiness=false. Elapsed: 4.682898ms +Mar 7 03:49:45.715: INFO: Pod "downward-api-7e7104cc-151d-484c-928a-7eaa5b4770fb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00806377s +Mar 7 03:49:47.715: INFO: Pod "downward-api-7e7104cc-151d-484c-928a-7eaa5b4770fb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.008788719s +STEP: Saw pod success 03/07/23 03:49:47.715 +Mar 7 03:49:47.715: INFO: Pod "downward-api-7e7104cc-151d-484c-928a-7eaa5b4770fb" satisfied condition "Succeeded or Failed" +Mar 7 03:49:47.717: INFO: Trying to get logs from node node-2 pod downward-api-7e7104cc-151d-484c-928a-7eaa5b4770fb container dapi-container: +STEP: delete the pod 03/07/23 03:49:47.724 +Mar 7 03:49:47.769: INFO: Waiting for pod downward-api-7e7104cc-151d-484c-928a-7eaa5b4770fb to disappear +Mar 7 03:49:47.771: INFO: Pod downward-api-7e7104cc-151d-484c-928a-7eaa5b4770fb no longer exists +[AfterEach] [sig-node] Downward API + test/e2e/framework/framework.go:187 +Mar 7 03:49:47.771: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "downward-api-9157" for this suite. 
03/07/23 03:49:47.774 +{"msg":"PASSED [sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance]","completed":285,"skipped":5067,"failed":0} +------------------------------ +• [4.103 seconds] +[sig-node] Downward API +test/e2e/common/node/framework.go:23 + should provide pod UID as env vars [NodeConformance] [Conformance] + test/e2e/common/node/downwardapi.go:266 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-node] Downward API + test/e2e/framework/framework.go:186 + STEP: Creating a kubernetes client 03/07/23 03:49:43.675 + Mar 7 03:49:43.675: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 + STEP: Building a namespace api object, basename downward-api 03/07/23 03:49:43.677 + STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 03:49:43.697 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 03:49:43.699 + [It] should provide pod UID as env vars [NodeConformance] [Conformance] + test/e2e/common/node/downwardapi.go:266 + STEP: Creating a pod to test downward api env vars 03/07/23 03:49:43.701 + Mar 7 03:49:43.706: INFO: Waiting up to 5m0s for pod "downward-api-7e7104cc-151d-484c-928a-7eaa5b4770fb" in namespace "downward-api-9157" to be "Succeeded or Failed" + Mar 7 03:49:43.711: INFO: Pod "downward-api-7e7104cc-151d-484c-928a-7eaa5b4770fb": Phase="Pending", Reason="", readiness=false. Elapsed: 4.682898ms + Mar 7 03:49:45.715: INFO: Pod "downward-api-7e7104cc-151d-484c-928a-7eaa5b4770fb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00806377s + Mar 7 03:49:47.715: INFO: Pod "downward-api-7e7104cc-151d-484c-928a-7eaa5b4770fb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.008788719s + STEP: Saw pod success 03/07/23 03:49:47.715 + Mar 7 03:49:47.715: INFO: Pod "downward-api-7e7104cc-151d-484c-928a-7eaa5b4770fb" satisfied condition "Succeeded or Failed" + Mar 7 03:49:47.717: INFO: Trying to get logs from node node-2 pod downward-api-7e7104cc-151d-484c-928a-7eaa5b4770fb container dapi-container: + STEP: delete the pod 03/07/23 03:49:47.724 + Mar 7 03:49:47.769: INFO: Waiting for pod downward-api-7e7104cc-151d-484c-928a-7eaa5b4770fb to disappear + Mar 7 03:49:47.771: INFO: Pod downward-api-7e7104cc-151d-484c-928a-7eaa5b4770fb no longer exists + [AfterEach] [sig-node] Downward API + test/e2e/framework/framework.go:187 + Mar 7 03:49:47.771: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + STEP: Destroying namespace "downward-api-9157" for this suite. 
03/07/23 03:49:47.774 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSS +------------------------------ +[sig-storage] Secrets + should be consumable from pods in volume with mappings [NodeConformance] [Conformance] + test/e2e/common/storage/secrets_volume.go:78 +[BeforeEach] [sig-storage] Secrets + test/e2e/framework/framework.go:186 +STEP: Creating a kubernetes client 03/07/23 03:49:47.779 +Mar 7 03:49:47.779: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 +STEP: Building a namespace api object, basename secrets 03/07/23 03:49:47.78 +STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 03:49:47.792 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 03:49:47.794 +[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] + test/e2e/common/storage/secrets_volume.go:78 +STEP: Creating secret with name secret-test-map-a638a6c1-ad81-4641-9420-7d3446b8f872 03/07/23 03:49:47.796 +STEP: Creating a pod to test consume secrets 03/07/23 03:49:47.799 +Mar 7 03:49:47.809: INFO: Waiting up to 5m0s for pod "pod-secrets-96b8677d-63a1-4d9c-a2b8-c4f238f4fe03" in namespace "secrets-4849" to be "Succeeded or Failed" +Mar 7 03:49:47.811: INFO: Pod "pod-secrets-96b8677d-63a1-4d9c-a2b8-c4f238f4fe03": Phase="Pending", Reason="", readiness=false. Elapsed: 1.854777ms +Mar 7 03:49:49.814: INFO: Pod "pod-secrets-96b8677d-63a1-4d9c-a2b8-c4f238f4fe03": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005210263s +Mar 7 03:49:51.815: INFO: Pod "pod-secrets-96b8677d-63a1-4d9c-a2b8-c4f238f4fe03": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.005833865s +STEP: Saw pod success 03/07/23 03:49:51.815 +Mar 7 03:49:51.815: INFO: Pod "pod-secrets-96b8677d-63a1-4d9c-a2b8-c4f238f4fe03" satisfied condition "Succeeded or Failed" +Mar 7 03:49:51.817: INFO: Trying to get logs from node node-2 pod pod-secrets-96b8677d-63a1-4d9c-a2b8-c4f238f4fe03 container secret-volume-test: +STEP: delete the pod 03/07/23 03:49:51.83 +Mar 7 03:49:51.838: INFO: Waiting for pod pod-secrets-96b8677d-63a1-4d9c-a2b8-c4f238f4fe03 to disappear +Mar 7 03:49:51.840: INFO: Pod pod-secrets-96b8677d-63a1-4d9c-a2b8-c4f238f4fe03 no longer exists +[AfterEach] [sig-storage] Secrets + test/e2e/framework/framework.go:187 +Mar 7 03:49:51.840: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "secrets-4849" for this suite. 
03/07/23 03:49:51.843 +{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","completed":286,"skipped":5076,"failed":0} +------------------------------ +• [4.068 seconds] +[sig-storage] Secrets +test/e2e/common/storage/framework.go:23 + should be consumable from pods in volume with mappings [NodeConformance] [Conformance] + test/e2e/common/storage/secrets_volume.go:78 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-storage] Secrets + test/e2e/framework/framework.go:186 + STEP: Creating a kubernetes client 03/07/23 03:49:47.779 + Mar 7 03:49:47.779: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 + STEP: Building a namespace api object, basename secrets 03/07/23 03:49:47.78 + STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 03:49:47.792 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 03:49:47.794 + [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] + test/e2e/common/storage/secrets_volume.go:78 + STEP: Creating secret with name secret-test-map-a638a6c1-ad81-4641-9420-7d3446b8f872 03/07/23 03:49:47.796 + STEP: Creating a pod to test consume secrets 03/07/23 03:49:47.799 + Mar 7 03:49:47.809: INFO: Waiting up to 5m0s for pod "pod-secrets-96b8677d-63a1-4d9c-a2b8-c4f238f4fe03" in namespace "secrets-4849" to be "Succeeded or Failed" + Mar 7 03:49:47.811: INFO: Pod "pod-secrets-96b8677d-63a1-4d9c-a2b8-c4f238f4fe03": Phase="Pending", Reason="", readiness=false. Elapsed: 1.854777ms + Mar 7 03:49:49.814: INFO: Pod "pod-secrets-96b8677d-63a1-4d9c-a2b8-c4f238f4fe03": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005210263s + Mar 7 03:49:51.815: INFO: Pod "pod-secrets-96b8677d-63a1-4d9c-a2b8-c4f238f4fe03": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.005833865s + STEP: Saw pod success 03/07/23 03:49:51.815 + Mar 7 03:49:51.815: INFO: Pod "pod-secrets-96b8677d-63a1-4d9c-a2b8-c4f238f4fe03" satisfied condition "Succeeded or Failed" + Mar 7 03:49:51.817: INFO: Trying to get logs from node node-2 pod pod-secrets-96b8677d-63a1-4d9c-a2b8-c4f238f4fe03 container secret-volume-test: + STEP: delete the pod 03/07/23 03:49:51.83 + Mar 7 03:49:51.838: INFO: Waiting for pod pod-secrets-96b8677d-63a1-4d9c-a2b8-c4f238f4fe03 to disappear + Mar 7 03:49:51.840: INFO: Pod pod-secrets-96b8677d-63a1-4d9c-a2b8-c4f238f4fe03 no longer exists + [AfterEach] [sig-storage] Secrets + test/e2e/framework/framework.go:187 + Mar 7 03:49:51.840: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + STEP: Destroying namespace "secrets-4849" for this suite. 
03/07/23 03:49:51.843 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSS +------------------------------ +[sig-node] Container Runtime blackbox test on terminated container + should report termination message from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] + test/e2e/common/node/runtime.go:215 +[BeforeEach] [sig-node] Container Runtime + test/e2e/framework/framework.go:186 +STEP: Creating a kubernetes client 03/07/23 03:49:51.848 +Mar 7 03:49:51.848: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 +STEP: Building a namespace api object, basename container-runtime 03/07/23 03:49:51.848 +STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 03:49:51.861 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 03:49:51.863 +[It] should report termination message from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] + test/e2e/common/node/runtime.go:215 +STEP: create the container 03/07/23 03:49:51.865 +STEP: wait for the container to reach Failed 03/07/23 03:49:51.871 +STEP: get the container status 03/07/23 03:49:55.884 +STEP: the container should be terminated 03/07/23 03:49:55.886 +STEP: the termination message should be set 03/07/23 03:49:55.886 +Mar 7 03:49:55.886: INFO: Expected: &{DONE} to match Container's Termination Message: DONE -- +STEP: delete the container 03/07/23 03:49:55.886 +[AfterEach] [sig-node] Container Runtime + test/e2e/framework/framework.go:187 +Mar 7 03:49:55.898: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "container-runtime-5731" for this suite. 03/07/23 03:49:55.901 +{"msg":"PASSED [sig-node] Container Runtime blackbox test on terminated container should report termination message from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","completed":287,"skipped":5086,"failed":0} +------------------------------ +• [4.057 seconds] +[sig-node] Container Runtime +test/e2e/common/node/framework.go:23 + blackbox test + test/e2e/common/node/runtime.go:43 + on terminated container + test/e2e/common/node/runtime.go:136 + should report termination message from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] + test/e2e/common/node/runtime.go:215 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-node] Container Runtime + test/e2e/framework/framework.go:186 + STEP: Creating a kubernetes client 03/07/23 03:49:51.848 + Mar 7 03:49:51.848: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 + STEP: Building a namespace api object, basename container-runtime 03/07/23 03:49:51.848 + STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 03:49:51.861 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 03:49:51.863 + [It] should report termination message from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] + test/e2e/common/node/runtime.go:215 + STEP: create the container 03/07/23 03:49:51.865 + STEP: wait for the container to reach Failed 03/07/23 03:49:51.871 + STEP: get the container status 03/07/23 03:49:55.884 + STEP: the container should be terminated 03/07/23 03:49:55.886 + STEP: the termination message should be set 03/07/23 03:49:55.886 + Mar 7 03:49:55.886: INFO: Expected: &{DONE} to match Container's Termination Message: DONE -- + 
STEP: delete the container 03/07/23 03:49:55.886 + [AfterEach] [sig-node] Container Runtime + test/e2e/framework/framework.go:187 + Mar 7 03:49:55.898: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + STEP: Destroying namespace "container-runtime-5731" for this suite. 03/07/23 03:49:55.901 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSSSSS +------------------------------ +[sig-cli] Kubectl client Kubectl server-side dry-run + should check if kubectl can dry-run update Pods [Conformance] + test/e2e/kubectl/kubectl.go:960 +[BeforeEach] [sig-cli] Kubectl client + test/e2e/framework/framework.go:186 +STEP: Creating a kubernetes client 03/07/23 03:49:55.906 +Mar 7 03:49:55.906: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 +STEP: Building a namespace api object, basename kubectl 03/07/23 03:49:55.906 +STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 03:49:55.918 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 03:49:55.921 +[BeforeEach] [sig-cli] Kubectl client + test/e2e/kubectl/kubectl.go:272 +[It] should check if kubectl can dry-run update Pods [Conformance] + test/e2e/kubectl/kubectl.go:960 +STEP: running the image registry.k8s.io/e2e-test-images/httpd:2.4.38-2 03/07/23 03:49:55.922 +Mar 7 03:49:55.922: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1093879902 --namespace=kubectl-4796 run e2e-test-httpd-pod --image=registry.k8s.io/e2e-test-images/httpd:2.4.38-2 --pod-running-timeout=2m0s --labels=run=e2e-test-httpd-pod' +Mar 7 03:49:56.022: INFO: stderr: "" +Mar 7 03:49:56.022: INFO: stdout: "pod/e2e-test-httpd-pod created\n" +STEP: replace the image in the pod with server-side dry-run 03/07/23 03:49:56.022 +Mar 7 03:49:56.023: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1093879902 --namespace=kubectl-4796 patch pod e2e-test-httpd-pod -p {"spec":{"containers":[{"name": "e2e-test-httpd-pod","image": "registry.k8s.io/e2e-test-images/busybox:1.29-2"}]}} --dry-run=server' +Mar 7 03:49:57.305: INFO: stderr: "" +Mar 7 03:49:57.305: INFO: stdout: "pod/e2e-test-httpd-pod patched\n" +STEP: verifying the pod e2e-test-httpd-pod has the right image registry.k8s.io/e2e-test-images/httpd:2.4.38-2 03/07/23 03:49:57.305 +Mar 7 03:49:57.308: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1093879902 --namespace=kubectl-4796 delete pods e2e-test-httpd-pod' +Mar 7 03:50:00.100: INFO: stderr: "" +Mar 7 03:50:00.100: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n" +[AfterEach] [sig-cli] Kubectl client + test/e2e/framework/framework.go:187 +Mar 7 03:50:00.100: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "kubectl-4796" for this suite. 
03/07/23 03:50:00.104 +{"msg":"PASSED [sig-cli] Kubectl client Kubectl server-side dry-run should check if kubectl can dry-run update Pods [Conformance]","completed":288,"skipped":5100,"failed":0} +------------------------------ +• [4.202 seconds] +[sig-cli] Kubectl client +test/e2e/kubectl/framework.go:23 + Kubectl server-side dry-run + test/e2e/kubectl/kubectl.go:954 + should check if kubectl can dry-run update Pods [Conformance] + test/e2e/kubectl/kubectl.go:960 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-cli] Kubectl client + test/e2e/framework/framework.go:186 + STEP: Creating a kubernetes client 03/07/23 03:49:55.906 + Mar 7 03:49:55.906: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 + STEP: Building a namespace api object, basename kubectl 03/07/23 03:49:55.906 + STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 03:49:55.918 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 03:49:55.921 + [BeforeEach] [sig-cli] Kubectl client + test/e2e/kubectl/kubectl.go:272 + [It] should check if kubectl can dry-run update Pods [Conformance] + test/e2e/kubectl/kubectl.go:960 + STEP: running the image registry.k8s.io/e2e-test-images/httpd:2.4.38-2 03/07/23 03:49:55.922 + Mar 7 03:49:55.922: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1093879902 --namespace=kubectl-4796 run e2e-test-httpd-pod --image=registry.k8s.io/e2e-test-images/httpd:2.4.38-2 --pod-running-timeout=2m0s --labels=run=e2e-test-httpd-pod' + Mar 7 03:49:56.022: INFO: stderr: "" + Mar 7 03:49:56.022: INFO: stdout: "pod/e2e-test-httpd-pod created\n" + STEP: replace the image in the pod with server-side dry-run 03/07/23 03:49:56.022 + Mar 7 03:49:56.023: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1093879902 --namespace=kubectl-4796 patch pod e2e-test-httpd-pod -p {"spec":{"containers":[{"name": "e2e-test-httpd-pod","image": "registry.k8s.io/e2e-test-images/busybox:1.29-2"}]}} --dry-run=server' + Mar 7 03:49:57.305: INFO: stderr: "" + Mar 7 03:49:57.305: INFO: stdout: "pod/e2e-test-httpd-pod patched\n" + STEP: verifying the pod e2e-test-httpd-pod has the right image registry.k8s.io/e2e-test-images/httpd:2.4.38-2 03/07/23 03:49:57.305 + Mar 7 03:49:57.308: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1093879902 --namespace=kubectl-4796 delete pods e2e-test-httpd-pod' + Mar 7 03:50:00.100: INFO: stderr: "" + Mar 7 03:50:00.100: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n" + [AfterEach] [sig-cli] Kubectl client + test/e2e/framework/framework.go:187 + Mar 7 03:50:00.100: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + STEP: Destroying namespace "kubectl-4796" for this suite. 
03/07/23 03:50:00.104 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-node] Downward API + should provide host IP as an env var [NodeConformance] [Conformance] + test/e2e/common/node/downwardapi.go:89 +[BeforeEach] [sig-node] Downward API + test/e2e/framework/framework.go:186 +STEP: Creating a kubernetes client 03/07/23 03:50:00.111 +Mar 7 03:50:00.111: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 +STEP: Building a namespace api object, basename downward-api 03/07/23 03:50:00.112 +STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 03:50:00.124 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 03:50:00.126 +[It] should provide host IP as an env var [NodeConformance] [Conformance] + test/e2e/common/node/downwardapi.go:89 +STEP: Creating a pod to test downward api env vars 03/07/23 03:50:00.128 +Mar 7 03:50:00.134: INFO: Waiting up to 5m0s for pod "downward-api-31c78895-cba9-4ff2-9ed7-9eff69942ac5" in namespace "downward-api-7069" to be "Succeeded or Failed" +Mar 7 03:50:00.137: INFO: Pod "downward-api-31c78895-cba9-4ff2-9ed7-9eff69942ac5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.324303ms +Mar 7 03:50:02.141: INFO: Pod "downward-api-31c78895-cba9-4ff2-9ed7-9eff69942ac5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00634753s +Mar 7 03:50:04.140: INFO: Pod "downward-api-31c78895-cba9-4ff2-9ed7-9eff69942ac5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.005668403s +STEP: Saw pod success 03/07/23 03:50:04.14 +Mar 7 03:50:04.140: INFO: Pod "downward-api-31c78895-cba9-4ff2-9ed7-9eff69942ac5" satisfied condition "Succeeded or Failed" +Mar 7 03:50:04.143: INFO: Trying to get logs from node node-2 pod downward-api-31c78895-cba9-4ff2-9ed7-9eff69942ac5 container dapi-container: +STEP: delete the pod 03/07/23 03:50:04.148 +Mar 7 03:50:04.157: INFO: Waiting for pod downward-api-31c78895-cba9-4ff2-9ed7-9eff69942ac5 to disappear +Mar 7 03:50:04.159: INFO: Pod downward-api-31c78895-cba9-4ff2-9ed7-9eff69942ac5 no longer exists +[AfterEach] [sig-node] Downward API + test/e2e/framework/framework.go:187 +Mar 7 03:50:04.159: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "downward-api-7069" for this suite. 
03/07/23 03:50:04.162 +{"msg":"PASSED [sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance]","completed":289,"skipped":5138,"failed":0} +------------------------------ +• [4.056 seconds] +[sig-node] Downward API +test/e2e/common/node/framework.go:23 + should provide host IP as an env var [NodeConformance] [Conformance] + test/e2e/common/node/downwardapi.go:89 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-node] Downward API + test/e2e/framework/framework.go:186 + STEP: Creating a kubernetes client 03/07/23 03:50:00.111 + Mar 7 03:50:00.111: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 + STEP: Building a namespace api object, basename downward-api 03/07/23 03:50:00.112 + STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 03:50:00.124 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 03:50:00.126 + [It] should provide host IP as an env var [NodeConformance] [Conformance] + test/e2e/common/node/downwardapi.go:89 + STEP: Creating a pod to test downward api env vars 03/07/23 03:50:00.128 + Mar 7 03:50:00.134: INFO: Waiting up to 5m0s for pod "downward-api-31c78895-cba9-4ff2-9ed7-9eff69942ac5" in namespace "downward-api-7069" to be "Succeeded or Failed" + Mar 7 03:50:00.137: INFO: Pod "downward-api-31c78895-cba9-4ff2-9ed7-9eff69942ac5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.324303ms + Mar 7 03:50:02.141: INFO: Pod "downward-api-31c78895-cba9-4ff2-9ed7-9eff69942ac5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00634753s + Mar 7 03:50:04.140: INFO: Pod "downward-api-31c78895-cba9-4ff2-9ed7-9eff69942ac5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.005668403s + STEP: Saw pod success 03/07/23 03:50:04.14 + Mar 7 03:50:04.140: INFO: Pod "downward-api-31c78895-cba9-4ff2-9ed7-9eff69942ac5" satisfied condition "Succeeded or Failed" + Mar 7 03:50:04.143: INFO: Trying to get logs from node node-2 pod downward-api-31c78895-cba9-4ff2-9ed7-9eff69942ac5 container dapi-container: + STEP: delete the pod 03/07/23 03:50:04.148 + Mar 7 03:50:04.157: INFO: Waiting for pod downward-api-31c78895-cba9-4ff2-9ed7-9eff69942ac5 to disappear + Mar 7 03:50:04.159: INFO: Pod downward-api-31c78895-cba9-4ff2-9ed7-9eff69942ac5 no longer exists + [AfterEach] [sig-node] Downward API + test/e2e/framework/framework.go:187 + Mar 7 03:50:04.159: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + STEP: Destroying namespace "downward-api-7069" for this suite. 
03/07/23 03:50:04.162 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + should be able to deny pod and configmap creation [Conformance] + test/e2e/apimachinery/webhook.go:196 +[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/framework/framework.go:186 +STEP: Creating a kubernetes client 03/07/23 03:50:04.168 +Mar 7 03:50:04.168: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 +STEP: Building a namespace api object, basename webhook 03/07/23 03:50:04.169 +STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 03:50:04.182 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 03:50:04.185 +[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/apimachinery/webhook.go:89 +STEP: Setting up server cert 03/07/23 03:50:04.198 +STEP: Create role binding to let webhook read extension-apiserver-authentication 03/07/23 03:50:04.625 +STEP: Deploying the webhook pod 03/07/23 03:50:04.655 +STEP: Wait for the deployment to be ready 03/07/23 03:50:04.685 +Mar 7 03:50:04.725: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set +STEP: Deploying the webhook service 03/07/23 03:50:06.732 +STEP: Verifying the service has paired with the endpoint 03/07/23 03:50:06.766 +Mar 7 03:50:07.767: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 +[It] should be able to deny pod and configmap creation [Conformance] + test/e2e/apimachinery/webhook.go:196 +STEP: Registering the webhook via the AdmissionRegistration API 03/07/23 03:50:07.769 +STEP: create a pod that should be denied by the webhook 03/07/23 03:50:07.781 +STEP: create a pod that causes the webhook to hang 03/07/23 03:50:07.791 +STEP: create a configmap that should be denied by the webhook 03/07/23 03:50:17.795 +STEP: create a configmap that should be admitted by the webhook 03/07/23 03:50:17.801 +STEP: update (PUT) the admitted configmap to a non-compliant one should be rejected by the webhook 03/07/23 03:50:17.808 +STEP: update (PATCH) the admitted configmap to a non-compliant one should be rejected by the webhook 03/07/23 03:50:17.814 +STEP: create a namespace that bypass the webhook 03/07/23 03:50:17.818 +STEP: create a configmap that violates the webhook policy but is in a whitelisted namespace 03/07/23 03:50:17.822 +[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/framework/framework.go:187 +Mar 7 03:50:17.844: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "webhook-7435" for this suite. 03/07/23 03:50:17.847 +STEP: Destroying namespace "webhook-7435-markers" for this suite. 
03/07/23 03:50:17.851 +[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/apimachinery/webhook.go:104 +{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","completed":290,"skipped":5157,"failed":0} +------------------------------ +• [SLOW TEST] [13.737 seconds] +[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] +test/e2e/apimachinery/framework.go:23 + should be able to deny pod and configmap creation [Conformance] + test/e2e/apimachinery/webhook.go:196 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/framework/framework.go:186 + STEP: Creating a kubernetes client 03/07/23 03:50:04.168 + Mar 7 03:50:04.168: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 + STEP: Building a namespace api object, basename webhook 03/07/23 03:50:04.169 + STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 03:50:04.182 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 03:50:04.185 + [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/apimachinery/webhook.go:89 + STEP: Setting up server cert 03/07/23 03:50:04.198 + STEP: Create role binding to let webhook read extension-apiserver-authentication 03/07/23 03:50:04.625 + STEP: Deploying the webhook pod 03/07/23 03:50:04.655 + STEP: Wait for the deployment to be ready 03/07/23 03:50:04.685 + Mar 7 03:50:04.725: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set + STEP: Deploying the webhook service 03/07/23 03:50:06.732 + STEP: Verifying the service has paired with the endpoint 03/07/23 03:50:06.766 + Mar 7 03:50:07.767: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 + [It] should be able to deny pod and configmap creation [Conformance] + test/e2e/apimachinery/webhook.go:196 + STEP: Registering the webhook via the AdmissionRegistration API 03/07/23 03:50:07.769 + STEP: create a pod that should be denied by the webhook 03/07/23 03:50:07.781 + STEP: create a pod that causes the webhook to hang 03/07/23 03:50:07.791 + STEP: create a configmap that should be denied by the webhook 03/07/23 03:50:17.795 + STEP: create a configmap that should be admitted by the webhook 03/07/23 03:50:17.801 + STEP: update (PUT) the admitted configmap to a non-compliant one should be rejected by the webhook 03/07/23 03:50:17.808 + STEP: update (PATCH) the admitted configmap to a non-compliant one should be rejected by the webhook 03/07/23 03:50:17.814 + STEP: create a namespace that bypass the webhook 03/07/23 03:50:17.818 + STEP: create a configmap that violates the webhook policy but is in a whitelisted namespace 03/07/23 03:50:17.822 + [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/framework/framework.go:187 + Mar 7 03:50:17.844: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + STEP: Destroying namespace "webhook-7435" for this suite. 03/07/23 03:50:17.847 + STEP: Destroying namespace "webhook-7435-markers" for this suite. 
03/07/23 03:50:17.851 + [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/apimachinery/webhook.go:104 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-cli] Kubectl client Proxy server + should support --unix-socket=/path [Conformance] + test/e2e/kubectl/kubectl.go:1810 +[BeforeEach] [sig-cli] Kubectl client + test/e2e/framework/framework.go:186 +STEP: Creating a kubernetes client 03/07/23 03:50:17.908 +Mar 7 03:50:17.908: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 +STEP: Building a namespace api object, basename kubectl 03/07/23 03:50:17.909 +STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 03:50:17.935 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 03:50:17.938 +[BeforeEach] [sig-cli] Kubectl client + test/e2e/kubectl/kubectl.go:272 +[It] should support --unix-socket=/path [Conformance] + test/e2e/kubectl/kubectl.go:1810 +STEP: Starting the proxy 03/07/23 03:50:17.939 +Mar 7 03:50:17.940: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/tmp/kubeconfig-1093879902 --namespace=kubectl-6840 proxy --unix-socket=/tmp/kubectl-proxy-unix3751206635/test' +STEP: retrieving proxy /api/ output 03/07/23 03:50:18.015 +[AfterEach] [sig-cli] Kubectl client + test/e2e/framework/framework.go:187 +Mar 7 03:50:18.017: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "kubectl-6840" for this suite. 03/07/23 03:50:18.02 +{"msg":"PASSED [sig-cli] Kubectl client Proxy server should support --unix-socket=/path [Conformance]","completed":291,"skipped":5210,"failed":0} +------------------------------ +• [0.117 seconds] +[sig-cli] Kubectl client +test/e2e/kubectl/framework.go:23 + Proxy server + test/e2e/kubectl/kubectl.go:1778 + should support --unix-socket=/path [Conformance] + test/e2e/kubectl/kubectl.go:1810 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-cli] Kubectl client + test/e2e/framework/framework.go:186 + STEP: Creating a kubernetes client 03/07/23 03:50:17.908 + Mar 7 03:50:17.908: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 + STEP: Building a namespace api object, basename kubectl 03/07/23 03:50:17.909 + STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 03:50:17.935 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 03:50:17.938 + [BeforeEach] [sig-cli] Kubectl client + test/e2e/kubectl/kubectl.go:272 + [It] should support --unix-socket=/path [Conformance] + test/e2e/kubectl/kubectl.go:1810 + STEP: Starting the proxy 03/07/23 03:50:17.939 + Mar 7 03:50:17.940: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/tmp/kubeconfig-1093879902 --namespace=kubectl-6840 proxy --unix-socket=/tmp/kubectl-proxy-unix3751206635/test' + STEP: retrieving proxy /api/ output 03/07/23 03:50:18.015 + [AfterEach] [sig-cli] Kubectl client + test/e2e/framework/framework.go:187 + Mar 7 03:50:18.017: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + STEP: Destroying namespace "kubectl-6840" for this suite. 
03/07/23 03:50:18.02 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] + should have a working scale subresource [Conformance] + test/e2e/apps/statefulset.go:846 +[BeforeEach] [sig-apps] StatefulSet + test/e2e/framework/framework.go:186 +STEP: Creating a kubernetes client 03/07/23 03:50:18.027 +Mar 7 03:50:18.027: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 +STEP: Building a namespace api object, basename statefulset 03/07/23 03:50:18.027 +STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 03:50:18.049 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 03:50:18.057 +[BeforeEach] [sig-apps] StatefulSet + test/e2e/apps/statefulset.go:96 +[BeforeEach] Basic StatefulSet functionality [StatefulSetBasic] + test/e2e/apps/statefulset.go:111 +STEP: Creating service test in namespace statefulset-7681 03/07/23 03:50:18.066 +[It] should have a working scale subresource [Conformance] + test/e2e/apps/statefulset.go:846 +STEP: Creating statefulset ss in namespace statefulset-7681 03/07/23 03:50:18.075 +Mar 7 03:50:18.083: INFO: Found 0 stateful pods, waiting for 1 +Mar 7 03:50:28.087: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true +STEP: getting scale subresource 03/07/23 03:50:28.091 +STEP: updating a scale subresource 03/07/23 03:50:28.093 +STEP: verifying the statefulset Spec.Replicas was modified 03/07/23 03:50:28.134 +STEP: Patch a scale subresource 03/07/23 03:50:28.136 +STEP: verifying the statefulset Spec.Replicas was modified 03/07/23 03:50:28.14 +[AfterEach] Basic StatefulSet functionality [StatefulSetBasic] + test/e2e/apps/statefulset.go:122 +Mar 7 03:50:28.143: INFO: Deleting all statefulset in ns statefulset-7681 +Mar 7 03:50:28.146: INFO: Scaling statefulset ss to 0 +Mar 7 03:50:38.160: INFO: Waiting for statefulset status.replicas updated to 0 +Mar 7 03:50:38.162: INFO: Deleting statefulset ss +[AfterEach] [sig-apps] StatefulSet + test/e2e/framework/framework.go:187 +Mar 7 03:50:38.195: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "statefulset-7681" for this suite. 
03/07/23 03:50:38.201 +{"msg":"PASSED [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should have a working scale subresource [Conformance]","completed":292,"skipped":5248,"failed":0} +------------------------------ +• [SLOW TEST] [20.179 seconds] +[sig-apps] StatefulSet +test/e2e/apps/framework.go:23 + Basic StatefulSet functionality [StatefulSetBasic] + test/e2e/apps/statefulset.go:101 + should have a working scale subresource [Conformance] + test/e2e/apps/statefulset.go:846 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-apps] StatefulSet + test/e2e/framework/framework.go:186 + STEP: Creating a kubernetes client 03/07/23 03:50:18.027 + Mar 7 03:50:18.027: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 + STEP: Building a namespace api object, basename statefulset 03/07/23 03:50:18.027 + STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 03:50:18.049 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 03:50:18.057 + [BeforeEach] [sig-apps] StatefulSet + test/e2e/apps/statefulset.go:96 + [BeforeEach] Basic StatefulSet functionality [StatefulSetBasic] + test/e2e/apps/statefulset.go:111 + STEP: Creating service test in namespace statefulset-7681 03/07/23 03:50:18.066 + [It] should have a working scale subresource [Conformance] + test/e2e/apps/statefulset.go:846 + STEP: Creating statefulset ss in namespace statefulset-7681 03/07/23 03:50:18.075 + Mar 7 03:50:18.083: INFO: Found 0 stateful pods, waiting for 1 + Mar 7 03:50:28.087: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true + STEP: getting scale subresource 03/07/23 03:50:28.091 + STEP: updating a scale subresource 03/07/23 03:50:28.093 + STEP: verifying the statefulset Spec.Replicas was modified 03/07/23 03:50:28.134 + STEP: Patch a scale subresource 03/07/23 03:50:28.136 + STEP: verifying the statefulset Spec.Replicas was modified 03/07/23 03:50:28.14 + [AfterEach] Basic StatefulSet functionality [StatefulSetBasic] + test/e2e/apps/statefulset.go:122 + Mar 7 03:50:28.143: INFO: Deleting all statefulset in ns statefulset-7681 + Mar 7 03:50:28.146: INFO: Scaling statefulset ss to 0 + Mar 7 03:50:38.160: INFO: Waiting for statefulset status.replicas updated to 0 + Mar 7 03:50:38.162: INFO: Deleting statefulset ss + [AfterEach] [sig-apps] StatefulSet + test/e2e/framework/framework.go:187 + Mar 7 03:50:38.195: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + STEP: Destroying namespace "statefulset-7681" for this suite. 
03/07/23 03:50:38.201 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-node] KubeletManagedEtcHosts + should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/common/node/kubelet_etc_hosts.go:63 +[BeforeEach] [sig-node] KubeletManagedEtcHosts + test/e2e/framework/framework.go:186 +STEP: Creating a kubernetes client 03/07/23 03:50:38.207 +Mar 7 03:50:38.207: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 +STEP: Building a namespace api object, basename e2e-kubelet-etc-hosts 03/07/23 03:50:38.208 +STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 03:50:38.224 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 03:50:38.226 +[It] should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/common/node/kubelet_etc_hosts.go:63 +STEP: Setting up the test 03/07/23 03:50:38.229 +STEP: Creating hostNetwork=false pod 03/07/23 03:50:38.229 +Mar 7 03:50:38.235: INFO: Waiting up to 5m0s for pod "test-pod" in namespace "e2e-kubelet-etc-hosts-2987" to be "running and ready" +Mar 7 03:50:38.239: INFO: Pod "test-pod": Phase="Pending", Reason="", readiness=false. Elapsed: 3.044312ms +Mar 7 03:50:38.239: INFO: The phase of Pod test-pod is Pending, waiting for it to be Running (with Ready = true) +Mar 7 03:50:40.242: INFO: Pod "test-pod": Phase="Running", Reason="", readiness=true. Elapsed: 2.00610563s +Mar 7 03:50:40.242: INFO: The phase of Pod test-pod is Running (Ready = true) +Mar 7 03:50:40.242: INFO: Pod "test-pod" satisfied condition "running and ready" +STEP: Creating hostNetwork=true pod 03/07/23 03:50:40.244 +Mar 7 03:50:40.250: INFO: Waiting up to 5m0s for pod "test-host-network-pod" in namespace "e2e-kubelet-etc-hosts-2987" to be "running and ready" +Mar 7 03:50:40.254: INFO: Pod "test-host-network-pod": Phase="Pending", Reason="", readiness=false. Elapsed: 3.83759ms +Mar 7 03:50:40.254: INFO: The phase of Pod test-host-network-pod is Pending, waiting for it to be Running (with Ready = true) +Mar 7 03:50:42.258: INFO: Pod "test-host-network-pod": Phase="Running", Reason="", readiness=true. 
Elapsed: 2.008286948s +Mar 7 03:50:42.258: INFO: The phase of Pod test-host-network-pod is Running (Ready = true) +Mar 7 03:50:42.258: INFO: Pod "test-host-network-pod" satisfied condition "running and ready" +STEP: Running the test 03/07/23 03:50:42.26 +STEP: Verifying /etc/hosts of container is kubelet-managed for pod with hostNetwork=false 03/07/23 03:50:42.26 +Mar 7 03:50:42.260: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-2987 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} +Mar 7 03:50:42.260: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 +Mar 7 03:50:42.261: INFO: ExecWithOptions: Clientset creation +Mar 7 03:50:42.261: INFO: ExecWithOptions: execute(POST https://10.96.0.1:443/api/v1/namespaces/e2e-kubelet-etc-hosts-2987/pods/test-pod/exec?command=cat&command=%2Fetc%2Fhosts&container=busybox-1&container=busybox-1&stderr=true&stdout=true) +Mar 7 03:50:42.320: INFO: Exec stderr: "" +Mar 7 03:50:42.320: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-2987 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} +Mar 7 03:50:42.320: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 +Mar 7 03:50:42.321: INFO: ExecWithOptions: Clientset creation +Mar 7 03:50:42.321: INFO: ExecWithOptions: execute(POST https://10.96.0.1:443/api/v1/namespaces/e2e-kubelet-etc-hosts-2987/pods/test-pod/exec?command=cat&command=%2Fetc%2Fhosts-original&container=busybox-1&container=busybox-1&stderr=true&stdout=true) +Mar 7 03:50:42.393: INFO: Exec stderr: "" +Mar 7 03:50:42.393: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-2987 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} +Mar 7 03:50:42.393: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 +Mar 7 03:50:42.393: INFO: ExecWithOptions: Clientset creation +Mar 7 03:50:42.393: INFO: ExecWithOptions: execute(POST https://10.96.0.1:443/api/v1/namespaces/e2e-kubelet-etc-hosts-2987/pods/test-pod/exec?command=cat&command=%2Fetc%2Fhosts&container=busybox-2&container=busybox-2&stderr=true&stdout=true) +Mar 7 03:50:42.450: INFO: Exec stderr: "" +Mar 7 03:50:42.450: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-2987 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} +Mar 7 03:50:42.450: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 +Mar 7 03:50:42.451: INFO: ExecWithOptions: Clientset creation +Mar 7 03:50:42.451: INFO: ExecWithOptions: execute(POST https://10.96.0.1:443/api/v1/namespaces/e2e-kubelet-etc-hosts-2987/pods/test-pod/exec?command=cat&command=%2Fetc%2Fhosts-original&container=busybox-2&container=busybox-2&stderr=true&stdout=true) +Mar 7 03:50:42.505: INFO: Exec stderr: "" +STEP: Verifying /etc/hosts of container is not kubelet-managed since container specifies /etc/hosts mount 03/07/23 03:50:42.505 +Mar 7 03:50:42.505: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-2987 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} +Mar 7 03:50:42.505: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 +Mar 7 03:50:42.505: INFO: ExecWithOptions: Clientset creation +Mar 7 03:50:42.505: INFO: ExecWithOptions: execute(POST 
https://10.96.0.1:443/api/v1/namespaces/e2e-kubelet-etc-hosts-2987/pods/test-pod/exec?command=cat&command=%2Fetc%2Fhosts&container=busybox-3&container=busybox-3&stderr=true&stdout=true) +Mar 7 03:50:42.568: INFO: Exec stderr: "" +Mar 7 03:50:42.568: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-2987 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} +Mar 7 03:50:42.568: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 +Mar 7 03:50:42.569: INFO: ExecWithOptions: Clientset creation +Mar 7 03:50:42.569: INFO: ExecWithOptions: execute(POST https://10.96.0.1:443/api/v1/namespaces/e2e-kubelet-etc-hosts-2987/pods/test-pod/exec?command=cat&command=%2Fetc%2Fhosts-original&container=busybox-3&container=busybox-3&stderr=true&stdout=true) +Mar 7 03:50:42.635: INFO: Exec stderr: "" +STEP: Verifying /etc/hosts content of container is not kubelet-managed for pod with hostNetwork=true 03/07/23 03:50:42.635 +Mar 7 03:50:42.635: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-2987 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} +Mar 7 03:50:42.635: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 +Mar 7 03:50:42.635: INFO: ExecWithOptions: Clientset creation +Mar 7 03:50:42.635: INFO: ExecWithOptions: execute(POST https://10.96.0.1:443/api/v1/namespaces/e2e-kubelet-etc-hosts-2987/pods/test-host-network-pod/exec?command=cat&command=%2Fetc%2Fhosts&container=busybox-1&container=busybox-1&stderr=true&stdout=true) +Mar 7 03:50:42.690: INFO: Exec stderr: "" +Mar 7 03:50:42.690: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-2987 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} +Mar 7 03:50:42.690: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 +Mar 7 03:50:42.691: INFO: ExecWithOptions: Clientset creation +Mar 7 03:50:42.691: INFO: ExecWithOptions: execute(POST https://10.96.0.1:443/api/v1/namespaces/e2e-kubelet-etc-hosts-2987/pods/test-host-network-pod/exec?command=cat&command=%2Fetc%2Fhosts-original&container=busybox-1&container=busybox-1&stderr=true&stdout=true) +Mar 7 03:50:42.740: INFO: Exec stderr: "" +Mar 7 03:50:42.740: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-2987 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} +Mar 7 03:50:42.740: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 +Mar 7 03:50:42.741: INFO: ExecWithOptions: Clientset creation +Mar 7 03:50:42.741: INFO: ExecWithOptions: execute(POST https://10.96.0.1:443/api/v1/namespaces/e2e-kubelet-etc-hosts-2987/pods/test-host-network-pod/exec?command=cat&command=%2Fetc%2Fhosts&container=busybox-2&container=busybox-2&stderr=true&stdout=true) +Mar 7 03:50:42.801: INFO: Exec stderr: "" +Mar 7 03:50:42.801: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-2987 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} +Mar 7 03:50:42.801: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 +Mar 7 03:50:42.802: INFO: ExecWithOptions: Clientset creation +Mar 7 03:50:42.802: INFO: ExecWithOptions: execute(POST 
https://10.96.0.1:443/api/v1/namespaces/e2e-kubelet-etc-hosts-2987/pods/test-host-network-pod/exec?command=cat&command=%2Fetc%2Fhosts-original&container=busybox-2&container=busybox-2&stderr=true&stdout=true) +Mar 7 03:50:42.856: INFO: Exec stderr: "" +[AfterEach] [sig-node] KubeletManagedEtcHosts + test/e2e/framework/framework.go:187 +Mar 7 03:50:42.856: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "e2e-kubelet-etc-hosts-2987" for this suite. 03/07/23 03:50:42.86 +{"msg":"PASSED [sig-node] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]","completed":293,"skipped":5293,"failed":0} +------------------------------ +• [4.657 seconds] +[sig-node] KubeletManagedEtcHosts +test/e2e/common/node/framework.go:23 + should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/common/node/kubelet_etc_hosts.go:63 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-node] KubeletManagedEtcHosts + test/e2e/framework/framework.go:186 + STEP: Creating a kubernetes client 03/07/23 03:50:38.207 + Mar 7 03:50:38.207: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 + STEP: Building a namespace api object, basename e2e-kubelet-etc-hosts 03/07/23 03:50:38.208 + STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 03:50:38.224 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 03:50:38.226 + [It] should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/common/node/kubelet_etc_hosts.go:63 + STEP: Setting up the test 03/07/23 03:50:38.229 + STEP: Creating hostNetwork=false pod 03/07/23 03:50:38.229 + Mar 7 03:50:38.235: INFO: Waiting up to 5m0s for pod "test-pod" in namespace "e2e-kubelet-etc-hosts-2987" to be "running and ready" + Mar 7 03:50:38.239: INFO: Pod "test-pod": Phase="Pending", Reason="", readiness=false. Elapsed: 3.044312ms + Mar 7 03:50:38.239: INFO: The phase of Pod test-pod is Pending, waiting for it to be Running (with Ready = true) + Mar 7 03:50:40.242: INFO: Pod "test-pod": Phase="Running", Reason="", readiness=true. Elapsed: 2.00610563s + Mar 7 03:50:40.242: INFO: The phase of Pod test-pod is Running (Ready = true) + Mar 7 03:50:40.242: INFO: Pod "test-pod" satisfied condition "running and ready" + STEP: Creating hostNetwork=true pod 03/07/23 03:50:40.244 + Mar 7 03:50:40.250: INFO: Waiting up to 5m0s for pod "test-host-network-pod" in namespace "e2e-kubelet-etc-hosts-2987" to be "running and ready" + Mar 7 03:50:40.254: INFO: Pod "test-host-network-pod": Phase="Pending", Reason="", readiness=false. Elapsed: 3.83759ms + Mar 7 03:50:40.254: INFO: The phase of Pod test-host-network-pod is Pending, waiting for it to be Running (with Ready = true) + Mar 7 03:50:42.258: INFO: Pod "test-host-network-pod": Phase="Running", Reason="", readiness=true. 
Elapsed: 2.008286948s + Mar 7 03:50:42.258: INFO: The phase of Pod test-host-network-pod is Running (Ready = true) + Mar 7 03:50:42.258: INFO: Pod "test-host-network-pod" satisfied condition "running and ready" + STEP: Running the test 03/07/23 03:50:42.26 + STEP: Verifying /etc/hosts of container is kubelet-managed for pod with hostNetwork=false 03/07/23 03:50:42.26 + Mar 7 03:50:42.260: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-2987 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} + Mar 7 03:50:42.260: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 + Mar 7 03:50:42.261: INFO: ExecWithOptions: Clientset creation + Mar 7 03:50:42.261: INFO: ExecWithOptions: execute(POST https://10.96.0.1:443/api/v1/namespaces/e2e-kubelet-etc-hosts-2987/pods/test-pod/exec?command=cat&command=%2Fetc%2Fhosts&container=busybox-1&container=busybox-1&stderr=true&stdout=true) + Mar 7 03:50:42.320: INFO: Exec stderr: "" + Mar 7 03:50:42.320: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-2987 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} + Mar 7 03:50:42.320: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 + Mar 7 03:50:42.321: INFO: ExecWithOptions: Clientset creation + Mar 7 03:50:42.321: INFO: ExecWithOptions: execute(POST https://10.96.0.1:443/api/v1/namespaces/e2e-kubelet-etc-hosts-2987/pods/test-pod/exec?command=cat&command=%2Fetc%2Fhosts-original&container=busybox-1&container=busybox-1&stderr=true&stdout=true) + Mar 7 03:50:42.393: INFO: Exec stderr: "" + Mar 7 03:50:42.393: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-2987 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} + Mar 7 03:50:42.393: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 + Mar 7 03:50:42.393: INFO: ExecWithOptions: Clientset creation + Mar 7 03:50:42.393: INFO: ExecWithOptions: execute(POST https://10.96.0.1:443/api/v1/namespaces/e2e-kubelet-etc-hosts-2987/pods/test-pod/exec?command=cat&command=%2Fetc%2Fhosts&container=busybox-2&container=busybox-2&stderr=true&stdout=true) + Mar 7 03:50:42.450: INFO: Exec stderr: "" + Mar 7 03:50:42.450: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-2987 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} + Mar 7 03:50:42.450: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 + Mar 7 03:50:42.451: INFO: ExecWithOptions: Clientset creation + Mar 7 03:50:42.451: INFO: ExecWithOptions: execute(POST https://10.96.0.1:443/api/v1/namespaces/e2e-kubelet-etc-hosts-2987/pods/test-pod/exec?command=cat&command=%2Fetc%2Fhosts-original&container=busybox-2&container=busybox-2&stderr=true&stdout=true) + Mar 7 03:50:42.505: INFO: Exec stderr: "" + STEP: Verifying /etc/hosts of container is not kubelet-managed since container specifies /etc/hosts mount 03/07/23 03:50:42.505 + Mar 7 03:50:42.505: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-2987 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} + Mar 7 03:50:42.505: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 + Mar 7 03:50:42.505: INFO: ExecWithOptions: Clientset creation + Mar 7 03:50:42.505: INFO: 
ExecWithOptions: execute(POST https://10.96.0.1:443/api/v1/namespaces/e2e-kubelet-etc-hosts-2987/pods/test-pod/exec?command=cat&command=%2Fetc%2Fhosts&container=busybox-3&container=busybox-3&stderr=true&stdout=true) + Mar 7 03:50:42.568: INFO: Exec stderr: "" + Mar 7 03:50:42.568: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-2987 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} + Mar 7 03:50:42.568: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 + Mar 7 03:50:42.569: INFO: ExecWithOptions: Clientset creation + Mar 7 03:50:42.569: INFO: ExecWithOptions: execute(POST https://10.96.0.1:443/api/v1/namespaces/e2e-kubelet-etc-hosts-2987/pods/test-pod/exec?command=cat&command=%2Fetc%2Fhosts-original&container=busybox-3&container=busybox-3&stderr=true&stdout=true) + Mar 7 03:50:42.635: INFO: Exec stderr: "" + STEP: Verifying /etc/hosts content of container is not kubelet-managed for pod with hostNetwork=true 03/07/23 03:50:42.635 + Mar 7 03:50:42.635: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-2987 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} + Mar 7 03:50:42.635: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 + Mar 7 03:50:42.635: INFO: ExecWithOptions: Clientset creation + Mar 7 03:50:42.635: INFO: ExecWithOptions: execute(POST https://10.96.0.1:443/api/v1/namespaces/e2e-kubelet-etc-hosts-2987/pods/test-host-network-pod/exec?command=cat&command=%2Fetc%2Fhosts&container=busybox-1&container=busybox-1&stderr=true&stdout=true) + Mar 7 03:50:42.690: INFO: Exec stderr: "" + Mar 7 03:50:42.690: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-2987 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} + Mar 7 03:50:42.690: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 + Mar 7 03:50:42.691: INFO: ExecWithOptions: Clientset creation + Mar 7 03:50:42.691: INFO: ExecWithOptions: execute(POST https://10.96.0.1:443/api/v1/namespaces/e2e-kubelet-etc-hosts-2987/pods/test-host-network-pod/exec?command=cat&command=%2Fetc%2Fhosts-original&container=busybox-1&container=busybox-1&stderr=true&stdout=true) + Mar 7 03:50:42.740: INFO: Exec stderr: "" + Mar 7 03:50:42.740: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-2987 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} + Mar 7 03:50:42.740: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 + Mar 7 03:50:42.741: INFO: ExecWithOptions: Clientset creation + Mar 7 03:50:42.741: INFO: ExecWithOptions: execute(POST https://10.96.0.1:443/api/v1/namespaces/e2e-kubelet-etc-hosts-2987/pods/test-host-network-pod/exec?command=cat&command=%2Fetc%2Fhosts&container=busybox-2&container=busybox-2&stderr=true&stdout=true) + Mar 7 03:50:42.801: INFO: Exec stderr: "" + Mar 7 03:50:42.801: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-2987 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} + Mar 7 03:50:42.801: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 + Mar 7 03:50:42.802: INFO: ExecWithOptions: Clientset creation + Mar 7 03:50:42.802: INFO: ExecWithOptions: 
execute(POST https://10.96.0.1:443/api/v1/namespaces/e2e-kubelet-etc-hosts-2987/pods/test-host-network-pod/exec?command=cat&command=%2Fetc%2Fhosts-original&container=busybox-2&container=busybox-2&stderr=true&stdout=true) + Mar 7 03:50:42.856: INFO: Exec stderr: "" + [AfterEach] [sig-node] KubeletManagedEtcHosts + test/e2e/framework/framework.go:187 + Mar 7 03:50:42.856: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + STEP: Destroying namespace "e2e-kubelet-etc-hosts-2987" for this suite. 03/07/23 03:50:42.86 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-node] Sysctls [LinuxOnly] [NodeConformance] + should reject invalid sysctls [MinimumKubeletVersion:1.21] [Conformance] + test/e2e/common/node/sysctl.go:123 +[BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeConformance] + test/e2e/common/node/sysctl.go:37 +[BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeConformance] + test/e2e/framework/framework.go:186 +STEP: Creating a kubernetes client 03/07/23 03:50:42.869 +Mar 7 03:50:42.869: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 +STEP: Building a namespace api object, basename sysctl 03/07/23 03:50:42.87 +STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 03:50:42.882 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 03:50:42.884 +[BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeConformance] + test/e2e/common/node/sysctl.go:67 +[It] should reject invalid sysctls [MinimumKubeletVersion:1.21] [Conformance] + test/e2e/common/node/sysctl.go:123 +STEP: Creating a pod with one valid and two invalid sysctls 03/07/23 03:50:42.886 +[AfterEach] [sig-node] Sysctls [LinuxOnly] [NodeConformance] + test/e2e/framework/framework.go:187 +Mar 7 03:50:42.889: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "sysctl-378" for this suite. 
03/07/23 03:50:42.892 +{"msg":"PASSED [sig-node] Sysctls [LinuxOnly] [NodeConformance] should reject invalid sysctls [MinimumKubeletVersion:1.21] [Conformance]","completed":294,"skipped":5347,"failed":0} +------------------------------ +• [0.027 seconds] +[sig-node] Sysctls [LinuxOnly] [NodeConformance] +test/e2e/common/node/framework.go:23 + should reject invalid sysctls [MinimumKubeletVersion:1.21] [Conformance] + test/e2e/common/node/sysctl.go:123 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeConformance] + test/e2e/common/node/sysctl.go:37 + [BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeConformance] + test/e2e/framework/framework.go:186 + STEP: Creating a kubernetes client 03/07/23 03:50:42.869 + Mar 7 03:50:42.869: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 + STEP: Building a namespace api object, basename sysctl 03/07/23 03:50:42.87 + STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 03:50:42.882 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 03:50:42.884 + [BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeConformance] + test/e2e/common/node/sysctl.go:67 + [It] should reject invalid sysctls [MinimumKubeletVersion:1.21] [Conformance] + test/e2e/common/node/sysctl.go:123 + STEP: Creating a pod with one valid and two invalid sysctls 03/07/23 03:50:42.886 + [AfterEach] [sig-node] Sysctls [LinuxOnly] [NodeConformance] + test/e2e/framework/framework.go:187 + Mar 7 03:50:42.889: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + STEP: Destroying namespace "sysctl-378" for this suite. 03/07/23 03:50:42.892 + << End Captured GinkgoWriter Output +------------------------------ +[sig-node] Kubelet when scheduling a read only busybox container + should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/common/node/kubelet.go:184 +[BeforeEach] [sig-node] Kubelet + test/e2e/framework/framework.go:186 +STEP: Creating a kubernetes client 03/07/23 03:50:42.896 +Mar 7 03:50:42.896: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 +STEP: Building a namespace api object, basename kubelet-test 03/07/23 03:50:42.897 +STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 03:50:42.909 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 03:50:42.911 +[BeforeEach] [sig-node] Kubelet + test/e2e/common/node/kubelet.go:41 +[It] should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/common/node/kubelet.go:184 +Mar 7 03:50:42.918: INFO: Waiting up to 5m0s for pod "busybox-readonly-fs6f8dc0a4-6e1a-43e8-96f0-437830189198" in namespace "kubelet-test-8992" to be "running and ready" +Mar 7 03:50:42.921: INFO: Pod "busybox-readonly-fs6f8dc0a4-6e1a-43e8-96f0-437830189198": Phase="Pending", Reason="", readiness=false. Elapsed: 2.25557ms +Mar 7 03:50:42.921: INFO: The phase of Pod busybox-readonly-fs6f8dc0a4-6e1a-43e8-96f0-437830189198 is Pending, waiting for it to be Running (with Ready = true) +Mar 7 03:50:44.924: INFO: Pod "busybox-readonly-fs6f8dc0a4-6e1a-43e8-96f0-437830189198": Phase="Running", Reason="", readiness=true. 
Elapsed: 2.0056272s +Mar 7 03:50:44.924: INFO: The phase of Pod busybox-readonly-fs6f8dc0a4-6e1a-43e8-96f0-437830189198 is Running (Ready = true) +Mar 7 03:50:44.924: INFO: Pod "busybox-readonly-fs6f8dc0a4-6e1a-43e8-96f0-437830189198" satisfied condition "running and ready" +[AfterEach] [sig-node] Kubelet + test/e2e/framework/framework.go:187 +Mar 7 03:50:44.931: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "kubelet-test-8992" for this suite. 03/07/23 03:50:44.934 +{"msg":"PASSED [sig-node] Kubelet when scheduling a read only busybox container should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]","completed":295,"skipped":5347,"failed":0} +------------------------------ +• [2.055 seconds] +[sig-node] Kubelet +test/e2e/common/node/framework.go:23 + when scheduling a read only busybox container + test/e2e/common/node/kubelet.go:175 + should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/common/node/kubelet.go:184 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-node] Kubelet + test/e2e/framework/framework.go:186 + STEP: Creating a kubernetes client 03/07/23 03:50:42.896 + Mar 7 03:50:42.896: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 + STEP: Building a namespace api object, basename kubelet-test 03/07/23 03:50:42.897 + STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 03:50:42.909 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 03:50:42.911 + [BeforeEach] [sig-node] Kubelet + test/e2e/common/node/kubelet.go:41 + [It] should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/common/node/kubelet.go:184 + Mar 7 03:50:42.918: INFO: Waiting up to 5m0s for pod "busybox-readonly-fs6f8dc0a4-6e1a-43e8-96f0-437830189198" in namespace "kubelet-test-8992" to be "running and ready" + Mar 7 03:50:42.921: INFO: Pod "busybox-readonly-fs6f8dc0a4-6e1a-43e8-96f0-437830189198": Phase="Pending", Reason="", readiness=false. Elapsed: 2.25557ms + Mar 7 03:50:42.921: INFO: The phase of Pod busybox-readonly-fs6f8dc0a4-6e1a-43e8-96f0-437830189198 is Pending, waiting for it to be Running (with Ready = true) + Mar 7 03:50:44.924: INFO: Pod "busybox-readonly-fs6f8dc0a4-6e1a-43e8-96f0-437830189198": Phase="Running", Reason="", readiness=true. Elapsed: 2.0056272s + Mar 7 03:50:44.924: INFO: The phase of Pod busybox-readonly-fs6f8dc0a4-6e1a-43e8-96f0-437830189198 is Running (Ready = true) + Mar 7 03:50:44.924: INFO: Pod "busybox-readonly-fs6f8dc0a4-6e1a-43e8-96f0-437830189198" satisfied condition "running and ready" + [AfterEach] [sig-node] Kubelet + test/e2e/framework/framework.go:187 + Mar 7 03:50:44.931: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + STEP: Destroying namespace "kubelet-test-8992" for this suite. 
03/07/23 03:50:44.934 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-network] Services + should complete a service status lifecycle [Conformance] + test/e2e/network/service.go:3415 +[BeforeEach] [sig-network] Services + test/e2e/framework/framework.go:186 +STEP: Creating a kubernetes client 03/07/23 03:50:44.955 +Mar 7 03:50:44.956: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 +STEP: Building a namespace api object, basename services 03/07/23 03:50:44.956 +STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 03:50:44.978 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 03:50:44.981 +[BeforeEach] [sig-network] Services + test/e2e/network/service.go:758 +[It] should complete a service status lifecycle [Conformance] + test/e2e/network/service.go:3415 +STEP: creating a Service 03/07/23 03:50:44.986 +STEP: watching for the Service to be added 03/07/23 03:50:44.998 +Mar 7 03:50:45.000: INFO: Found Service test-service-x5td4 in namespace services-9220 with labels: map[test-service-static:true] & ports [{http TCP 80 {0 80 } 0}] +Mar 7 03:50:45.000: INFO: Service test-service-x5td4 created +STEP: Getting /status 03/07/23 03:50:45 +Mar 7 03:50:45.004: INFO: Service test-service-x5td4 has LoadBalancer: {[]} +STEP: patching the ServiceStatus 03/07/23 03:50:45.004 +STEP: watching for the Service to be patched 03/07/23 03:50:45.011 +Mar 7 03:50:45.013: INFO: observed Service test-service-x5td4 in namespace services-9220 with annotations: map[] & LoadBalancer: {[]} +Mar 7 03:50:45.013: INFO: Found Service test-service-x5td4 in namespace services-9220 with annotations: map[patchedstatus:true] & LoadBalancer: {[{203.0.113.1 []}]} +Mar 7 03:50:45.013: INFO: Service test-service-x5td4 has service status patched +STEP: updating the ServiceStatus 03/07/23 03:50:45.013 +Mar 7 03:50:45.022: INFO: updatedStatus.Conditions: []v1.Condition{v1.Condition{Type:"StatusUpdate", Status:"True", ObservedGeneration:0, LastTransitionTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Reason:"E2E", Message:"Set from e2e test"}} +STEP: watching for the Service to be updated 03/07/23 03:50:45.022 +Mar 7 03:50:45.024: INFO: Observed Service test-service-x5td4 in namespace services-9220 with annotations: map[] & Conditions: {[]} +Mar 7 03:50:45.024: INFO: Observed event: &Service{ObjectMeta:{test-service-x5td4 services-9220 6720ac6b-13f9-4ca5-bcd3-09735a31e34e 73282 0 2023-03-07 03:50:44 +0000 UTC map[test-service-static:true] map[patchedstatus:true] [] [] [{e2e.test Update v1 2023-03-07 03:50:44 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:test-service-static":{}}},"f:spec":{"f:internalTrafficPolicy":{},"f:ports":{".":{},"k:{\"port\":80,\"protocol\":\"TCP\"}":{".":{},"f:name":{},"f:port":{},"f:protocol":{},"f:targetPort":{}}},"f:sessionAffinity":{},"f:type":{}}} } {e2e.test Update v1 2023-03-07 03:50:45 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:patchedstatus":{}}},"f:status":{"f:loadBalancer":{"f:ingress":{}}}} status}]},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:http,Protocol:TCP,Port:80,TargetPort:{0 80 
},NodePort:0,AppProtocol:nil,},},Selector:map[string]string{},ClusterIP:10.110.98.26,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.110.98.26],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{LoadBalancerIngress{IP:203.0.113.1,Hostname:,Ports:[]PortStatus{},},},},Conditions:[]Condition{},},} +Mar 7 03:50:45.024: INFO: Found Service test-service-x5td4 in namespace services-9220 with annotations: map[patchedstatus:true] & Conditions: [{StatusUpdate True 0 0001-01-01 00:00:00 +0000 UTC E2E Set from e2e test}] +Mar 7 03:50:45.024: INFO: Service test-service-x5td4 has service status updated +STEP: patching the service 03/07/23 03:50:45.024 +STEP: watching for the Service to be patched 03/07/23 03:50:45.035 +Mar 7 03:50:45.037: INFO: observed Service test-service-x5td4 in namespace services-9220 with labels: map[test-service-static:true] +Mar 7 03:50:45.037: INFO: observed Service test-service-x5td4 in namespace services-9220 with labels: map[test-service-static:true] +Mar 7 03:50:45.037: INFO: observed Service test-service-x5td4 in namespace services-9220 with labels: map[test-service-static:true] +Mar 7 03:50:45.037: INFO: Found Service test-service-x5td4 in namespace services-9220 with labels: map[test-service:patched test-service-static:true] +Mar 7 03:50:45.037: INFO: Service test-service-x5td4 patched +STEP: deleting the service 03/07/23 03:50:45.037 +STEP: watching for the Service to be deleted 03/07/23 03:50:45.064 +Mar 7 03:50:45.066: INFO: Observed event: ADDED +Mar 7 03:50:45.066: INFO: Observed event: MODIFIED +Mar 7 03:50:45.066: INFO: Observed event: MODIFIED +Mar 7 03:50:45.066: INFO: Observed event: MODIFIED +Mar 7 03:50:45.066: INFO: Found Service test-service-x5td4 in namespace services-9220 with labels: map[test-service:patched test-service-static:true] & annotations: map[patchedstatus:true] +Mar 7 03:50:45.066: INFO: Service test-service-x5td4 deleted +[AfterEach] [sig-network] Services + test/e2e/framework/framework.go:187 +Mar 7 03:50:45.066: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "services-9220" for this suite. 
03/07/23 03:50:45.069 +[AfterEach] [sig-network] Services + test/e2e/network/service.go:762 +{"msg":"PASSED [sig-network] Services should complete a service status lifecycle [Conformance]","completed":296,"skipped":5440,"failed":0} +------------------------------ +• [0.119 seconds] +[sig-network] Services +test/e2e/network/common/framework.go:23 + should complete a service status lifecycle [Conformance] + test/e2e/network/service.go:3415 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-network] Services + test/e2e/framework/framework.go:186 + STEP: Creating a kubernetes client 03/07/23 03:50:44.955 + Mar 7 03:50:44.956: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 + STEP: Building a namespace api object, basename services 03/07/23 03:50:44.956 + STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 03:50:44.978 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 03:50:44.981 + [BeforeEach] [sig-network] Services + test/e2e/network/service.go:758 + [It] should complete a service status lifecycle [Conformance] + test/e2e/network/service.go:3415 + STEP: creating a Service 03/07/23 03:50:44.986 + STEP: watching for the Service to be added 03/07/23 03:50:44.998 + Mar 7 03:50:45.000: INFO: Found Service test-service-x5td4 in namespace services-9220 with labels: map[test-service-static:true] & ports [{http TCP 80 {0 80 } 0}] + Mar 7 03:50:45.000: INFO: Service test-service-x5td4 created + STEP: Getting /status 03/07/23 03:50:45 + Mar 7 03:50:45.004: INFO: Service test-service-x5td4 has LoadBalancer: {[]} + STEP: patching the ServiceStatus 03/07/23 03:50:45.004 + STEP: watching for the Service to be patched 03/07/23 03:50:45.011 + Mar 7 03:50:45.013: INFO: observed Service test-service-x5td4 in namespace services-9220 with annotations: map[] & LoadBalancer: {[]} + Mar 7 03:50:45.013: INFO: Found Service test-service-x5td4 in namespace services-9220 with annotations: map[patchedstatus:true] & LoadBalancer: {[{203.0.113.1 []}]} + Mar 7 03:50:45.013: INFO: Service test-service-x5td4 has service status patched + STEP: updating the ServiceStatus 03/07/23 03:50:45.013 + Mar 7 03:50:45.022: INFO: updatedStatus.Conditions: []v1.Condition{v1.Condition{Type:"StatusUpdate", Status:"True", ObservedGeneration:0, LastTransitionTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Reason:"E2E", Message:"Set from e2e test"}} + STEP: watching for the Service to be updated 03/07/23 03:50:45.022 + Mar 7 03:50:45.024: INFO: Observed Service test-service-x5td4 in namespace services-9220 with annotations: map[] & Conditions: {[]} + Mar 7 03:50:45.024: INFO: Observed event: &Service{ObjectMeta:{test-service-x5td4 services-9220 6720ac6b-13f9-4ca5-bcd3-09735a31e34e 73282 0 2023-03-07 03:50:44 +0000 UTC map[test-service-static:true] map[patchedstatus:true] [] [] [{e2e.test Update v1 2023-03-07 03:50:44 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:test-service-static":{}}},"f:spec":{"f:internalTrafficPolicy":{},"f:ports":{".":{},"k:{\"port\":80,\"protocol\":\"TCP\"}":{".":{},"f:name":{},"f:port":{},"f:protocol":{},"f:targetPort":{}}},"f:sessionAffinity":{},"f:type":{}}} } {e2e.test Update v1 2023-03-07 03:50:45 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:patchedstatus":{}}},"f:status":{"f:loadBalancer":{"f:ingress":{}}}} status}]},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:http,Protocol:TCP,Port:80,TargetPort:{0 80 
},NodePort:0,AppProtocol:nil,},},Selector:map[string]string{},ClusterIP:10.110.98.26,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.110.98.26],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{LoadBalancerIngress{IP:203.0.113.1,Hostname:,Ports:[]PortStatus{},},},},Conditions:[]Condition{},},} + Mar 7 03:50:45.024: INFO: Found Service test-service-x5td4 in namespace services-9220 with annotations: map[patchedstatus:true] & Conditions: [{StatusUpdate True 0 0001-01-01 00:00:00 +0000 UTC E2E Set from e2e test}] + Mar 7 03:50:45.024: INFO: Service test-service-x5td4 has service status updated + STEP: patching the service 03/07/23 03:50:45.024 + STEP: watching for the Service to be patched 03/07/23 03:50:45.035 + Mar 7 03:50:45.037: INFO: observed Service test-service-x5td4 in namespace services-9220 with labels: map[test-service-static:true] + Mar 7 03:50:45.037: INFO: observed Service test-service-x5td4 in namespace services-9220 with labels: map[test-service-static:true] + Mar 7 03:50:45.037: INFO: observed Service test-service-x5td4 in namespace services-9220 with labels: map[test-service-static:true] + Mar 7 03:50:45.037: INFO: Found Service test-service-x5td4 in namespace services-9220 with labels: map[test-service:patched test-service-static:true] + Mar 7 03:50:45.037: INFO: Service test-service-x5td4 patched + STEP: deleting the service 03/07/23 03:50:45.037 + STEP: watching for the Service to be deleted 03/07/23 03:50:45.064 + Mar 7 03:50:45.066: INFO: Observed event: ADDED + Mar 7 03:50:45.066: INFO: Observed event: MODIFIED + Mar 7 03:50:45.066: INFO: Observed event: MODIFIED + Mar 7 03:50:45.066: INFO: Observed event: MODIFIED + Mar 7 03:50:45.066: INFO: Found Service test-service-x5td4 in namespace services-9220 with labels: map[test-service:patched test-service-static:true] & annotations: map[patchedstatus:true] + Mar 7 03:50:45.066: INFO: Service test-service-x5td4 deleted + [AfterEach] [sig-network] Services + test/e2e/framework/framework.go:187 + Mar 7 03:50:45.066: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + STEP: Destroying namespace "services-9220" for this suite. 
03/07/23 03:50:45.069 + [AfterEach] [sig-network] Services + test/e2e/network/service.go:762 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-node] PodTemplates + should delete a collection of pod templates [Conformance] + test/e2e/common/node/podtemplates.go:122 +[BeforeEach] [sig-node] PodTemplates + test/e2e/framework/framework.go:186 +STEP: Creating a kubernetes client 03/07/23 03:50:45.076 +Mar 7 03:50:45.076: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 +STEP: Building a namespace api object, basename podtemplate 03/07/23 03:50:45.076 +STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 03:50:45.092 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 03:50:45.095 +[It] should delete a collection of pod templates [Conformance] + test/e2e/common/node/podtemplates.go:122 +STEP: Create set of pod templates 03/07/23 03:50:45.097 +Mar 7 03:50:45.102: INFO: created test-podtemplate-1 +Mar 7 03:50:45.106: INFO: created test-podtemplate-2 +Mar 7 03:50:45.110: INFO: created test-podtemplate-3 +STEP: get a list of pod templates with a label in the current namespace 03/07/23 03:50:45.11 +STEP: delete collection of pod templates 03/07/23 03:50:45.113 +Mar 7 03:50:45.114: INFO: requesting DeleteCollection of pod templates +STEP: check that the list of pod templates matches the requested quantity 03/07/23 03:50:45.127 +Mar 7 03:50:45.127: INFO: requesting list of pod templates to confirm quantity +[AfterEach] [sig-node] PodTemplates + test/e2e/framework/framework.go:187 +Mar 7 03:50:45.129: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "podtemplate-7295" for this suite. 03/07/23 03:50:45.132 +{"msg":"PASSED [sig-node] PodTemplates should delete a collection of pod templates [Conformance]","completed":297,"skipped":5468,"failed":0} +------------------------------ +• [0.062 seconds] +[sig-node] PodTemplates +test/e2e/common/node/framework.go:23 + should delete a collection of pod templates [Conformance] + test/e2e/common/node/podtemplates.go:122 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-node] PodTemplates + test/e2e/framework/framework.go:186 + STEP: Creating a kubernetes client 03/07/23 03:50:45.076 + Mar 7 03:50:45.076: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 + STEP: Building a namespace api object, basename podtemplate 03/07/23 03:50:45.076 + STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 03:50:45.092 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 03:50:45.095 + [It] should delete a collection of pod templates [Conformance] + test/e2e/common/node/podtemplates.go:122 + STEP: Create set of pod templates 03/07/23 03:50:45.097 + Mar 7 03:50:45.102: INFO: created test-podtemplate-1 + Mar 7 03:50:45.106: INFO: created test-podtemplate-2 + Mar 7 03:50:45.110: INFO: created test-podtemplate-3 + STEP: get a list of pod templates with a label in the current namespace 03/07/23 03:50:45.11 + STEP: delete collection of pod templates 03/07/23 03:50:45.113 + Mar 7 03:50:45.114: INFO: requesting DeleteCollection of pod templates + STEP: check that the list of pod templates matches the requested quantity 03/07/23 03:50:45.127 + Mar 7 03:50:45.127: INFO: requesting list of pod templates to confirm quantity + [AfterEach] [sig-node] PodTemplates + test/e2e/framework/framework.go:187 + Mar 7 03:50:45.129: INFO: Waiting up to 3m0s for all 
(but 0) nodes to be ready + STEP: Destroying namespace "podtemplate-7295" for this suite. 03/07/23 03:50:45.132 + << End Captured GinkgoWriter Output +------------------------------ +S +------------------------------ +[sig-api-machinery] Watchers + should be able to start watching from a specific resource version [Conformance] + test/e2e/apimachinery/watch.go:142 +[BeforeEach] [sig-api-machinery] Watchers + test/e2e/framework/framework.go:186 +STEP: Creating a kubernetes client 03/07/23 03:50:45.138 +Mar 7 03:50:45.138: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 +STEP: Building a namespace api object, basename watch 03/07/23 03:50:45.139 +STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 03:50:45.154 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 03:50:45.157 +[It] should be able to start watching from a specific resource version [Conformance] + test/e2e/apimachinery/watch.go:142 +STEP: creating a new configmap 03/07/23 03:50:45.159 +STEP: modifying the configmap once 03/07/23 03:50:45.162 +STEP: modifying the configmap a second time 03/07/23 03:50:45.168 +STEP: deleting the configmap 03/07/23 03:50:45.173 +STEP: creating a watch on configmaps from the resource version returned by the first update 03/07/23 03:50:45.177 +STEP: Expecting to observe notifications for all changes to the configmap after the first update 03/07/23 03:50:45.178 +Mar 7 03:50:45.178: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-resource-version watch-4575 8ec8190e-f953-4f45-8359-3a6cfec1f991 73303 0 2023-03-07 03:50:45 +0000 UTC map[watch-this-configmap:from-resource-version] map[] [] [] [{e2e.test Update v1 2023-03-07 03:50:45 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} +Mar 7 03:50:45.178: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-resource-version watch-4575 8ec8190e-f953-4f45-8359-3a6cfec1f991 73304 0 2023-03-07 03:50:45 +0000 UTC map[watch-this-configmap:from-resource-version] map[] [] [] [{e2e.test Update v1 2023-03-07 03:50:45 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} +[AfterEach] [sig-api-machinery] Watchers + test/e2e/framework/framework.go:187 +Mar 7 03:50:45.178: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "watch-4575" for this suite. 
03/07/23 03:50:45.181 +{"msg":"PASSED [sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance]","completed":298,"skipped":5469,"failed":0} +------------------------------ +• [0.048 seconds] +[sig-api-machinery] Watchers +test/e2e/apimachinery/framework.go:23 + should be able to start watching from a specific resource version [Conformance] + test/e2e/apimachinery/watch.go:142 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-api-machinery] Watchers + test/e2e/framework/framework.go:186 + STEP: Creating a kubernetes client 03/07/23 03:50:45.138 + Mar 7 03:50:45.138: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 + STEP: Building a namespace api object, basename watch 03/07/23 03:50:45.139 + STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 03:50:45.154 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 03:50:45.157 + [It] should be able to start watching from a specific resource version [Conformance] + test/e2e/apimachinery/watch.go:142 + STEP: creating a new configmap 03/07/23 03:50:45.159 + STEP: modifying the configmap once 03/07/23 03:50:45.162 + STEP: modifying the configmap a second time 03/07/23 03:50:45.168 + STEP: deleting the configmap 03/07/23 03:50:45.173 + STEP: creating a watch on configmaps from the resource version returned by the first update 03/07/23 03:50:45.177 + STEP: Expecting to observe notifications for all changes to the configmap after the first update 03/07/23 03:50:45.178 + Mar 7 03:50:45.178: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-resource-version watch-4575 8ec8190e-f953-4f45-8359-3a6cfec1f991 73303 0 2023-03-07 03:50:45 +0000 UTC map[watch-this-configmap:from-resource-version] map[] [] [] [{e2e.test Update v1 2023-03-07 03:50:45 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} + Mar 7 03:50:45.178: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-resource-version watch-4575 8ec8190e-f953-4f45-8359-3a6cfec1f991 73304 0 2023-03-07 03:50:45 +0000 UTC map[watch-this-configmap:from-resource-version] map[] [] [] [{e2e.test Update v1 2023-03-07 03:50:45 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} + [AfterEach] [sig-api-machinery] Watchers + test/e2e/framework/framework.go:187 + Mar 7 03:50:45.178: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + STEP: Destroying namespace "watch-4575" for this suite. 
03/07/23 03:50:45.181 + << End Captured GinkgoWriter Output +------------------------------ +SS +------------------------------ +[sig-node] Pods + should delete a collection of pods [Conformance] + test/e2e/common/node/pods.go:844 +[BeforeEach] [sig-node] Pods + test/e2e/framework/framework.go:186 +STEP: Creating a kubernetes client 03/07/23 03:50:45.186 +Mar 7 03:50:45.186: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 +STEP: Building a namespace api object, basename pods 03/07/23 03:50:45.187 +STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 03:50:45.201 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 03:50:45.203 +[BeforeEach] [sig-node] Pods + test/e2e/common/node/pods.go:193 +[It] should delete a collection of pods [Conformance] + test/e2e/common/node/pods.go:844 +STEP: Create set of pods 03/07/23 03:50:45.205 +Mar 7 03:50:45.211: INFO: created test-pod-1 +Mar 7 03:50:45.215: INFO: created test-pod-2 +Mar 7 03:50:45.222: INFO: created test-pod-3 +STEP: waiting for all 3 pods to be running 03/07/23 03:50:45.222 +Mar 7 03:50:45.222: INFO: Waiting up to 5m0s for all pods (need at least 3) in namespace 'pods-9644' to be running and ready +Mar 7 03:50:45.236: INFO: The status of Pod test-pod-1 is Pending (Ready = false), waiting for it to be either Running (with Ready = true) or Failed +Mar 7 03:50:45.236: INFO: The status of Pod test-pod-2 is Pending (Ready = false), waiting for it to be either Running (with Ready = true) or Failed +Mar 7 03:50:45.236: INFO: The status of Pod test-pod-3 is Pending (Ready = false), waiting for it to be either Running (with Ready = true) or Failed +Mar 7 03:50:45.236: INFO: 0 / 3 pods in namespace 'pods-9644' are running and ready (0 seconds elapsed) +Mar 7 03:50:45.236: INFO: expected 0 pod replicas in namespace 'pods-9644', 0 are Running and Ready. +Mar 7 03:50:45.236: INFO: POD NODE PHASE GRACE CONDITIONS +Mar 7 03:50:45.236: INFO: test-pod-1 node-2 Pending [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-03-07 03:50:45 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-03-07 03:50:45 +0000 UTC ContainersNotReady containers with unready status: [token-test]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-03-07 03:50:45 +0000 UTC ContainersNotReady containers with unready status: [token-test]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-03-07 03:50:45 +0000 UTC }] +Mar 7 03:50:45.237: INFO: test-pod-2 node-2 Pending [{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-03-07 03:50:45 +0000 UTC }] +Mar 7 03:50:45.237: INFO: test-pod-3 node-2 Pending [{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-03-07 03:50:45 +0000 UTC }] +Mar 7 03:50:45.237: INFO: +Mar 7 03:50:47.245: INFO: 3 / 3 pods in namespace 'pods-9644' are running and ready (2 seconds elapsed) +Mar 7 03:50:47.245: INFO: expected 0 pod replicas in namespace 'pods-9644', 0 are Running and Ready. +STEP: waiting for all pods to be deleted 03/07/23 03:50:47.264 +Mar 7 03:50:47.268: INFO: Pod quantity 3 is different from expected quantity 0 +Mar 7 03:50:48.276: INFO: Pod quantity 3 is different from expected quantity 0 +Mar 7 03:50:49.273: INFO: Pod quantity 3 is different from expected quantity 0 +[AfterEach] [sig-node] Pods + test/e2e/framework/framework.go:187 +Mar 7 03:50:50.271: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "pods-9644" for this suite. 
03/07/23 03:50:50.274 +{"msg":"PASSED [sig-node] Pods should delete a collection of pods [Conformance]","completed":299,"skipped":5471,"failed":0} +------------------------------ +• [SLOW TEST] [5.093 seconds] +[sig-node] Pods +test/e2e/common/node/framework.go:23 + should delete a collection of pods [Conformance] + test/e2e/common/node/pods.go:844 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-node] Pods + test/e2e/framework/framework.go:186 + STEP: Creating a kubernetes client 03/07/23 03:50:45.186 + Mar 7 03:50:45.186: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 + STEP: Building a namespace api object, basename pods 03/07/23 03:50:45.187 + STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 03:50:45.201 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 03:50:45.203 + [BeforeEach] [sig-node] Pods + test/e2e/common/node/pods.go:193 + [It] should delete a collection of pods [Conformance] + test/e2e/common/node/pods.go:844 + STEP: Create set of pods 03/07/23 03:50:45.205 + Mar 7 03:50:45.211: INFO: created test-pod-1 + Mar 7 03:50:45.215: INFO: created test-pod-2 + Mar 7 03:50:45.222: INFO: created test-pod-3 + STEP: waiting for all 3 pods to be running 03/07/23 03:50:45.222 + Mar 7 03:50:45.222: INFO: Waiting up to 5m0s for all pods (need at least 3) in namespace 'pods-9644' to be running and ready + Mar 7 03:50:45.236: INFO: The status of Pod test-pod-1 is Pending (Ready = false), waiting for it to be either Running (with Ready = true) or Failed + Mar 7 03:50:45.236: INFO: The status of Pod test-pod-2 is Pending (Ready = false), waiting for it to be either Running (with Ready = true) or Failed + Mar 7 03:50:45.236: INFO: The status of Pod test-pod-3 is Pending (Ready = false), waiting for it to be either Running (with Ready = true) or Failed + Mar 7 03:50:45.236: INFO: 0 / 3 pods in namespace 'pods-9644' are running and ready (0 seconds elapsed) + Mar 7 03:50:45.236: INFO: expected 0 pod replicas in namespace 'pods-9644', 0 are Running and Ready. + Mar 7 03:50:45.236: INFO: POD NODE PHASE GRACE CONDITIONS + Mar 7 03:50:45.236: INFO: test-pod-1 node-2 Pending [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-03-07 03:50:45 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-03-07 03:50:45 +0000 UTC ContainersNotReady containers with unready status: [token-test]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-03-07 03:50:45 +0000 UTC ContainersNotReady containers with unready status: [token-test]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-03-07 03:50:45 +0000 UTC }] + Mar 7 03:50:45.237: INFO: test-pod-2 node-2 Pending [{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-03-07 03:50:45 +0000 UTC }] + Mar 7 03:50:45.237: INFO: test-pod-3 node-2 Pending [{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-03-07 03:50:45 +0000 UTC }] + Mar 7 03:50:45.237: INFO: + Mar 7 03:50:47.245: INFO: 3 / 3 pods in namespace 'pods-9644' are running and ready (2 seconds elapsed) + Mar 7 03:50:47.245: INFO: expected 0 pod replicas in namespace 'pods-9644', 0 are Running and Ready. 
+ STEP: waiting for all pods to be deleted 03/07/23 03:50:47.264 + Mar 7 03:50:47.268: INFO: Pod quantity 3 is different from expected quantity 0 + Mar 7 03:50:48.276: INFO: Pod quantity 3 is different from expected quantity 0 + Mar 7 03:50:49.273: INFO: Pod quantity 3 is different from expected quantity 0 + [AfterEach] [sig-node] Pods + test/e2e/framework/framework.go:187 + Mar 7 03:50:50.271: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + STEP: Destroying namespace "pods-9644" for this suite. 03/07/23 03:50:50.274 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-api-machinery] ResourceQuota + should manage the lifecycle of a ResourceQuota [Conformance] + test/e2e/apimachinery/resource_quota.go:933 +[BeforeEach] [sig-api-machinery] ResourceQuota + test/e2e/framework/framework.go:186 +STEP: Creating a kubernetes client 03/07/23 03:50:50.281 +Mar 7 03:50:50.281: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 +STEP: Building a namespace api object, basename resourcequota 03/07/23 03:50:50.282 +STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 03:50:50.294 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 03:50:50.297 +[It] should manage the lifecycle of a ResourceQuota [Conformance] + test/e2e/apimachinery/resource_quota.go:933 +STEP: Creating a ResourceQuota 03/07/23 03:50:50.299 +STEP: Getting a ResourceQuota 03/07/23 03:50:50.304 +STEP: Listing all ResourceQuotas with LabelSelector 03/07/23 03:50:50.306 +STEP: Patching the ResourceQuota 03/07/23 03:50:50.308 +STEP: Deleting a Collection of ResourceQuotas 03/07/23 03:50:50.314 +STEP: Verifying the deleted ResourceQuota 03/07/23 03:50:50.322 +[AfterEach] [sig-api-machinery] ResourceQuota + test/e2e/framework/framework.go:187 +Mar 7 03:50:50.327: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "resourcequota-3602" for this suite. 
03/07/23 03:50:50.331 +{"msg":"PASSED [sig-api-machinery] ResourceQuota should manage the lifecycle of a ResourceQuota [Conformance]","completed":300,"skipped":5528,"failed":0} +------------------------------ +• [0.054 seconds] +[sig-api-machinery] ResourceQuota +test/e2e/apimachinery/framework.go:23 + should manage the lifecycle of a ResourceQuota [Conformance] + test/e2e/apimachinery/resource_quota.go:933 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-api-machinery] ResourceQuota + test/e2e/framework/framework.go:186 + STEP: Creating a kubernetes client 03/07/23 03:50:50.281 + Mar 7 03:50:50.281: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 + STEP: Building a namespace api object, basename resourcequota 03/07/23 03:50:50.282 + STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 03:50:50.294 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 03:50:50.297 + [It] should manage the lifecycle of a ResourceQuota [Conformance] + test/e2e/apimachinery/resource_quota.go:933 + STEP: Creating a ResourceQuota 03/07/23 03:50:50.299 + STEP: Getting a ResourceQuota 03/07/23 03:50:50.304 + STEP: Listing all ResourceQuotas with LabelSelector 03/07/23 03:50:50.306 + STEP: Patching the ResourceQuota 03/07/23 03:50:50.308 + STEP: Deleting a Collection of ResourceQuotas 03/07/23 03:50:50.314 + STEP: Verifying the deleted ResourceQuota 03/07/23 03:50:50.322 + [AfterEach] [sig-api-machinery] ResourceQuota + test/e2e/framework/framework.go:187 + Mar 7 03:50:50.327: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + STEP: Destroying namespace "resourcequota-3602" for this suite. 03/07/23 03:50:50.331 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-apps] Job + should create pods for an Indexed job with completion indexes and specified hostname [Conformance] + test/e2e/apps/job.go:194 +[BeforeEach] [sig-apps] Job + test/e2e/framework/framework.go:186 +STEP: Creating a kubernetes client 03/07/23 03:50:50.336 +Mar 7 03:50:50.336: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 +STEP: Building a namespace api object, basename job 03/07/23 03:50:50.337 +STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 03:50:50.355 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 03:50:50.357 +[It] should create pods for an Indexed job with completion indexes and specified hostname [Conformance] + test/e2e/apps/job.go:194 +STEP: Creating Indexed job 03/07/23 03:50:50.359 +STEP: Ensuring job reaches completions 03/07/23 03:50:50.365 +STEP: Ensuring pods with index for job exist 03/07/23 03:51:00.368 +[AfterEach] [sig-apps] Job + test/e2e/framework/framework.go:187 +Mar 7 03:51:00.371: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "job-6565" for this suite. 
03/07/23 03:51:00.374 +{"msg":"PASSED [sig-apps] Job should create pods for an Indexed job with completion indexes and specified hostname [Conformance]","completed":301,"skipped":5552,"failed":0} +------------------------------ +• [SLOW TEST] [10.044 seconds] +[sig-apps] Job +test/e2e/apps/framework.go:23 + should create pods for an Indexed job with completion indexes and specified hostname [Conformance] + test/e2e/apps/job.go:194 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-apps] Job + test/e2e/framework/framework.go:186 + STEP: Creating a kubernetes client 03/07/23 03:50:50.336 + Mar 7 03:50:50.336: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 + STEP: Building a namespace api object, basename job 03/07/23 03:50:50.337 + STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 03:50:50.355 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 03:50:50.357 + [It] should create pods for an Indexed job with completion indexes and specified hostname [Conformance] + test/e2e/apps/job.go:194 + STEP: Creating Indexed job 03/07/23 03:50:50.359 + STEP: Ensuring job reaches completions 03/07/23 03:50:50.365 + STEP: Ensuring pods with index for job exist 03/07/23 03:51:00.368 + [AfterEach] [sig-apps] Job + test/e2e/framework/framework.go:187 + Mar 7 03:51:00.371: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + STEP: Destroying namespace "job-6565" for this suite. 03/07/23 03:51:00.374 + << End Captured GinkgoWriter Output +------------------------------ +SSS +------------------------------ +[sig-node] Containers + should be able to override the image's default command and arguments [NodeConformance] [Conformance] + test/e2e/common/node/containers.go:86 +[BeforeEach] [sig-node] Containers + test/e2e/framework/framework.go:186 +STEP: Creating a kubernetes client 03/07/23 03:51:00.38 +Mar 7 03:51:00.381: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 +STEP: Building a namespace api object, basename containers 03/07/23 03:51:00.381 +STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 03:51:00.396 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 03:51:00.398 +[It] should be able to override the image's default command and arguments [NodeConformance] [Conformance] + test/e2e/common/node/containers.go:86 +STEP: Creating a pod to test override all 03/07/23 03:51:00.4 +Mar 7 03:51:00.405: INFO: Waiting up to 5m0s for pod "client-containers-88df4a8b-11b7-4615-aecc-7129616d0f93" in namespace "containers-1833" to be "Succeeded or Failed" +Mar 7 03:51:00.408: INFO: Pod "client-containers-88df4a8b-11b7-4615-aecc-7129616d0f93": Phase="Pending", Reason="", readiness=false. Elapsed: 2.7355ms +Mar 7 03:51:02.415: INFO: Pod "client-containers-88df4a8b-11b7-4615-aecc-7129616d0f93": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009390057s +Mar 7 03:51:04.416: INFO: Pod "client-containers-88df4a8b-11b7-4615-aecc-7129616d0f93": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.011095217s +STEP: Saw pod success 03/07/23 03:51:04.416 +Mar 7 03:51:04.416: INFO: Pod "client-containers-88df4a8b-11b7-4615-aecc-7129616d0f93" satisfied condition "Succeeded or Failed" +Mar 7 03:51:04.419: INFO: Trying to get logs from node node-2 pod client-containers-88df4a8b-11b7-4615-aecc-7129616d0f93 container agnhost-container: +STEP: delete the pod 03/07/23 03:51:04.424 +Mar 7 03:51:04.435: INFO: Waiting for pod client-containers-88df4a8b-11b7-4615-aecc-7129616d0f93 to disappear +Mar 7 03:51:04.438: INFO: Pod client-containers-88df4a8b-11b7-4615-aecc-7129616d0f93 no longer exists +[AfterEach] [sig-node] Containers + test/e2e/framework/framework.go:187 +Mar 7 03:51:04.438: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "containers-1833" for this suite. 03/07/23 03:51:04.441 +{"msg":"PASSED [sig-node] Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance]","completed":302,"skipped":5555,"failed":0} +------------------------------ +• [4.065 seconds] +[sig-node] Containers +test/e2e/common/node/framework.go:23 + should be able to override the image's default command and arguments [NodeConformance] [Conformance] + test/e2e/common/node/containers.go:86 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-node] Containers + test/e2e/framework/framework.go:186 + STEP: Creating a kubernetes client 03/07/23 03:51:00.38 + Mar 7 03:51:00.381: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 + STEP: Building a namespace api object, basename containers 03/07/23 03:51:00.381 + STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 03:51:00.396 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 03:51:00.398 + [It] should be able to override the image's default command and arguments [NodeConformance] [Conformance] + test/e2e/common/node/containers.go:86 + STEP: Creating a pod to test override all 03/07/23 03:51:00.4 + Mar 7 03:51:00.405: INFO: Waiting up to 5m0s for pod "client-containers-88df4a8b-11b7-4615-aecc-7129616d0f93" in namespace "containers-1833" to be "Succeeded or Failed" + Mar 7 03:51:00.408: INFO: Pod "client-containers-88df4a8b-11b7-4615-aecc-7129616d0f93": Phase="Pending", Reason="", readiness=false. Elapsed: 2.7355ms + Mar 7 03:51:02.415: INFO: Pod "client-containers-88df4a8b-11b7-4615-aecc-7129616d0f93": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009390057s + Mar 7 03:51:04.416: INFO: Pod "client-containers-88df4a8b-11b7-4615-aecc-7129616d0f93": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011095217s + STEP: Saw pod success 03/07/23 03:51:04.416 + Mar 7 03:51:04.416: INFO: Pod "client-containers-88df4a8b-11b7-4615-aecc-7129616d0f93" satisfied condition "Succeeded or Failed" + Mar 7 03:51:04.419: INFO: Trying to get logs from node node-2 pod client-containers-88df4a8b-11b7-4615-aecc-7129616d0f93 container agnhost-container: + STEP: delete the pod 03/07/23 03:51:04.424 + Mar 7 03:51:04.435: INFO: Waiting for pod client-containers-88df4a8b-11b7-4615-aecc-7129616d0f93 to disappear + Mar 7 03:51:04.438: INFO: Pod client-containers-88df4a8b-11b7-4615-aecc-7129616d0f93 no longer exists + [AfterEach] [sig-node] Containers + test/e2e/framework/framework.go:187 + Mar 7 03:51:04.438: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + STEP: Destroying namespace "containers-1833" for this suite. 
03/07/23 03:51:04.441 + << End Captured GinkgoWriter Output +------------------------------ +S +------------------------------ +[sig-network] Services + should serve multiport endpoints from pods [Conformance] + test/e2e/network/service.go:852 +[BeforeEach] [sig-network] Services + test/e2e/framework/framework.go:186 +STEP: Creating a kubernetes client 03/07/23 03:51:04.446 +Mar 7 03:51:04.446: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 +STEP: Building a namespace api object, basename services 03/07/23 03:51:04.447 +STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 03:51:04.46 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 03:51:04.462 +[BeforeEach] [sig-network] Services + test/e2e/network/service.go:758 +[It] should serve multiport endpoints from pods [Conformance] + test/e2e/network/service.go:852 +STEP: creating service multi-endpoint-test in namespace services-3378 03/07/23 03:51:04.464 +STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-3378 to expose endpoints map[] 03/07/23 03:51:04.474 +Mar 7 03:51:04.479: INFO: Failed go get Endpoints object: endpoints "multi-endpoint-test" not found +Mar 7 03:51:05.486: INFO: successfully validated that service multi-endpoint-test in namespace services-3378 exposes endpoints map[] +STEP: Creating pod pod1 in namespace services-3378 03/07/23 03:51:05.486 +Mar 7 03:51:05.492: INFO: Waiting up to 5m0s for pod "pod1" in namespace "services-3378" to be "running and ready" +Mar 7 03:51:05.495: INFO: Pod "pod1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.739923ms +Mar 7 03:51:05.495: INFO: The phase of Pod pod1 is Pending, waiting for it to be Running (with Ready = true) +Mar 7 03:51:07.499: INFO: Pod "pod1": Phase="Running", Reason="", readiness=true. Elapsed: 2.006696015s +Mar 7 03:51:07.499: INFO: The phase of Pod pod1 is Running (Ready = true) +Mar 7 03:51:07.499: INFO: Pod "pod1" satisfied condition "running and ready" +STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-3378 to expose endpoints map[pod1:[100]] 03/07/23 03:51:07.501 +Mar 7 03:51:07.508: INFO: successfully validated that service multi-endpoint-test in namespace services-3378 exposes endpoints map[pod1:[100]] +STEP: Creating pod pod2 in namespace services-3378 03/07/23 03:51:07.508 +Mar 7 03:51:07.512: INFO: Waiting up to 5m0s for pod "pod2" in namespace "services-3378" to be "running and ready" +Mar 7 03:51:07.514: INFO: Pod "pod2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.157469ms +Mar 7 03:51:07.514: INFO: The phase of Pod pod2 is Pending, waiting for it to be Running (with Ready = true) +Mar 7 03:51:09.517: INFO: Pod "pod2": Phase="Running", Reason="", readiness=true. 
Elapsed: 2.005243537s +Mar 7 03:51:09.517: INFO: The phase of Pod pod2 is Running (Ready = true) +Mar 7 03:51:09.517: INFO: Pod "pod2" satisfied condition "running and ready" +STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-3378 to expose endpoints map[pod1:[100] pod2:[101]] 03/07/23 03:51:09.519 +Mar 7 03:51:09.528: INFO: successfully validated that service multi-endpoint-test in namespace services-3378 exposes endpoints map[pod1:[100] pod2:[101]] +STEP: Checking if the Service forwards traffic to pods 03/07/23 03:51:09.528 +Mar 7 03:51:09.528: INFO: Creating new exec pod +Mar 7 03:51:09.532: INFO: Waiting up to 5m0s for pod "execpodqnf7q" in namespace "services-3378" to be "running" +Mar 7 03:51:09.534: INFO: Pod "execpodqnf7q": Phase="Pending", Reason="", readiness=false. Elapsed: 1.746936ms +Mar 7 03:51:11.537: INFO: Pod "execpodqnf7q": Phase="Running", Reason="", readiness=true. Elapsed: 2.005247039s +Mar 7 03:51:11.537: INFO: Pod "execpodqnf7q" satisfied condition "running" +Mar 7 03:51:12.538: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1093879902 --namespace=services-3378 exec execpodqnf7q -- /bin/sh -x -c echo hostName | nc -v -t -w 2 multi-endpoint-test 80' +Mar 7 03:51:12.745: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 multi-endpoint-test 80\nConnection to multi-endpoint-test 80 port [tcp/http] succeeded!\n" +Mar 7 03:51:12.745: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request" +Mar 7 03:51:12.745: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1093879902 --namespace=services-3378 exec execpodqnf7q -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.98.46.99 80' +Mar 7 03:51:12.922: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 10.98.46.99 80\nConnection to 10.98.46.99 80 port [tcp/http] succeeded!\n" +Mar 7 03:51:12.922: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request" +Mar 7 03:51:12.923: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1093879902 --namespace=services-3378 exec execpodqnf7q -- /bin/sh -x -c echo hostName | nc -v -t -w 2 multi-endpoint-test 81' +Mar 7 03:51:13.109: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 multi-endpoint-test 81\nConnection to multi-endpoint-test 81 port [tcp/*] succeeded!\n" +Mar 7 03:51:13.109: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request" +Mar 7 03:51:13.109: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1093879902 --namespace=services-3378 exec execpodqnf7q -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.98.46.99 81' +Mar 7 03:51:13.315: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 10.98.46.99 81\nConnection to 10.98.46.99 81 port [tcp/*] succeeded!\n" +Mar 7 03:51:13.315: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request" +STEP: Deleting pod pod1 in namespace services-3378 03/07/23 03:51:13.315 +STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-3378 to expose endpoints map[pod2:[101]] 03/07/23 03:51:13.358 +Mar 7 03:51:13.378: INFO: successfully validated that service multi-endpoint-test in namespace services-3378 exposes endpoints map[pod2:[101]] +STEP: Deleting pod pod2 in namespace services-3378 03/07/23 03:51:13.378 +STEP: waiting up to 3m0s for service 
multi-endpoint-test in namespace services-3378 to expose endpoints map[] 03/07/23 03:51:13.41 +Mar 7 03:51:13.430: INFO: successfully validated that service multi-endpoint-test in namespace services-3378 exposes endpoints map[] +[AfterEach] [sig-network] Services + test/e2e/framework/framework.go:187 +Mar 7 03:51:13.461: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "services-3378" for this suite. 03/07/23 03:51:13.465 +[AfterEach] [sig-network] Services + test/e2e/network/service.go:762 +{"msg":"PASSED [sig-network] Services should serve multiport endpoints from pods [Conformance]","completed":303,"skipped":5556,"failed":0} +------------------------------ +• [SLOW TEST] [9.031 seconds] +[sig-network] Services +test/e2e/network/common/framework.go:23 + should serve multiport endpoints from pods [Conformance] + test/e2e/network/service.go:852 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-network] Services + test/e2e/framework/framework.go:186 + STEP: Creating a kubernetes client 03/07/23 03:51:04.446 + Mar 7 03:51:04.446: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 + STEP: Building a namespace api object, basename services 03/07/23 03:51:04.447 + STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 03:51:04.46 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 03:51:04.462 + [BeforeEach] [sig-network] Services + test/e2e/network/service.go:758 + [It] should serve multiport endpoints from pods [Conformance] + test/e2e/network/service.go:852 + STEP: creating service multi-endpoint-test in namespace services-3378 03/07/23 03:51:04.464 + STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-3378 to expose endpoints map[] 03/07/23 03:51:04.474 + Mar 7 03:51:04.479: INFO: Failed go get Endpoints object: endpoints "multi-endpoint-test" not found + Mar 7 03:51:05.486: INFO: successfully validated that service multi-endpoint-test in namespace services-3378 exposes endpoints map[] + STEP: Creating pod pod1 in namespace services-3378 03/07/23 03:51:05.486 + Mar 7 03:51:05.492: INFO: Waiting up to 5m0s for pod "pod1" in namespace "services-3378" to be "running and ready" + Mar 7 03:51:05.495: INFO: Pod "pod1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.739923ms + Mar 7 03:51:05.495: INFO: The phase of Pod pod1 is Pending, waiting for it to be Running (with Ready = true) + Mar 7 03:51:07.499: INFO: Pod "pod1": Phase="Running", Reason="", readiness=true. Elapsed: 2.006696015s + Mar 7 03:51:07.499: INFO: The phase of Pod pod1 is Running (Ready = true) + Mar 7 03:51:07.499: INFO: Pod "pod1" satisfied condition "running and ready" + STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-3378 to expose endpoints map[pod1:[100]] 03/07/23 03:51:07.501 + Mar 7 03:51:07.508: INFO: successfully validated that service multi-endpoint-test in namespace services-3378 exposes endpoints map[pod1:[100]] + STEP: Creating pod pod2 in namespace services-3378 03/07/23 03:51:07.508 + Mar 7 03:51:07.512: INFO: Waiting up to 5m0s for pod "pod2" in namespace "services-3378" to be "running and ready" + Mar 7 03:51:07.514: INFO: Pod "pod2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.157469ms + Mar 7 03:51:07.514: INFO: The phase of Pod pod2 is Pending, waiting for it to be Running (with Ready = true) + Mar 7 03:51:09.517: INFO: Pod "pod2": Phase="Running", Reason="", readiness=true. 
Elapsed: 2.005243537s + Mar 7 03:51:09.517: INFO: The phase of Pod pod2 is Running (Ready = true) + Mar 7 03:51:09.517: INFO: Pod "pod2" satisfied condition "running and ready" + STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-3378 to expose endpoints map[pod1:[100] pod2:[101]] 03/07/23 03:51:09.519 + Mar 7 03:51:09.528: INFO: successfully validated that service multi-endpoint-test in namespace services-3378 exposes endpoints map[pod1:[100] pod2:[101]] + STEP: Checking if the Service forwards traffic to pods 03/07/23 03:51:09.528 + Mar 7 03:51:09.528: INFO: Creating new exec pod + Mar 7 03:51:09.532: INFO: Waiting up to 5m0s for pod "execpodqnf7q" in namespace "services-3378" to be "running" + Mar 7 03:51:09.534: INFO: Pod "execpodqnf7q": Phase="Pending", Reason="", readiness=false. Elapsed: 1.746936ms + Mar 7 03:51:11.537: INFO: Pod "execpodqnf7q": Phase="Running", Reason="", readiness=true. Elapsed: 2.005247039s + Mar 7 03:51:11.537: INFO: Pod "execpodqnf7q" satisfied condition "running" + Mar 7 03:51:12.538: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1093879902 --namespace=services-3378 exec execpodqnf7q -- /bin/sh -x -c echo hostName | nc -v -t -w 2 multi-endpoint-test 80' + Mar 7 03:51:12.745: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 multi-endpoint-test 80\nConnection to multi-endpoint-test 80 port [tcp/http] succeeded!\n" + Mar 7 03:51:12.745: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request" + Mar 7 03:51:12.745: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1093879902 --namespace=services-3378 exec execpodqnf7q -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.98.46.99 80' + Mar 7 03:51:12.922: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 10.98.46.99 80\nConnection to 10.98.46.99 80 port [tcp/http] succeeded!\n" + Mar 7 03:51:12.922: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request" + Mar 7 03:51:12.923: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1093879902 --namespace=services-3378 exec execpodqnf7q -- /bin/sh -x -c echo hostName | nc -v -t -w 2 multi-endpoint-test 81' + Mar 7 03:51:13.109: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 multi-endpoint-test 81\nConnection to multi-endpoint-test 81 port [tcp/*] succeeded!\n" + Mar 7 03:51:13.109: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request" + Mar 7 03:51:13.109: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1093879902 --namespace=services-3378 exec execpodqnf7q -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.98.46.99 81' + Mar 7 03:51:13.315: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 10.98.46.99 81\nConnection to 10.98.46.99 81 port [tcp/*] succeeded!\n" + Mar 7 03:51:13.315: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request" + STEP: Deleting pod pod1 in namespace services-3378 03/07/23 03:51:13.315 + STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-3378 to expose endpoints map[pod2:[101]] 03/07/23 03:51:13.358 + Mar 7 03:51:13.378: INFO: successfully validated that service multi-endpoint-test in namespace services-3378 exposes endpoints map[pod2:[101]] + STEP: Deleting pod pod2 in namespace services-3378 03/07/23 03:51:13.378 + STEP: waiting up to 3m0s 
for service multi-endpoint-test in namespace services-3378 to expose endpoints map[] 03/07/23 03:51:13.41 + Mar 7 03:51:13.430: INFO: successfully validated that service multi-endpoint-test in namespace services-3378 exposes endpoints map[] + [AfterEach] [sig-network] Services + test/e2e/framework/framework.go:187 + Mar 7 03:51:13.461: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + STEP: Destroying namespace "services-3378" for this suite. 03/07/23 03:51:13.465 + [AfterEach] [sig-network] Services + test/e2e/network/service.go:762 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-storage] Projected downwardAPI + should provide container's memory request [NodeConformance] [Conformance] + test/e2e/common/storage/projected_downwardapi.go:234 +[BeforeEach] [sig-storage] Projected downwardAPI + test/e2e/framework/framework.go:186 +STEP: Creating a kubernetes client 03/07/23 03:51:13.478 +Mar 7 03:51:13.479: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 +STEP: Building a namespace api object, basename projected 03/07/23 03:51:13.48 +STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 03:51:13.5 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 03:51:13.502 +[BeforeEach] [sig-storage] Projected downwardAPI + test/e2e/common/storage/projected_downwardapi.go:43 +[It] should provide container's memory request [NodeConformance] [Conformance] + test/e2e/common/storage/projected_downwardapi.go:234 +STEP: Creating a pod to test downward API volume plugin 03/07/23 03:51:13.504 +Mar 7 03:51:13.511: INFO: Waiting up to 5m0s for pod "downwardapi-volume-ba4a20f7-12a8-4161-a3fa-82cdbb88d7f3" in namespace "projected-1584" to be "Succeeded or Failed" +Mar 7 03:51:13.518: INFO: Pod "downwardapi-volume-ba4a20f7-12a8-4161-a3fa-82cdbb88d7f3": Phase="Pending", Reason="", readiness=false. Elapsed: 6.348209ms +Mar 7 03:51:15.522: INFO: Pod "downwardapi-volume-ba4a20f7-12a8-4161-a3fa-82cdbb88d7f3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.010757642s +Mar 7 03:51:17.520: INFO: Pod "downwardapi-volume-ba4a20f7-12a8-4161-a3fa-82cdbb88d7f3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.009050661s +STEP: Saw pod success 03/07/23 03:51:17.52 +Mar 7 03:51:17.520: INFO: Pod "downwardapi-volume-ba4a20f7-12a8-4161-a3fa-82cdbb88d7f3" satisfied condition "Succeeded or Failed" +Mar 7 03:51:17.523: INFO: Trying to get logs from node node-2 pod downwardapi-volume-ba4a20f7-12a8-4161-a3fa-82cdbb88d7f3 container client-container: +STEP: delete the pod 03/07/23 03:51:17.527 +Mar 7 03:51:17.535: INFO: Waiting for pod downwardapi-volume-ba4a20f7-12a8-4161-a3fa-82cdbb88d7f3 to disappear +Mar 7 03:51:17.538: INFO: Pod downwardapi-volume-ba4a20f7-12a8-4161-a3fa-82cdbb88d7f3 no longer exists +[AfterEach] [sig-storage] Projected downwardAPI + test/e2e/framework/framework.go:187 +Mar 7 03:51:17.538: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "projected-1584" for this suite. 
03/07/23 03:51:17.541 +{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory request [NodeConformance] [Conformance]","completed":304,"skipped":5589,"failed":0} +------------------------------ +• [4.066 seconds] +[sig-storage] Projected downwardAPI +test/e2e/common/storage/framework.go:23 + should provide container's memory request [NodeConformance] [Conformance] + test/e2e/common/storage/projected_downwardapi.go:234 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-storage] Projected downwardAPI + test/e2e/framework/framework.go:186 + STEP: Creating a kubernetes client 03/07/23 03:51:13.478 + Mar 7 03:51:13.479: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 + STEP: Building a namespace api object, basename projected 03/07/23 03:51:13.48 + STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 03:51:13.5 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 03:51:13.502 + [BeforeEach] [sig-storage] Projected downwardAPI + test/e2e/common/storage/projected_downwardapi.go:43 + [It] should provide container's memory request [NodeConformance] [Conformance] + test/e2e/common/storage/projected_downwardapi.go:234 + STEP: Creating a pod to test downward API volume plugin 03/07/23 03:51:13.504 + Mar 7 03:51:13.511: INFO: Waiting up to 5m0s for pod "downwardapi-volume-ba4a20f7-12a8-4161-a3fa-82cdbb88d7f3" in namespace "projected-1584" to be "Succeeded or Failed" + Mar 7 03:51:13.518: INFO: Pod "downwardapi-volume-ba4a20f7-12a8-4161-a3fa-82cdbb88d7f3": Phase="Pending", Reason="", readiness=false. Elapsed: 6.348209ms + Mar 7 03:51:15.522: INFO: Pod "downwardapi-volume-ba4a20f7-12a8-4161-a3fa-82cdbb88d7f3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.010757642s + Mar 7 03:51:17.520: INFO: Pod "downwardapi-volume-ba4a20f7-12a8-4161-a3fa-82cdbb88d7f3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.009050661s + STEP: Saw pod success 03/07/23 03:51:17.52 + Mar 7 03:51:17.520: INFO: Pod "downwardapi-volume-ba4a20f7-12a8-4161-a3fa-82cdbb88d7f3" satisfied condition "Succeeded or Failed" + Mar 7 03:51:17.523: INFO: Trying to get logs from node node-2 pod downwardapi-volume-ba4a20f7-12a8-4161-a3fa-82cdbb88d7f3 container client-container: + STEP: delete the pod 03/07/23 03:51:17.527 + Mar 7 03:51:17.535: INFO: Waiting for pod downwardapi-volume-ba4a20f7-12a8-4161-a3fa-82cdbb88d7f3 to disappear + Mar 7 03:51:17.538: INFO: Pod downwardapi-volume-ba4a20f7-12a8-4161-a3fa-82cdbb88d7f3 no longer exists + [AfterEach] [sig-storage] Projected downwardAPI + test/e2e/framework/framework.go:187 + Mar 7 03:51:17.538: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + STEP: Destroying namespace "projected-1584" for this suite. 
03/07/23 03:51:17.541 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-storage] Secrets + should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] + test/e2e/common/storage/secrets_volume.go:124 +[BeforeEach] [sig-storage] Secrets + test/e2e/framework/framework.go:186 +STEP: Creating a kubernetes client 03/07/23 03:51:17.547 +Mar 7 03:51:17.548: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 +STEP: Building a namespace api object, basename secrets 03/07/23 03:51:17.548 +STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 03:51:17.56 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 03:51:17.562 +[It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] + test/e2e/common/storage/secrets_volume.go:124 +STEP: Creating secret with name secret-test-d9075329-4ec5-4286-9b60-14f1534b4fb5 03/07/23 03:51:17.564 +STEP: Creating a pod to test consume secrets 03/07/23 03:51:17.566 +Mar 7 03:51:17.581: INFO: Waiting up to 5m0s for pod "pod-secrets-2b83faea-fcc3-4726-bdde-63ba5ea75e61" in namespace "secrets-6280" to be "Succeeded or Failed" +Mar 7 03:51:17.584: INFO: Pod "pod-secrets-2b83faea-fcc3-4726-bdde-63ba5ea75e61": Phase="Pending", Reason="", readiness=false. Elapsed: 3.230858ms +Mar 7 03:51:19.604: INFO: Pod "pod-secrets-2b83faea-fcc3-4726-bdde-63ba5ea75e61": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023456417s +Mar 7 03:51:21.588: INFO: Pod "pod-secrets-2b83faea-fcc3-4726-bdde-63ba5ea75e61": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.007225089s +STEP: Saw pod success 03/07/23 03:51:21.588 +Mar 7 03:51:21.588: INFO: Pod "pod-secrets-2b83faea-fcc3-4726-bdde-63ba5ea75e61" satisfied condition "Succeeded or Failed" +Mar 7 03:51:21.590: INFO: Trying to get logs from node node-2 pod pod-secrets-2b83faea-fcc3-4726-bdde-63ba5ea75e61 container secret-volume-test: +STEP: delete the pod 03/07/23 03:51:21.605 +Mar 7 03:51:21.613: INFO: Waiting for pod pod-secrets-2b83faea-fcc3-4726-bdde-63ba5ea75e61 to disappear +Mar 7 03:51:21.616: INFO: Pod pod-secrets-2b83faea-fcc3-4726-bdde-63ba5ea75e61 no longer exists +[AfterEach] [sig-storage] Secrets + test/e2e/framework/framework.go:187 +Mar 7 03:51:21.616: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "secrets-6280" for this suite. 
03/07/23 03:51:21.622 +{"msg":"PASSED [sig-storage] Secrets should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","completed":305,"skipped":5669,"failed":0} +------------------------------ +• [4.079 seconds] +[sig-storage] Secrets +test/e2e/common/storage/framework.go:23 + should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] + test/e2e/common/storage/secrets_volume.go:124 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-storage] Secrets + test/e2e/framework/framework.go:186 + STEP: Creating a kubernetes client 03/07/23 03:51:17.547 + Mar 7 03:51:17.548: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 + STEP: Building a namespace api object, basename secrets 03/07/23 03:51:17.548 + STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 03:51:17.56 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 03:51:17.562 + [It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] + test/e2e/common/storage/secrets_volume.go:124 + STEP: Creating secret with name secret-test-d9075329-4ec5-4286-9b60-14f1534b4fb5 03/07/23 03:51:17.564 + STEP: Creating a pod to test consume secrets 03/07/23 03:51:17.566 + Mar 7 03:51:17.581: INFO: Waiting up to 5m0s for pod "pod-secrets-2b83faea-fcc3-4726-bdde-63ba5ea75e61" in namespace "secrets-6280" to be "Succeeded or Failed" + Mar 7 03:51:17.584: INFO: Pod "pod-secrets-2b83faea-fcc3-4726-bdde-63ba5ea75e61": Phase="Pending", Reason="", readiness=false. Elapsed: 3.230858ms + Mar 7 03:51:19.604: INFO: Pod "pod-secrets-2b83faea-fcc3-4726-bdde-63ba5ea75e61": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023456417s + Mar 7 03:51:21.588: INFO: Pod "pod-secrets-2b83faea-fcc3-4726-bdde-63ba5ea75e61": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.007225089s + STEP: Saw pod success 03/07/23 03:51:21.588 + Mar 7 03:51:21.588: INFO: Pod "pod-secrets-2b83faea-fcc3-4726-bdde-63ba5ea75e61" satisfied condition "Succeeded or Failed" + Mar 7 03:51:21.590: INFO: Trying to get logs from node node-2 pod pod-secrets-2b83faea-fcc3-4726-bdde-63ba5ea75e61 container secret-volume-test: + STEP: delete the pod 03/07/23 03:51:21.605 + Mar 7 03:51:21.613: INFO: Waiting for pod pod-secrets-2b83faea-fcc3-4726-bdde-63ba5ea75e61 to disappear + Mar 7 03:51:21.616: INFO: Pod pod-secrets-2b83faea-fcc3-4726-bdde-63ba5ea75e61 no longer exists + [AfterEach] [sig-storage] Secrets + test/e2e/framework/framework.go:187 + Mar 7 03:51:21.616: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + STEP: Destroying namespace "secrets-6280" for this suite. 
03/07/23 03:51:21.622 + << End Captured GinkgoWriter Output +------------------------------ +SSS +------------------------------ +[sig-node] ConfigMap + should be consumable via the environment [NodeConformance] [Conformance] + test/e2e/common/node/configmap.go:92 +[BeforeEach] [sig-node] ConfigMap + test/e2e/framework/framework.go:186 +STEP: Creating a kubernetes client 03/07/23 03:51:21.627 +Mar 7 03:51:21.627: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 +STEP: Building a namespace api object, basename configmap 03/07/23 03:51:21.628 +STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 03:51:21.644 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 03:51:21.646 +[It] should be consumable via the environment [NodeConformance] [Conformance] + test/e2e/common/node/configmap.go:92 +STEP: Creating configMap configmap-9443/configmap-test-0fb689cd-ccea-4ce3-a29c-0baa669b4050 03/07/23 03:51:21.647 +STEP: Creating a pod to test consume configMaps 03/07/23 03:51:21.65 +Mar 7 03:51:21.656: INFO: Waiting up to 5m0s for pod "pod-configmaps-a1aff684-d916-4696-9b5f-1272edb5f916" in namespace "configmap-9443" to be "Succeeded or Failed" +Mar 7 03:51:21.659: INFO: Pod "pod-configmaps-a1aff684-d916-4696-9b5f-1272edb5f916": Phase="Pending", Reason="", readiness=false. Elapsed: 3.734751ms +Mar 7 03:51:23.664: INFO: Pod "pod-configmaps-a1aff684-d916-4696-9b5f-1272edb5f916": Phase="Running", Reason="", readiness=false. Elapsed: 2.007907903s +Mar 7 03:51:25.663: INFO: Pod "pod-configmaps-a1aff684-d916-4696-9b5f-1272edb5f916": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.007482835s +STEP: Saw pod success 03/07/23 03:51:25.663 +Mar 7 03:51:25.663: INFO: Pod "pod-configmaps-a1aff684-d916-4696-9b5f-1272edb5f916" satisfied condition "Succeeded or Failed" +Mar 7 03:51:25.666: INFO: Trying to get logs from node node-2 pod pod-configmaps-a1aff684-d916-4696-9b5f-1272edb5f916 container env-test: +STEP: delete the pod 03/07/23 03:51:25.671 +Mar 7 03:51:25.679: INFO: Waiting for pod pod-configmaps-a1aff684-d916-4696-9b5f-1272edb5f916 to disappear +Mar 7 03:51:25.681: INFO: Pod pod-configmaps-a1aff684-d916-4696-9b5f-1272edb5f916 no longer exists +[AfterEach] [sig-node] ConfigMap + test/e2e/framework/framework.go:187 +Mar 7 03:51:25.681: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "configmap-9443" for this suite. 
03/07/23 03:51:25.684 +{"msg":"PASSED [sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance]","completed":306,"skipped":5672,"failed":0} +------------------------------ +• [4.061 seconds] +[sig-node] ConfigMap +test/e2e/common/node/framework.go:23 + should be consumable via the environment [NodeConformance] [Conformance] + test/e2e/common/node/configmap.go:92 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-node] ConfigMap + test/e2e/framework/framework.go:186 + STEP: Creating a kubernetes client 03/07/23 03:51:21.627 + Mar 7 03:51:21.627: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 + STEP: Building a namespace api object, basename configmap 03/07/23 03:51:21.628 + STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 03:51:21.644 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 03:51:21.646 + [It] should be consumable via the environment [NodeConformance] [Conformance] + test/e2e/common/node/configmap.go:92 + STEP: Creating configMap configmap-9443/configmap-test-0fb689cd-ccea-4ce3-a29c-0baa669b4050 03/07/23 03:51:21.647 + STEP: Creating a pod to test consume configMaps 03/07/23 03:51:21.65 + Mar 7 03:51:21.656: INFO: Waiting up to 5m0s for pod "pod-configmaps-a1aff684-d916-4696-9b5f-1272edb5f916" in namespace "configmap-9443" to be "Succeeded or Failed" + Mar 7 03:51:21.659: INFO: Pod "pod-configmaps-a1aff684-d916-4696-9b5f-1272edb5f916": Phase="Pending", Reason="", readiness=false. Elapsed: 3.734751ms + Mar 7 03:51:23.664: INFO: Pod "pod-configmaps-a1aff684-d916-4696-9b5f-1272edb5f916": Phase="Running", Reason="", readiness=false. Elapsed: 2.007907903s + Mar 7 03:51:25.663: INFO: Pod "pod-configmaps-a1aff684-d916-4696-9b5f-1272edb5f916": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.007482835s + STEP: Saw pod success 03/07/23 03:51:25.663 + Mar 7 03:51:25.663: INFO: Pod "pod-configmaps-a1aff684-d916-4696-9b5f-1272edb5f916" satisfied condition "Succeeded or Failed" + Mar 7 03:51:25.666: INFO: Trying to get logs from node node-2 pod pod-configmaps-a1aff684-d916-4696-9b5f-1272edb5f916 container env-test: + STEP: delete the pod 03/07/23 03:51:25.671 + Mar 7 03:51:25.679: INFO: Waiting for pod pod-configmaps-a1aff684-d916-4696-9b5f-1272edb5f916 to disappear + Mar 7 03:51:25.681: INFO: Pod pod-configmaps-a1aff684-d916-4696-9b5f-1272edb5f916 no longer exists + [AfterEach] [sig-node] ConfigMap + test/e2e/framework/framework.go:187 + Mar 7 03:51:25.681: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + STEP: Destroying namespace "configmap-9443" for this suite. 
03/07/23 03:51:25.684 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-scheduling] SchedulerPredicates [Serial] + validates resource limits of pods that are allowed to run [Conformance] + test/e2e/scheduling/predicates.go:326 +[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] + test/e2e/framework/framework.go:186 +STEP: Creating a kubernetes client 03/07/23 03:51:25.693 +Mar 7 03:51:25.694: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 +STEP: Building a namespace api object, basename sched-pred 03/07/23 03:51:25.695 +STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 03:51:25.708 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 03:51:25.711 +[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] + test/e2e/scheduling/predicates.go:92 +Mar 7 03:51:25.714: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready +Mar 7 03:51:25.721: INFO: Waiting for terminating namespaces to be deleted... +Mar 7 03:51:25.725: INFO: +Logging pods the apiserver thinks is on node bootstrap before test +Mar 7 03:51:25.739: INFO: rs-rzqwv from disruption-9041 started at 2023-03-07 03:49:27 +0000 UTC (1 container statuses recorded) +Mar 7 03:51:25.739: INFO: Container donothing ready: false, restart count 0 +Mar 7 03:51:25.739: INFO: apiserver-proxy-bootstrap from kube-system started at 2023-03-07 00:42:52 +0000 UTC (1 container statuses recorded) +Mar 7 03:51:25.739: INFO: Container nginx ready: true, restart count 0 +Mar 7 03:51:25.739: INFO: backup-747d8c577b-wdcvl from kube-system started at 2023-03-07 00:44:32 +0000 UTC (1 container statuses recorded) +Mar 7 03:51:25.739: INFO: Container backup ready: true, restart count 0 +Mar 7 03:51:25.739: INFO: backup-replication-wkdpp-lt4dt from kube-system started at 2023-03-07 00:47:50 +0000 UTC (1 container statuses recorded) +Mar 7 03:51:25.739: INFO: Container backup-replication ready: false, restart count 0 +Mar 7 03:51:25.739: INFO: calico-kube-controllers-59685599d8-pvn74 from kube-system started at 2023-03-07 00:44:32 +0000 UTC (1 container statuses recorded) +Mar 7 03:51:25.739: INFO: Container calico-kube-controllers ready: true, restart count 0 +Mar 7 03:51:25.739: INFO: calico-node-mlncm from kube-system started at 2023-03-07 02:23:53 +0000 UTC (1 container statuses recorded) +Mar 7 03:51:25.739: INFO: Container calico-node ready: true, restart count 0 +Mar 7 03:51:25.739: INFO: coredns-5d7b997fcf-2j4jw from kube-system started at 2023-03-07 02:57:39 +0000 UTC (1 container statuses recorded) +Mar 7 03:51:25.739: INFO: Container coredns ready: true, restart count 0 +Mar 7 03:51:25.739: INFO: etcd-bootstrap from kube-system started at 2023-03-07 00:43:13 +0000 UTC (1 container statuses recorded) +Mar 7 03:51:25.739: INFO: Container etcd ready: true, restart count 0 +Mar 7 03:51:25.739: INFO: kube-apiserver-bootstrap from kube-system started at 2023-03-07 00:43:25 +0000 UTC (1 container statuses recorded) +Mar 7 03:51:25.739: INFO: Container kube-apiserver ready: true, restart count 0 +Mar 7 03:51:25.739: INFO: kube-controller-manager-bootstrap from kube-system started at 2023-03-07 00:43:33 +0000 UTC (1 container statuses recorded) +Mar 7 03:51:25.739: INFO: Container kube-controller-manager ready: true, restart count 4 +Mar 7 03:51:25.739: INFO: kube-proxy-nlf5t from kube-system started at 2023-03-07 02:23:30 +0000 UTC (1 container statuses recorded) +Mar 7 03:51:25.739: INFO: Container kube-proxy 
ready: true, restart count 0 +Mar 7 03:51:25.739: INFO: kube-scheduler-bootstrap from kube-system started at 2023-03-07 00:43:34 +0000 UTC (1 container statuses recorded) +Mar 7 03:51:25.739: INFO: Container kube-scheduler ready: true, restart count 3 +Mar 7 03:51:25.739: INFO: metalk8s-operator-controller-manager-7d4764b947-crj2f from kube-system started at 2023-03-07 00:44:32 +0000 UTC (1 container statuses recorded) +Mar 7 03:51:25.739: INFO: Container manager ready: true, restart count 5 +Mar 7 03:51:25.739: INFO: repositories-bootstrap from kube-system started at 2023-03-07 02:07:15 +0000 UTC (1 container statuses recorded) +Mar 7 03:51:25.739: INFO: Container repositories ready: true, restart count 1 +Mar 7 03:51:25.739: INFO: salt-master-bootstrap from kube-system started at 2023-03-07 00:42:29 +0000 UTC (2 container statuses recorded) +Mar 7 03:51:25.739: INFO: Container salt-api ready: true, restart count 0 +Mar 7 03:51:25.739: INFO: Container salt-master ready: true, restart count 0 +Mar 7 03:51:25.739: INFO: storage-operator-78f5dcc84f-jwnzl from kube-system started at 2023-03-07 00:45:28 +0000 UTC (1 container statuses recorded) +Mar 7 03:51:25.739: INFO: Container manager ready: true, restart count 4 +Mar 7 03:51:25.739: INFO: dex-57f9db7c4-hbrhr from metalk8s-auth started at 2023-03-07 02:13:23 +0000 UTC (1 container statuses recorded) +Mar 7 03:51:25.739: INFO: Container dex ready: true, restart count 0 +Mar 7 03:51:25.739: INFO: dex-57f9db7c4-z6gh6 from metalk8s-auth started at 2023-03-07 02:13:23 +0000 UTC (1 container statuses recorded) +Mar 7 03:51:25.739: INFO: Container dex ready: true, restart count 0 +Mar 7 03:51:25.739: INFO: ingress-control-plane-managed-vip-n2qb6 from metalk8s-ingress started at 2023-03-07 02:05:37 +0000 UTC (1 container statuses recorded) +Mar 7 03:51:25.739: INFO: Container keepalived ready: true, restart count 0 +Mar 7 03:51:25.739: INFO: ingress-nginx-control-plane-controller-j9hsf from metalk8s-ingress started at 2023-03-07 00:45:27 +0000 UTC (1 container statuses recorded) +Mar 7 03:51:25.739: INFO: Container controller ready: true, restart count 0 +Mar 7 03:51:25.739: INFO: ingress-nginx-controller-vjnvw from metalk8s-ingress started at 2023-03-07 02:10:07 +0000 UTC (1 container statuses recorded) +Mar 7 03:51:25.739: INFO: Container controller ready: true, restart count 0 +Mar 7 03:51:25.739: INFO: ingress-nginx-defaultbackend-75c64bd745-65gwj from metalk8s-ingress started at 2023-03-07 00:45:24 +0000 UTC (1 container statuses recorded) +Mar 7 03:51:25.739: INFO: Container ingress-nginx-default-backend ready: true, restart count 0 +Mar 7 03:51:25.739: INFO: fluent-bit-dzhms from metalk8s-logging started at 2023-03-07 00:45:38 +0000 UTC (1 container statuses recorded) +Mar 7 03:51:25.739: INFO: Container fluent-bit ready: true, restart count 0 +Mar 7 03:51:25.739: INFO: metalk8s-alert-logger-84f87c86d-hflm5 from metalk8s-monitoring started at 2023-03-07 00:45:09 +0000 UTC (1 container statuses recorded) +Mar 7 03:51:25.739: INFO: Container metalk8s-alert-logger ready: true, restart count 0 +Mar 7 03:51:25.739: INFO: prometheus-adapter-6696954b59-qrxtn from metalk8s-monitoring started at 2023-03-07 00:45:34 +0000 UTC (1 container statuses recorded) +Mar 7 03:51:25.739: INFO: Container prometheus-adapter ready: true, restart count 0 +Mar 7 03:51:25.739: INFO: prometheus-operator-kube-state-metrics-f7d5dc499-t4szw from metalk8s-monitoring started at 2023-03-07 00:45:19 +0000 UTC (1 container statuses recorded) +Mar 7 03:51:25.739: INFO: 
Container kube-state-metrics ready: true, restart count 0 +Mar 7 03:51:25.739: INFO: prometheus-operator-operator-864bc5b5d-8m6lq from metalk8s-monitoring started at 2023-03-07 00:45:19 +0000 UTC (1 container statuses recorded) +Mar 7 03:51:25.739: INFO: Container prometheus-operator ready: true, restart count 0 +Mar 7 03:51:25.739: INFO: prometheus-operator-prometheus-node-exporter-sl4bq from metalk8s-monitoring started at 2023-03-07 00:45:18 +0000 UTC (1 container statuses recorded) +Mar 7 03:51:25.739: INFO: Container node-exporter ready: true, restart count 0 +Mar 7 03:51:25.739: INFO: thanos-query-6b9dc579dd-ctlrl from metalk8s-monitoring started at 2023-03-07 00:45:22 +0000 UTC (1 container statuses recorded) +Mar 7 03:51:25.739: INFO: Container thanos-query ready: true, restart count 0 +Mar 7 03:51:25.739: INFO: metalk8s-ui-766c8b96cd-8cxcs from metalk8s-ui started at 2023-03-07 00:45:30 +0000 UTC (1 container statuses recorded) +Mar 7 03:51:25.739: INFO: Container metalk8s-ui ready: true, restart count 0 +Mar 7 03:51:25.739: INFO: metalk8s-ui-766c8b96cd-tsx5v from metalk8s-ui started at 2023-03-07 00:45:30 +0000 UTC (1 container statuses recorded) +Mar 7 03:51:25.739: INFO: Container metalk8s-ui ready: true, restart count 0 +Mar 7 03:51:25.739: INFO: sonobuoy-systemd-logs-daemon-set-ca72222986ab487c-t8mkz from sonobuoy started at 2023-03-07 02:25:01 +0000 UTC (2 container statuses recorded) +Mar 7 03:51:25.739: INFO: Container sonobuoy-worker ready: true, restart count 0 +Mar 7 03:51:25.739: INFO: Container systemd-logs ready: true, restart count 0 +Mar 7 03:51:25.739: INFO: +Logging pods the apiserver thinks is on node node-1 before test +Mar 7 03:51:25.751: INFO: apiserver-proxy-node-1 from kube-system started at 2023-03-07 00:58:52 +0000 UTC (1 container statuses recorded) +Mar 7 03:51:25.751: INFO: Container nginx ready: true, restart count 0 +Mar 7 03:51:25.751: INFO: calico-node-fvlp2 from kube-system started at 2023-03-07 02:23:42 +0000 UTC (1 container statuses recorded) +Mar 7 03:51:25.751: INFO: Container calico-node ready: true, restart count 0 +Mar 7 03:51:25.751: INFO: coredns-5d7b997fcf-z25jb from kube-system started at 2023-03-07 02:09:04 +0000 UTC (1 container statuses recorded) +Mar 7 03:51:25.751: INFO: Container coredns ready: true, restart count 0 +Mar 7 03:51:25.751: INFO: etcd-node-1 from kube-system started at 2023-03-07 00:59:16 +0000 UTC (1 container statuses recorded) +Mar 7 03:51:25.751: INFO: Container etcd ready: true, restart count 1 +Mar 7 03:51:25.751: INFO: kube-apiserver-node-1 from kube-system started at 2023-03-07 01:00:05 +0000 UTC (1 container statuses recorded) +Mar 7 03:51:25.751: INFO: Container kube-apiserver ready: true, restart count 0 +Mar 7 03:51:25.751: INFO: kube-controller-manager-node-1 from kube-system started at 2023-03-07 01:00:17 +0000 UTC (1 container statuses recorded) +Mar 7 03:51:25.751: INFO: Container kube-controller-manager ready: true, restart count 2 +Mar 7 03:51:25.751: INFO: kube-proxy-vpgsc from kube-system started at 2023-03-07 02:23:27 +0000 UTC (1 container statuses recorded) +Mar 7 03:51:25.751: INFO: Container kube-proxy ready: true, restart count 0 +Mar 7 03:51:25.751: INFO: kube-scheduler-node-1 from kube-system started at 2023-03-07 01:00:18 +0000 UTC (1 container statuses recorded) +Mar 7 03:51:25.751: INFO: Container kube-scheduler ready: true, restart count 0 +Mar 7 03:51:25.751: INFO: ingress-control-plane-managed-vip-w2cb9 from metalk8s-ingress started at 2023-03-07 02:05:37 +0000 UTC (1 container 
statuses recorded) +Mar 7 03:51:25.751: INFO: Container keepalived ready: true, restart count 0 +Mar 7 03:51:25.751: INFO: ingress-nginx-control-plane-controller-ck4wk from metalk8s-ingress started at 2023-03-07 00:59:58 +0000 UTC (1 container statuses recorded) +Mar 7 03:51:25.751: INFO: Container controller ready: true, restart count 0 +Mar 7 03:51:25.751: INFO: ingress-nginx-controller-9b2bj from metalk8s-ingress started at 2023-03-07 02:10:40 +0000 UTC (1 container statuses recorded) +Mar 7 03:51:25.751: INFO: Container controller ready: true, restart count 0 +Mar 7 03:51:25.751: INFO: fluent-bit-4nw7s from metalk8s-logging started at 2023-03-07 00:59:58 +0000 UTC (1 container statuses recorded) +Mar 7 03:51:25.751: INFO: Container fluent-bit ready: true, restart count 0 +Mar 7 03:51:25.751: INFO: loki-0 from metalk8s-logging started at 2023-03-07 01:11:45 +0000 UTC (1 container statuses recorded) +Mar 7 03:51:25.751: INFO: Container single-binary ready: true, restart count 0 +Mar 7 03:51:25.751: INFO: alertmanager-prometheus-operator-alertmanager-0 from metalk8s-monitoring started at 2023-03-07 01:11:00 +0000 UTC (2 container statuses recorded) +Mar 7 03:51:25.751: INFO: Container alertmanager ready: true, restart count 0 +Mar 7 03:51:25.751: INFO: Container config-reloader ready: true, restart count 0 +Mar 7 03:51:25.751: INFO: prometheus-operator-grafana-74d86d5965-nj6pq from metalk8s-monitoring started at 2023-03-07 02:57:39 +0000 UTC (3 container statuses recorded) +Mar 7 03:51:25.751: INFO: Container grafana ready: true, restart count 0 +Mar 7 03:51:25.751: INFO: Container grafana-sc-dashboard ready: true, restart count 0 +Mar 7 03:51:25.751: INFO: Container grafana-sc-datasources ready: true, restart count 0 +Mar 7 03:51:25.751: INFO: prometheus-operator-prometheus-node-exporter-4plkr from metalk8s-monitoring started at 2023-03-07 00:58:56 +0000 UTC (1 container statuses recorded) +Mar 7 03:51:25.751: INFO: Container node-exporter ready: true, restart count 0 +Mar 7 03:51:25.751: INFO: prometheus-prometheus-operator-prometheus-0 from metalk8s-monitoring started at 2023-03-07 01:11:10 +0000 UTC (3 container statuses recorded) +Mar 7 03:51:25.751: INFO: Container config-reloader ready: true, restart count 0 +Mar 7 03:51:25.751: INFO: Container prometheus ready: true, restart count 0 +Mar 7 03:51:25.751: INFO: Container thanos-sidecar ready: true, restart count 0 +Mar 7 03:51:25.751: INFO: sonobuoy-systemd-logs-daemon-set-ca72222986ab487c-gktsb from sonobuoy started at 2023-03-07 02:25:01 +0000 UTC (2 container statuses recorded) +Mar 7 03:51:25.751: INFO: Container sonobuoy-worker ready: true, restart count 0 +Mar 7 03:51:25.751: INFO: Container systemd-logs ready: true, restart count 0 +Mar 7 03:51:25.751: INFO: +Logging pods the apiserver thinks is on node node-2 before test +Mar 7 03:51:25.761: INFO: apiserver-proxy-node-2 from kube-system started at 2023-03-07 01:07:13 +0000 UTC (1 container statuses recorded) +Mar 7 03:51:25.761: INFO: Container nginx ready: true, restart count 0 +Mar 7 03:51:25.761: INFO: calico-node-r7qqp from kube-system started at 2023-03-07 02:23:32 +0000 UTC (1 container statuses recorded) +Mar 7 03:51:25.761: INFO: Container calico-node ready: true, restart count 0 +Mar 7 03:51:25.761: INFO: etcd-node-2 from kube-system started at 2023-03-07 01:08:10 +0000 UTC (1 container statuses recorded) +Mar 7 03:51:25.761: INFO: Container etcd ready: true, restart count 2 +Mar 7 03:51:25.761: INFO: kube-apiserver-node-2 from kube-system started at 2023-03-07 
01:09:12 +0000 UTC (1 container statuses recorded) +Mar 7 03:51:25.761: INFO: Container kube-apiserver ready: true, restart count 0 +Mar 7 03:51:25.761: INFO: kube-controller-manager-node-2 from kube-system started at 2023-03-07 01:09:23 +0000 UTC (1 container statuses recorded) +Mar 7 03:51:25.761: INFO: Container kube-controller-manager ready: true, restart count 1 +Mar 7 03:51:25.761: INFO: kube-proxy-wsc86 from kube-system started at 2023-03-07 02:23:33 +0000 UTC (1 container statuses recorded) +Mar 7 03:51:25.761: INFO: Container kube-proxy ready: true, restart count 0 +Mar 7 03:51:25.761: INFO: kube-scheduler-node-2 from kube-system started at 2023-03-07 01:09:23 +0000 UTC (1 container statuses recorded) +Mar 7 03:51:25.761: INFO: Container kube-scheduler ready: true, restart count 0 +Mar 7 03:51:25.761: INFO: ingress-control-plane-managed-vip-zqd2s from metalk8s-ingress started at 2023-03-07 03:21:39 +0000 UTC (1 container statuses recorded) +Mar 7 03:51:25.761: INFO: Container keepalived ready: true, restart count 0 +Mar 7 03:51:25.761: INFO: ingress-nginx-control-plane-controller-vqfvn from metalk8s-ingress started at 2023-03-07 03:21:47 +0000 UTC (1 container statuses recorded) +Mar 7 03:51:25.761: INFO: Container controller ready: true, restart count 0 +Mar 7 03:51:25.761: INFO: ingress-nginx-controller-bp2qx from metalk8s-ingress started at 2023-03-07 03:21:47 +0000 UTC (1 container statuses recorded) +Mar 7 03:51:25.761: INFO: Container controller ready: true, restart count 0 +Mar 7 03:51:25.761: INFO: fluent-bit-dsjnx from metalk8s-logging started at 2023-03-07 03:21:37 +0000 UTC (1 container statuses recorded) +Mar 7 03:51:25.761: INFO: Container fluent-bit ready: true, restart count 0 +Mar 7 03:51:25.761: INFO: prometheus-operator-prometheus-node-exporter-6k5g9 from metalk8s-monitoring started at 2023-03-07 03:21:36 +0000 UTC (1 container statuses recorded) +Mar 7 03:51:25.761: INFO: Container node-exporter ready: true, restart count 0 +Mar 7 03:51:25.761: INFO: sonobuoy from sonobuoy started at 2023-03-07 02:24:57 +0000 UTC (1 container statuses recorded) +Mar 7 03:51:25.761: INFO: Container kube-sonobuoy ready: true, restart count 0 +Mar 7 03:51:25.761: INFO: sonobuoy-e2e-job-441ced38a9a5443b from sonobuoy started at 2023-03-07 02:25:01 +0000 UTC (2 container statuses recorded) +Mar 7 03:51:25.761: INFO: Container e2e ready: true, restart count 0 +Mar 7 03:51:25.761: INFO: Container sonobuoy-worker ready: true, restart count 0 +Mar 7 03:51:25.761: INFO: sonobuoy-systemd-logs-daemon-set-ca72222986ab487c-hbmvq from sonobuoy started at 2023-03-07 02:25:01 +0000 UTC (2 container statuses recorded) +Mar 7 03:51:25.761: INFO: Container sonobuoy-worker ready: true, restart count 0 +Mar 7 03:51:25.761: INFO: Container systemd-logs ready: true, restart count 0 +[It] validates resource limits of pods that are allowed to run [Conformance] + test/e2e/scheduling/predicates.go:326 +STEP: verifying the node has the label node bootstrap 03/07/23 03:51:25.788 +STEP: verifying the node has the label node node-1 03/07/23 03:51:25.804 +STEP: verifying the node has the label node node-2 03/07/23 03:51:25.821 +Mar 7 03:51:25.853: INFO: Pod rs-rzqwv requesting resource cpu=0m on Node bootstrap +Mar 7 03:51:25.853: INFO: Pod apiserver-proxy-bootstrap requesting resource cpu=25m on Node bootstrap +Mar 7 03:51:25.853: INFO: Pod apiserver-proxy-node-1 requesting resource cpu=25m on Node node-1 +Mar 7 03:51:25.853: INFO: Pod apiserver-proxy-node-2 requesting resource cpu=25m on Node node-2 +Mar 7 
03:51:25.853: INFO: Pod backup-747d8c577b-wdcvl requesting resource cpu=0m on Node bootstrap +Mar 7 03:51:25.853: INFO: Pod calico-kube-controllers-59685599d8-pvn74 requesting resource cpu=0m on Node bootstrap +Mar 7 03:51:25.853: INFO: Pod calico-node-fvlp2 requesting resource cpu=250m on Node node-1 +Mar 7 03:51:25.853: INFO: Pod calico-node-mlncm requesting resource cpu=250m on Node bootstrap +Mar 7 03:51:25.853: INFO: Pod calico-node-r7qqp requesting resource cpu=250m on Node node-2 +Mar 7 03:51:25.853: INFO: Pod coredns-5d7b997fcf-2j4jw requesting resource cpu=100m on Node bootstrap +Mar 7 03:51:25.853: INFO: Pod coredns-5d7b997fcf-z25jb requesting resource cpu=100m on Node node-1 +Mar 7 03:51:25.853: INFO: Pod etcd-bootstrap requesting resource cpu=100m on Node bootstrap +Mar 7 03:51:25.853: INFO: Pod etcd-node-1 requesting resource cpu=100m on Node node-1 +Mar 7 03:51:25.853: INFO: Pod etcd-node-2 requesting resource cpu=100m on Node node-2 +Mar 7 03:51:25.853: INFO: Pod kube-apiserver-bootstrap requesting resource cpu=250m on Node bootstrap +Mar 7 03:51:25.853: INFO: Pod kube-apiserver-node-1 requesting resource cpu=250m on Node node-1 +Mar 7 03:51:25.853: INFO: Pod kube-apiserver-node-2 requesting resource cpu=250m on Node node-2 +Mar 7 03:51:25.853: INFO: Pod kube-controller-manager-bootstrap requesting resource cpu=200m on Node bootstrap +Mar 7 03:51:25.853: INFO: Pod kube-controller-manager-node-1 requesting resource cpu=200m on Node node-1 +Mar 7 03:51:25.853: INFO: Pod kube-controller-manager-node-2 requesting resource cpu=200m on Node node-2 +Mar 7 03:51:25.853: INFO: Pod kube-proxy-nlf5t requesting resource cpu=0m on Node bootstrap +Mar 7 03:51:25.853: INFO: Pod kube-proxy-vpgsc requesting resource cpu=0m on Node node-1 +Mar 7 03:51:25.853: INFO: Pod kube-proxy-wsc86 requesting resource cpu=0m on Node node-2 +Mar 7 03:51:25.853: INFO: Pod kube-scheduler-bootstrap requesting resource cpu=100m on Node bootstrap +Mar 7 03:51:25.853: INFO: Pod kube-scheduler-node-1 requesting resource cpu=100m on Node node-1 +Mar 7 03:51:25.853: INFO: Pod kube-scheduler-node-2 requesting resource cpu=100m on Node node-2 +Mar 7 03:51:25.853: INFO: Pod metalk8s-operator-controller-manager-7d4764b947-crj2f requesting resource cpu=10m on Node bootstrap +Mar 7 03:51:25.853: INFO: Pod repositories-bootstrap requesting resource cpu=0m on Node bootstrap +Mar 7 03:51:25.853: INFO: Pod salt-master-bootstrap requesting resource cpu=0m on Node bootstrap +Mar 7 03:51:25.853: INFO: Pod storage-operator-78f5dcc84f-jwnzl requesting resource cpu=100m on Node bootstrap +Mar 7 03:51:25.853: INFO: Pod dex-57f9db7c4-hbrhr requesting resource cpu=0m on Node bootstrap +Mar 7 03:51:25.853: INFO: Pod dex-57f9db7c4-z6gh6 requesting resource cpu=0m on Node bootstrap +Mar 7 03:51:25.853: INFO: Pod ingress-control-plane-managed-vip-n2qb6 requesting resource cpu=0m on Node bootstrap +Mar 7 03:51:25.853: INFO: Pod ingress-control-plane-managed-vip-w2cb9 requesting resource cpu=0m on Node node-1 +Mar 7 03:51:25.853: INFO: Pod ingress-control-plane-managed-vip-zqd2s requesting resource cpu=0m on Node node-2 +Mar 7 03:51:25.853: INFO: Pod ingress-nginx-control-plane-controller-ck4wk requesting resource cpu=100m on Node node-1 +Mar 7 03:51:25.853: INFO: Pod ingress-nginx-control-plane-controller-j9hsf requesting resource cpu=100m on Node bootstrap +Mar 7 03:51:25.853: INFO: Pod ingress-nginx-control-plane-controller-vqfvn requesting resource cpu=100m on Node node-2 +Mar 7 03:51:25.853: INFO: Pod ingress-nginx-controller-9b2bj 
requesting resource cpu=100m on Node node-1 +Mar 7 03:51:25.853: INFO: Pod ingress-nginx-controller-bp2qx requesting resource cpu=100m on Node node-2 +Mar 7 03:51:25.853: INFO: Pod ingress-nginx-controller-vjnvw requesting resource cpu=100m on Node bootstrap +Mar 7 03:51:25.853: INFO: Pod ingress-nginx-defaultbackend-75c64bd745-65gwj requesting resource cpu=0m on Node bootstrap +Mar 7 03:51:25.853: INFO: Pod fluent-bit-4nw7s requesting resource cpu=100m on Node node-1 +Mar 7 03:51:25.853: INFO: Pod fluent-bit-dsjnx requesting resource cpu=100m on Node node-2 +Mar 7 03:51:25.853: INFO: Pod fluent-bit-dzhms requesting resource cpu=100m on Node bootstrap +Mar 7 03:51:25.853: INFO: Pod loki-0 requesting resource cpu=0m on Node node-1 +Mar 7 03:51:25.853: INFO: Pod alertmanager-prometheus-operator-alertmanager-0 requesting resource cpu=200m on Node node-1 +Mar 7 03:51:25.853: INFO: Pod metalk8s-alert-logger-84f87c86d-hflm5 requesting resource cpu=0m on Node bootstrap +Mar 7 03:51:25.853: INFO: Pod prometheus-adapter-6696954b59-qrxtn requesting resource cpu=0m on Node bootstrap +Mar 7 03:51:25.853: INFO: Pod prometheus-operator-grafana-74d86d5965-nj6pq requesting resource cpu=0m on Node node-1 +Mar 7 03:51:25.853: INFO: Pod prometheus-operator-kube-state-metrics-f7d5dc499-t4szw requesting resource cpu=0m on Node bootstrap +Mar 7 03:51:25.853: INFO: Pod prometheus-operator-operator-864bc5b5d-8m6lq requesting resource cpu=0m on Node bootstrap +Mar 7 03:51:25.853: INFO: Pod prometheus-operator-prometheus-node-exporter-4plkr requesting resource cpu=0m on Node node-1 +Mar 7 03:51:25.853: INFO: Pod prometheus-operator-prometheus-node-exporter-6k5g9 requesting resource cpu=0m on Node node-2 +Mar 7 03:51:25.853: INFO: Pod prometheus-operator-prometheus-node-exporter-sl4bq requesting resource cpu=0m on Node bootstrap +Mar 7 03:51:25.853: INFO: Pod prometheus-prometheus-operator-prometheus-0 requesting resource cpu=200m on Node node-1 +Mar 7 03:51:25.853: INFO: Pod thanos-query-6b9dc579dd-ctlrl requesting resource cpu=0m on Node bootstrap +Mar 7 03:51:25.853: INFO: Pod metalk8s-ui-766c8b96cd-8cxcs requesting resource cpu=100m on Node bootstrap +Mar 7 03:51:25.853: INFO: Pod metalk8s-ui-766c8b96cd-tsx5v requesting resource cpu=100m on Node bootstrap +Mar 7 03:51:25.853: INFO: Pod sonobuoy requesting resource cpu=0m on Node node-2 +Mar 7 03:51:25.853: INFO: Pod sonobuoy-e2e-job-441ced38a9a5443b requesting resource cpu=0m on Node node-2 +Mar 7 03:51:25.853: INFO: Pod sonobuoy-systemd-logs-daemon-set-ca72222986ab487c-gktsb requesting resource cpu=0m on Node node-1 +Mar 7 03:51:25.853: INFO: Pod sonobuoy-systemd-logs-daemon-set-ca72222986ab487c-hbmvq requesting resource cpu=0m on Node node-2 +Mar 7 03:51:25.853: INFO: Pod sonobuoy-systemd-logs-daemon-set-ca72222986ab487c-t8mkz requesting resource cpu=0m on Node bootstrap +STEP: Starting Pods to consume most of the cluster CPU. 03/07/23 03:51:25.853 +Mar 7 03:51:25.853: INFO: Creating a pod which consumes cpu=1655m on Node bootstrap +Mar 7 03:51:25.867: INFO: Creating a pod which consumes cpu=1592m on Node node-1 +Mar 7 03:51:25.875: INFO: Creating a pod which consumes cpu=1942m on Node node-2 +Mar 7 03:51:25.880: INFO: Waiting up to 5m0s for pod "filler-pod-5ab73da9-faa2-46b3-bc27-8b7a900a000d" in namespace "sched-pred-9976" to be "running" +Mar 7 03:51:25.890: INFO: Pod "filler-pod-5ab73da9-faa2-46b3-bc27-8b7a900a000d": Phase="Pending", Reason="", readiness=false. 
Elapsed: 9.936159ms +Mar 7 03:51:27.897: INFO: Pod "filler-pod-5ab73da9-faa2-46b3-bc27-8b7a900a000d": Phase="Running", Reason="", readiness=true. Elapsed: 2.016892669s +Mar 7 03:51:27.897: INFO: Pod "filler-pod-5ab73da9-faa2-46b3-bc27-8b7a900a000d" satisfied condition "running" +Mar 7 03:51:27.897: INFO: Waiting up to 5m0s for pod "filler-pod-75e42987-fd49-47e3-affd-559476025bad" in namespace "sched-pred-9976" to be "running" +Mar 7 03:51:27.900: INFO: Pod "filler-pod-75e42987-fd49-47e3-affd-559476025bad": Phase="Running", Reason="", readiness=true. Elapsed: 3.051771ms +Mar 7 03:51:27.900: INFO: Pod "filler-pod-75e42987-fd49-47e3-affd-559476025bad" satisfied condition "running" +Mar 7 03:51:27.900: INFO: Waiting up to 5m0s for pod "filler-pod-9ebabaf2-137f-4544-ab27-02d14a735ea5" in namespace "sched-pred-9976" to be "running" +Mar 7 03:51:27.902: INFO: Pod "filler-pod-9ebabaf2-137f-4544-ab27-02d14a735ea5": Phase="Running", Reason="", readiness=true. Elapsed: 1.919787ms +Mar 7 03:51:27.902: INFO: Pod "filler-pod-9ebabaf2-137f-4544-ab27-02d14a735ea5" satisfied condition "running" +STEP: Creating another pod that requires unavailable amount of CPU. 03/07/23 03:51:27.902 +STEP: Considering event: +Type = [Normal], Name = [filler-pod-5ab73da9-faa2-46b3-bc27-8b7a900a000d.174a069dc769278e], Reason = [Scheduled], Message = [Successfully assigned sched-pred-9976/filler-pod-5ab73da9-faa2-46b3-bc27-8b7a900a000d to bootstrap] 03/07/23 03:51:27.905 +STEP: Considering event: +Type = [Normal], Name = [filler-pod-5ab73da9-faa2-46b3-bc27-8b7a900a000d.174a069dfb75e6bf], Reason = [Pulled], Message = [Container image "registry.k8s.io/pause:3.8" already present on machine] 03/07/23 03:51:27.905 +STEP: Considering event: +Type = [Normal], Name = [filler-pod-5ab73da9-faa2-46b3-bc27-8b7a900a000d.174a069dfd2fffef], Reason = [Created], Message = [Created container filler-pod-5ab73da9-faa2-46b3-bc27-8b7a900a000d] 03/07/23 03:51:27.906 +STEP: Considering event: +Type = [Normal], Name = [filler-pod-5ab73da9-faa2-46b3-bc27-8b7a900a000d.174a069e0927f426], Reason = [Started], Message = [Started container filler-pod-5ab73da9-faa2-46b3-bc27-8b7a900a000d] 03/07/23 03:51:27.906 +STEP: Considering event: +Type = [Normal], Name = [filler-pod-75e42987-fd49-47e3-affd-559476025bad.174a069dc8b4f36f], Reason = [Scheduled], Message = [Successfully assigned sched-pred-9976/filler-pod-75e42987-fd49-47e3-affd-559476025bad to node-1] 03/07/23 03:51:27.906 +STEP: Considering event: +Type = [Normal], Name = [filler-pod-75e42987-fd49-47e3-affd-559476025bad.174a069df10560b7], Reason = [Pulled], Message = [Container image "registry.k8s.io/pause:3.8" already present on machine] 03/07/23 03:51:27.906 +STEP: Considering event: +Type = [Normal], Name = [filler-pod-75e42987-fd49-47e3-affd-559476025bad.174a069df25ffaf4], Reason = [Created], Message = [Created container filler-pod-75e42987-fd49-47e3-affd-559476025bad] 03/07/23 03:51:27.906 +STEP: Considering event: +Type = [Normal], Name = [filler-pod-75e42987-fd49-47e3-affd-559476025bad.174a069dfa835e9b], Reason = [Started], Message = [Started container filler-pod-75e42987-fd49-47e3-affd-559476025bad] 03/07/23 03:51:27.906 +STEP: Considering event: +Type = [Normal], Name = [filler-pod-9ebabaf2-137f-4544-ab27-02d14a735ea5.174a069dc8b7c3da], Reason = [Scheduled], Message = [Successfully assigned sched-pred-9976/filler-pod-9ebabaf2-137f-4544-ab27-02d14a735ea5 to node-2] 03/07/23 03:51:27.906 +STEP: Considering event: +Type = [Normal], Name = 
[filler-pod-9ebabaf2-137f-4544-ab27-02d14a735ea5.174a069df518cbca], Reason = [Pulled], Message = [Container image "registry.k8s.io/pause:3.8" already present on machine] 03/07/23 03:51:27.906 +STEP: Considering event: +Type = [Normal], Name = [filler-pod-9ebabaf2-137f-4544-ab27-02d14a735ea5.174a069df6a79c27], Reason = [Created], Message = [Created container filler-pod-9ebabaf2-137f-4544-ab27-02d14a735ea5] 03/07/23 03:51:27.906 +STEP: Considering event: +Type = [Normal], Name = [filler-pod-9ebabaf2-137f-4544-ab27-02d14a735ea5.174a069dff48fbd3], Reason = [Started], Message = [Started container filler-pod-9ebabaf2-137f-4544-ab27-02d14a735ea5] 03/07/23 03:51:27.906 +STEP: Considering event: +Type = [Warning], Name = [additional-pod.174a069e40d9872c], Reason = [FailedScheduling], Message = [0/3 nodes are available: 3 Insufficient cpu. preemption: 0/3 nodes are available: 3 No preemption victims found for incoming pod.] 03/07/23 03:51:27.921 +STEP: removing the label node off the node bootstrap 03/07/23 03:51:28.915 +STEP: verifying the node doesn't have the label node 03/07/23 03:51:28.928 +STEP: removing the label node off the node node-1 03/07/23 03:51:28.932 +STEP: verifying the node doesn't have the label node 03/07/23 03:51:28.949 +STEP: removing the label node off the node node-2 03/07/23 03:51:28.954 +STEP: verifying the node doesn't have the label node 03/07/23 03:51:28.97 +[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] + test/e2e/framework/framework.go:187 +Mar 7 03:51:28.976: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "sched-pred-9976" for this suite. 03/07/23 03:51:28.981 +[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] + test/e2e/scheduling/predicates.go:83 +{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance]","completed":307,"skipped":5693,"failed":0} +------------------------------ +• [3.293 seconds] +[sig-scheduling] SchedulerPredicates [Serial] +test/e2e/scheduling/framework.go:40 + validates resource limits of pods that are allowed to run [Conformance] + test/e2e/scheduling/predicates.go:326 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] + test/e2e/framework/framework.go:186 + STEP: Creating a kubernetes client 03/07/23 03:51:25.693 + Mar 7 03:51:25.694: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 + STEP: Building a namespace api object, basename sched-pred 03/07/23 03:51:25.695 + STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 03:51:25.708 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 03:51:25.711 + [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] + test/e2e/scheduling/predicates.go:92 + Mar 7 03:51:25.714: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready + Mar 7 03:51:25.721: INFO: Waiting for terminating namespaces to be deleted... 
+ Mar 7 03:51:25.725: INFO: + Logging pods the apiserver thinks is on node bootstrap before test + Mar 7 03:51:25.739: INFO: rs-rzqwv from disruption-9041 started at 2023-03-07 03:49:27 +0000 UTC (1 container statuses recorded) + Mar 7 03:51:25.739: INFO: Container donothing ready: false, restart count 0 + Mar 7 03:51:25.739: INFO: apiserver-proxy-bootstrap from kube-system started at 2023-03-07 00:42:52 +0000 UTC (1 container statuses recorded) + Mar 7 03:51:25.739: INFO: Container nginx ready: true, restart count 0 + Mar 7 03:51:25.739: INFO: backup-747d8c577b-wdcvl from kube-system started at 2023-03-07 00:44:32 +0000 UTC (1 container statuses recorded) + Mar 7 03:51:25.739: INFO: Container backup ready: true, restart count 0 + Mar 7 03:51:25.739: INFO: backup-replication-wkdpp-lt4dt from kube-system started at 2023-03-07 00:47:50 +0000 UTC (1 container statuses recorded) + Mar 7 03:51:25.739: INFO: Container backup-replication ready: false, restart count 0 + Mar 7 03:51:25.739: INFO: calico-kube-controllers-59685599d8-pvn74 from kube-system started at 2023-03-07 00:44:32 +0000 UTC (1 container statuses recorded) + Mar 7 03:51:25.739: INFO: Container calico-kube-controllers ready: true, restart count 0 + Mar 7 03:51:25.739: INFO: calico-node-mlncm from kube-system started at 2023-03-07 02:23:53 +0000 UTC (1 container statuses recorded) + Mar 7 03:51:25.739: INFO: Container calico-node ready: true, restart count 0 + Mar 7 03:51:25.739: INFO: coredns-5d7b997fcf-2j4jw from kube-system started at 2023-03-07 02:57:39 +0000 UTC (1 container statuses recorded) + Mar 7 03:51:25.739: INFO: Container coredns ready: true, restart count 0 + Mar 7 03:51:25.739: INFO: etcd-bootstrap from kube-system started at 2023-03-07 00:43:13 +0000 UTC (1 container statuses recorded) + Mar 7 03:51:25.739: INFO: Container etcd ready: true, restart count 0 + Mar 7 03:51:25.739: INFO: kube-apiserver-bootstrap from kube-system started at 2023-03-07 00:43:25 +0000 UTC (1 container statuses recorded) + Mar 7 03:51:25.739: INFO: Container kube-apiserver ready: true, restart count 0 + Mar 7 03:51:25.739: INFO: kube-controller-manager-bootstrap from kube-system started at 2023-03-07 00:43:33 +0000 UTC (1 container statuses recorded) + Mar 7 03:51:25.739: INFO: Container kube-controller-manager ready: true, restart count 4 + Mar 7 03:51:25.739: INFO: kube-proxy-nlf5t from kube-system started at 2023-03-07 02:23:30 +0000 UTC (1 container statuses recorded) + Mar 7 03:51:25.739: INFO: Container kube-proxy ready: true, restart count 0 + Mar 7 03:51:25.739: INFO: kube-scheduler-bootstrap from kube-system started at 2023-03-07 00:43:34 +0000 UTC (1 container statuses recorded) + Mar 7 03:51:25.739: INFO: Container kube-scheduler ready: true, restart count 3 + Mar 7 03:51:25.739: INFO: metalk8s-operator-controller-manager-7d4764b947-crj2f from kube-system started at 2023-03-07 00:44:32 +0000 UTC (1 container statuses recorded) + Mar 7 03:51:25.739: INFO: Container manager ready: true, restart count 5 + Mar 7 03:51:25.739: INFO: repositories-bootstrap from kube-system started at 2023-03-07 02:07:15 +0000 UTC (1 container statuses recorded) + Mar 7 03:51:25.739: INFO: Container repositories ready: true, restart count 1 + Mar 7 03:51:25.739: INFO: salt-master-bootstrap from kube-system started at 2023-03-07 00:42:29 +0000 UTC (2 container statuses recorded) + Mar 7 03:51:25.739: INFO: Container salt-api ready: true, restart count 0 + Mar 7 03:51:25.739: INFO: Container salt-master ready: true, restart count 0 + Mar 7 03:51:25.739: 
INFO: storage-operator-78f5dcc84f-jwnzl from kube-system started at 2023-03-07 00:45:28 +0000 UTC (1 container statuses recorded) + Mar 7 03:51:25.739: INFO: Container manager ready: true, restart count 4 + Mar 7 03:51:25.739: INFO: dex-57f9db7c4-hbrhr from metalk8s-auth started at 2023-03-07 02:13:23 +0000 UTC (1 container statuses recorded) + Mar 7 03:51:25.739: INFO: Container dex ready: true, restart count 0 + Mar 7 03:51:25.739: INFO: dex-57f9db7c4-z6gh6 from metalk8s-auth started at 2023-03-07 02:13:23 +0000 UTC (1 container statuses recorded) + Mar 7 03:51:25.739: INFO: Container dex ready: true, restart count 0 + Mar 7 03:51:25.739: INFO: ingress-control-plane-managed-vip-n2qb6 from metalk8s-ingress started at 2023-03-07 02:05:37 +0000 UTC (1 container statuses recorded) + Mar 7 03:51:25.739: INFO: Container keepalived ready: true, restart count 0 + Mar 7 03:51:25.739: INFO: ingress-nginx-control-plane-controller-j9hsf from metalk8s-ingress started at 2023-03-07 00:45:27 +0000 UTC (1 container statuses recorded) + Mar 7 03:51:25.739: INFO: Container controller ready: true, restart count 0 + Mar 7 03:51:25.739: INFO: ingress-nginx-controller-vjnvw from metalk8s-ingress started at 2023-03-07 02:10:07 +0000 UTC (1 container statuses recorded) + Mar 7 03:51:25.739: INFO: Container controller ready: true, restart count 0 + Mar 7 03:51:25.739: INFO: ingress-nginx-defaultbackend-75c64bd745-65gwj from metalk8s-ingress started at 2023-03-07 00:45:24 +0000 UTC (1 container statuses recorded) + Mar 7 03:51:25.739: INFO: Container ingress-nginx-default-backend ready: true, restart count 0 + Mar 7 03:51:25.739: INFO: fluent-bit-dzhms from metalk8s-logging started at 2023-03-07 00:45:38 +0000 UTC (1 container statuses recorded) + Mar 7 03:51:25.739: INFO: Container fluent-bit ready: true, restart count 0 + Mar 7 03:51:25.739: INFO: metalk8s-alert-logger-84f87c86d-hflm5 from metalk8s-monitoring started at 2023-03-07 00:45:09 +0000 UTC (1 container statuses recorded) + Mar 7 03:51:25.739: INFO: Container metalk8s-alert-logger ready: true, restart count 0 + Mar 7 03:51:25.739: INFO: prometheus-adapter-6696954b59-qrxtn from metalk8s-monitoring started at 2023-03-07 00:45:34 +0000 UTC (1 container statuses recorded) + Mar 7 03:51:25.739: INFO: Container prometheus-adapter ready: true, restart count 0 + Mar 7 03:51:25.739: INFO: prometheus-operator-kube-state-metrics-f7d5dc499-t4szw from metalk8s-monitoring started at 2023-03-07 00:45:19 +0000 UTC (1 container statuses recorded) + Mar 7 03:51:25.739: INFO: Container kube-state-metrics ready: true, restart count 0 + Mar 7 03:51:25.739: INFO: prometheus-operator-operator-864bc5b5d-8m6lq from metalk8s-monitoring started at 2023-03-07 00:45:19 +0000 UTC (1 container statuses recorded) + Mar 7 03:51:25.739: INFO: Container prometheus-operator ready: true, restart count 0 + Mar 7 03:51:25.739: INFO: prometheus-operator-prometheus-node-exporter-sl4bq from metalk8s-monitoring started at 2023-03-07 00:45:18 +0000 UTC (1 container statuses recorded) + Mar 7 03:51:25.739: INFO: Container node-exporter ready: true, restart count 0 + Mar 7 03:51:25.739: INFO: thanos-query-6b9dc579dd-ctlrl from metalk8s-monitoring started at 2023-03-07 00:45:22 +0000 UTC (1 container statuses recorded) + Mar 7 03:51:25.739: INFO: Container thanos-query ready: true, restart count 0 + Mar 7 03:51:25.739: INFO: metalk8s-ui-766c8b96cd-8cxcs from metalk8s-ui started at 2023-03-07 00:45:30 +0000 UTC (1 container statuses recorded) + Mar 7 03:51:25.739: INFO: Container metalk8s-ui ready: 
true, restart count 0 + Mar 7 03:51:25.739: INFO: metalk8s-ui-766c8b96cd-tsx5v from metalk8s-ui started at 2023-03-07 00:45:30 +0000 UTC (1 container statuses recorded) + Mar 7 03:51:25.739: INFO: Container metalk8s-ui ready: true, restart count 0 + Mar 7 03:51:25.739: INFO: sonobuoy-systemd-logs-daemon-set-ca72222986ab487c-t8mkz from sonobuoy started at 2023-03-07 02:25:01 +0000 UTC (2 container statuses recorded) + Mar 7 03:51:25.739: INFO: Container sonobuoy-worker ready: true, restart count 0 + Mar 7 03:51:25.739: INFO: Container systemd-logs ready: true, restart count 0 + Mar 7 03:51:25.739: INFO: + Logging pods the apiserver thinks is on node node-1 before test + Mar 7 03:51:25.751: INFO: apiserver-proxy-node-1 from kube-system started at 2023-03-07 00:58:52 +0000 UTC (1 container statuses recorded) + Mar 7 03:51:25.751: INFO: Container nginx ready: true, restart count 0 + Mar 7 03:51:25.751: INFO: calico-node-fvlp2 from kube-system started at 2023-03-07 02:23:42 +0000 UTC (1 container statuses recorded) + Mar 7 03:51:25.751: INFO: Container calico-node ready: true, restart count 0 + Mar 7 03:51:25.751: INFO: coredns-5d7b997fcf-z25jb from kube-system started at 2023-03-07 02:09:04 +0000 UTC (1 container statuses recorded) + Mar 7 03:51:25.751: INFO: Container coredns ready: true, restart count 0 + Mar 7 03:51:25.751: INFO: etcd-node-1 from kube-system started at 2023-03-07 00:59:16 +0000 UTC (1 container statuses recorded) + Mar 7 03:51:25.751: INFO: Container etcd ready: true, restart count 1 + Mar 7 03:51:25.751: INFO: kube-apiserver-node-1 from kube-system started at 2023-03-07 01:00:05 +0000 UTC (1 container statuses recorded) + Mar 7 03:51:25.751: INFO: Container kube-apiserver ready: true, restart count 0 + Mar 7 03:51:25.751: INFO: kube-controller-manager-node-1 from kube-system started at 2023-03-07 01:00:17 +0000 UTC (1 container statuses recorded) + Mar 7 03:51:25.751: INFO: Container kube-controller-manager ready: true, restart count 2 + Mar 7 03:51:25.751: INFO: kube-proxy-vpgsc from kube-system started at 2023-03-07 02:23:27 +0000 UTC (1 container statuses recorded) + Mar 7 03:51:25.751: INFO: Container kube-proxy ready: true, restart count 0 + Mar 7 03:51:25.751: INFO: kube-scheduler-node-1 from kube-system started at 2023-03-07 01:00:18 +0000 UTC (1 container statuses recorded) + Mar 7 03:51:25.751: INFO: Container kube-scheduler ready: true, restart count 0 + Mar 7 03:51:25.751: INFO: ingress-control-plane-managed-vip-w2cb9 from metalk8s-ingress started at 2023-03-07 02:05:37 +0000 UTC (1 container statuses recorded) + Mar 7 03:51:25.751: INFO: Container keepalived ready: true, restart count 0 + Mar 7 03:51:25.751: INFO: ingress-nginx-control-plane-controller-ck4wk from metalk8s-ingress started at 2023-03-07 00:59:58 +0000 UTC (1 container statuses recorded) + Mar 7 03:51:25.751: INFO: Container controller ready: true, restart count 0 + Mar 7 03:51:25.751: INFO: ingress-nginx-controller-9b2bj from metalk8s-ingress started at 2023-03-07 02:10:40 +0000 UTC (1 container statuses recorded) + Mar 7 03:51:25.751: INFO: Container controller ready: true, restart count 0 + Mar 7 03:51:25.751: INFO: fluent-bit-4nw7s from metalk8s-logging started at 2023-03-07 00:59:58 +0000 UTC (1 container statuses recorded) + Mar 7 03:51:25.751: INFO: Container fluent-bit ready: true, restart count 0 + Mar 7 03:51:25.751: INFO: loki-0 from metalk8s-logging started at 2023-03-07 01:11:45 +0000 UTC (1 container statuses recorded) + Mar 7 03:51:25.751: INFO: Container single-binary ready: true, 
restart count 0 + Mar 7 03:51:25.751: INFO: alertmanager-prometheus-operator-alertmanager-0 from metalk8s-monitoring started at 2023-03-07 01:11:00 +0000 UTC (2 container statuses recorded) + Mar 7 03:51:25.751: INFO: Container alertmanager ready: true, restart count 0 + Mar 7 03:51:25.751: INFO: Container config-reloader ready: true, restart count 0 + Mar 7 03:51:25.751: INFO: prometheus-operator-grafana-74d86d5965-nj6pq from metalk8s-monitoring started at 2023-03-07 02:57:39 +0000 UTC (3 container statuses recorded) + Mar 7 03:51:25.751: INFO: Container grafana ready: true, restart count 0 + Mar 7 03:51:25.751: INFO: Container grafana-sc-dashboard ready: true, restart count 0 + Mar 7 03:51:25.751: INFO: Container grafana-sc-datasources ready: true, restart count 0 + Mar 7 03:51:25.751: INFO: prometheus-operator-prometheus-node-exporter-4plkr from metalk8s-monitoring started at 2023-03-07 00:58:56 +0000 UTC (1 container statuses recorded) + Mar 7 03:51:25.751: INFO: Container node-exporter ready: true, restart count 0 + Mar 7 03:51:25.751: INFO: prometheus-prometheus-operator-prometheus-0 from metalk8s-monitoring started at 2023-03-07 01:11:10 +0000 UTC (3 container statuses recorded) + Mar 7 03:51:25.751: INFO: Container config-reloader ready: true, restart count 0 + Mar 7 03:51:25.751: INFO: Container prometheus ready: true, restart count 0 + Mar 7 03:51:25.751: INFO: Container thanos-sidecar ready: true, restart count 0 + Mar 7 03:51:25.751: INFO: sonobuoy-systemd-logs-daemon-set-ca72222986ab487c-gktsb from sonobuoy started at 2023-03-07 02:25:01 +0000 UTC (2 container statuses recorded) + Mar 7 03:51:25.751: INFO: Container sonobuoy-worker ready: true, restart count 0 + Mar 7 03:51:25.751: INFO: Container systemd-logs ready: true, restart count 0 + Mar 7 03:51:25.751: INFO: + Logging pods the apiserver thinks is on node node-2 before test + Mar 7 03:51:25.761: INFO: apiserver-proxy-node-2 from kube-system started at 2023-03-07 01:07:13 +0000 UTC (1 container statuses recorded) + Mar 7 03:51:25.761: INFO: Container nginx ready: true, restart count 0 + Mar 7 03:51:25.761: INFO: calico-node-r7qqp from kube-system started at 2023-03-07 02:23:32 +0000 UTC (1 container statuses recorded) + Mar 7 03:51:25.761: INFO: Container calico-node ready: true, restart count 0 + Mar 7 03:51:25.761: INFO: etcd-node-2 from kube-system started at 2023-03-07 01:08:10 +0000 UTC (1 container statuses recorded) + Mar 7 03:51:25.761: INFO: Container etcd ready: true, restart count 2 + Mar 7 03:51:25.761: INFO: kube-apiserver-node-2 from kube-system started at 2023-03-07 01:09:12 +0000 UTC (1 container statuses recorded) + Mar 7 03:51:25.761: INFO: Container kube-apiserver ready: true, restart count 0 + Mar 7 03:51:25.761: INFO: kube-controller-manager-node-2 from kube-system started at 2023-03-07 01:09:23 +0000 UTC (1 container statuses recorded) + Mar 7 03:51:25.761: INFO: Container kube-controller-manager ready: true, restart count 1 + Mar 7 03:51:25.761: INFO: kube-proxy-wsc86 from kube-system started at 2023-03-07 02:23:33 +0000 UTC (1 container statuses recorded) + Mar 7 03:51:25.761: INFO: Container kube-proxy ready: true, restart count 0 + Mar 7 03:51:25.761: INFO: kube-scheduler-node-2 from kube-system started at 2023-03-07 01:09:23 +0000 UTC (1 container statuses recorded) + Mar 7 03:51:25.761: INFO: Container kube-scheduler ready: true, restart count 0 + Mar 7 03:51:25.761: INFO: ingress-control-plane-managed-vip-zqd2s from metalk8s-ingress started at 2023-03-07 03:21:39 +0000 UTC (1 container 
statuses recorded) + Mar 7 03:51:25.761: INFO: Container keepalived ready: true, restart count 0 + Mar 7 03:51:25.761: INFO: ingress-nginx-control-plane-controller-vqfvn from metalk8s-ingress started at 2023-03-07 03:21:47 +0000 UTC (1 container statuses recorded) + Mar 7 03:51:25.761: INFO: Container controller ready: true, restart count 0 + Mar 7 03:51:25.761: INFO: ingress-nginx-controller-bp2qx from metalk8s-ingress started at 2023-03-07 03:21:47 +0000 UTC (1 container statuses recorded) + Mar 7 03:51:25.761: INFO: Container controller ready: true, restart count 0 + Mar 7 03:51:25.761: INFO: fluent-bit-dsjnx from metalk8s-logging started at 2023-03-07 03:21:37 +0000 UTC (1 container statuses recorded) + Mar 7 03:51:25.761: INFO: Container fluent-bit ready: true, restart count 0 + Mar 7 03:51:25.761: INFO: prometheus-operator-prometheus-node-exporter-6k5g9 from metalk8s-monitoring started at 2023-03-07 03:21:36 +0000 UTC (1 container statuses recorded) + Mar 7 03:51:25.761: INFO: Container node-exporter ready: true, restart count 0 + Mar 7 03:51:25.761: INFO: sonobuoy from sonobuoy started at 2023-03-07 02:24:57 +0000 UTC (1 container statuses recorded) + Mar 7 03:51:25.761: INFO: Container kube-sonobuoy ready: true, restart count 0 + Mar 7 03:51:25.761: INFO: sonobuoy-e2e-job-441ced38a9a5443b from sonobuoy started at 2023-03-07 02:25:01 +0000 UTC (2 container statuses recorded) + Mar 7 03:51:25.761: INFO: Container e2e ready: true, restart count 0 + Mar 7 03:51:25.761: INFO: Container sonobuoy-worker ready: true, restart count 0 + Mar 7 03:51:25.761: INFO: sonobuoy-systemd-logs-daemon-set-ca72222986ab487c-hbmvq from sonobuoy started at 2023-03-07 02:25:01 +0000 UTC (2 container statuses recorded) + Mar 7 03:51:25.761: INFO: Container sonobuoy-worker ready: true, restart count 0 + Mar 7 03:51:25.761: INFO: Container systemd-logs ready: true, restart count 0 + [It] validates resource limits of pods that are allowed to run [Conformance] + test/e2e/scheduling/predicates.go:326 + STEP: verifying the node has the label node bootstrap 03/07/23 03:51:25.788 + STEP: verifying the node has the label node node-1 03/07/23 03:51:25.804 + STEP: verifying the node has the label node node-2 03/07/23 03:51:25.821 + Mar 7 03:51:25.853: INFO: Pod rs-rzqwv requesting resource cpu=0m on Node bootstrap + Mar 7 03:51:25.853: INFO: Pod apiserver-proxy-bootstrap requesting resource cpu=25m on Node bootstrap + Mar 7 03:51:25.853: INFO: Pod apiserver-proxy-node-1 requesting resource cpu=25m on Node node-1 + Mar 7 03:51:25.853: INFO: Pod apiserver-proxy-node-2 requesting resource cpu=25m on Node node-2 + Mar 7 03:51:25.853: INFO: Pod backup-747d8c577b-wdcvl requesting resource cpu=0m on Node bootstrap + Mar 7 03:51:25.853: INFO: Pod calico-kube-controllers-59685599d8-pvn74 requesting resource cpu=0m on Node bootstrap + Mar 7 03:51:25.853: INFO: Pod calico-node-fvlp2 requesting resource cpu=250m on Node node-1 + Mar 7 03:51:25.853: INFO: Pod calico-node-mlncm requesting resource cpu=250m on Node bootstrap + Mar 7 03:51:25.853: INFO: Pod calico-node-r7qqp requesting resource cpu=250m on Node node-2 + Mar 7 03:51:25.853: INFO: Pod coredns-5d7b997fcf-2j4jw requesting resource cpu=100m on Node bootstrap + Mar 7 03:51:25.853: INFO: Pod coredns-5d7b997fcf-z25jb requesting resource cpu=100m on Node node-1 + Mar 7 03:51:25.853: INFO: Pod etcd-bootstrap requesting resource cpu=100m on Node bootstrap + Mar 7 03:51:25.853: INFO: Pod etcd-node-1 requesting resource cpu=100m on Node node-1 + Mar 7 03:51:25.853: INFO: Pod 
etcd-node-2 requesting resource cpu=100m on Node node-2 + Mar 7 03:51:25.853: INFO: Pod kube-apiserver-bootstrap requesting resource cpu=250m on Node bootstrap + Mar 7 03:51:25.853: INFO: Pod kube-apiserver-node-1 requesting resource cpu=250m on Node node-1 + Mar 7 03:51:25.853: INFO: Pod kube-apiserver-node-2 requesting resource cpu=250m on Node node-2 + Mar 7 03:51:25.853: INFO: Pod kube-controller-manager-bootstrap requesting resource cpu=200m on Node bootstrap + Mar 7 03:51:25.853: INFO: Pod kube-controller-manager-node-1 requesting resource cpu=200m on Node node-1 + Mar 7 03:51:25.853: INFO: Pod kube-controller-manager-node-2 requesting resource cpu=200m on Node node-2 + Mar 7 03:51:25.853: INFO: Pod kube-proxy-nlf5t requesting resource cpu=0m on Node bootstrap + Mar 7 03:51:25.853: INFO: Pod kube-proxy-vpgsc requesting resource cpu=0m on Node node-1 + Mar 7 03:51:25.853: INFO: Pod kube-proxy-wsc86 requesting resource cpu=0m on Node node-2 + Mar 7 03:51:25.853: INFO: Pod kube-scheduler-bootstrap requesting resource cpu=100m on Node bootstrap + Mar 7 03:51:25.853: INFO: Pod kube-scheduler-node-1 requesting resource cpu=100m on Node node-1 + Mar 7 03:51:25.853: INFO: Pod kube-scheduler-node-2 requesting resource cpu=100m on Node node-2 + Mar 7 03:51:25.853: INFO: Pod metalk8s-operator-controller-manager-7d4764b947-crj2f requesting resource cpu=10m on Node bootstrap + Mar 7 03:51:25.853: INFO: Pod repositories-bootstrap requesting resource cpu=0m on Node bootstrap + Mar 7 03:51:25.853: INFO: Pod salt-master-bootstrap requesting resource cpu=0m on Node bootstrap + Mar 7 03:51:25.853: INFO: Pod storage-operator-78f5dcc84f-jwnzl requesting resource cpu=100m on Node bootstrap + Mar 7 03:51:25.853: INFO: Pod dex-57f9db7c4-hbrhr requesting resource cpu=0m on Node bootstrap + Mar 7 03:51:25.853: INFO: Pod dex-57f9db7c4-z6gh6 requesting resource cpu=0m on Node bootstrap + Mar 7 03:51:25.853: INFO: Pod ingress-control-plane-managed-vip-n2qb6 requesting resource cpu=0m on Node bootstrap + Mar 7 03:51:25.853: INFO: Pod ingress-control-plane-managed-vip-w2cb9 requesting resource cpu=0m on Node node-1 + Mar 7 03:51:25.853: INFO: Pod ingress-control-plane-managed-vip-zqd2s requesting resource cpu=0m on Node node-2 + Mar 7 03:51:25.853: INFO: Pod ingress-nginx-control-plane-controller-ck4wk requesting resource cpu=100m on Node node-1 + Mar 7 03:51:25.853: INFO: Pod ingress-nginx-control-plane-controller-j9hsf requesting resource cpu=100m on Node bootstrap + Mar 7 03:51:25.853: INFO: Pod ingress-nginx-control-plane-controller-vqfvn requesting resource cpu=100m on Node node-2 + Mar 7 03:51:25.853: INFO: Pod ingress-nginx-controller-9b2bj requesting resource cpu=100m on Node node-1 + Mar 7 03:51:25.853: INFO: Pod ingress-nginx-controller-bp2qx requesting resource cpu=100m on Node node-2 + Mar 7 03:51:25.853: INFO: Pod ingress-nginx-controller-vjnvw requesting resource cpu=100m on Node bootstrap + Mar 7 03:51:25.853: INFO: Pod ingress-nginx-defaultbackend-75c64bd745-65gwj requesting resource cpu=0m on Node bootstrap + Mar 7 03:51:25.853: INFO: Pod fluent-bit-4nw7s requesting resource cpu=100m on Node node-1 + Mar 7 03:51:25.853: INFO: Pod fluent-bit-dsjnx requesting resource cpu=100m on Node node-2 + Mar 7 03:51:25.853: INFO: Pod fluent-bit-dzhms requesting resource cpu=100m on Node bootstrap + Mar 7 03:51:25.853: INFO: Pod loki-0 requesting resource cpu=0m on Node node-1 + Mar 7 03:51:25.853: INFO: Pod alertmanager-prometheus-operator-alertmanager-0 requesting resource cpu=200m on Node node-1 + Mar 7 
03:51:25.853: INFO: Pod metalk8s-alert-logger-84f87c86d-hflm5 requesting resource cpu=0m on Node bootstrap + Mar 7 03:51:25.853: INFO: Pod prometheus-adapter-6696954b59-qrxtn requesting resource cpu=0m on Node bootstrap + Mar 7 03:51:25.853: INFO: Pod prometheus-operator-grafana-74d86d5965-nj6pq requesting resource cpu=0m on Node node-1 + Mar 7 03:51:25.853: INFO: Pod prometheus-operator-kube-state-metrics-f7d5dc499-t4szw requesting resource cpu=0m on Node bootstrap + Mar 7 03:51:25.853: INFO: Pod prometheus-operator-operator-864bc5b5d-8m6lq requesting resource cpu=0m on Node bootstrap + Mar 7 03:51:25.853: INFO: Pod prometheus-operator-prometheus-node-exporter-4plkr requesting resource cpu=0m on Node node-1 + Mar 7 03:51:25.853: INFO: Pod prometheus-operator-prometheus-node-exporter-6k5g9 requesting resource cpu=0m on Node node-2 + Mar 7 03:51:25.853: INFO: Pod prometheus-operator-prometheus-node-exporter-sl4bq requesting resource cpu=0m on Node bootstrap + Mar 7 03:51:25.853: INFO: Pod prometheus-prometheus-operator-prometheus-0 requesting resource cpu=200m on Node node-1 + Mar 7 03:51:25.853: INFO: Pod thanos-query-6b9dc579dd-ctlrl requesting resource cpu=0m on Node bootstrap + Mar 7 03:51:25.853: INFO: Pod metalk8s-ui-766c8b96cd-8cxcs requesting resource cpu=100m on Node bootstrap + Mar 7 03:51:25.853: INFO: Pod metalk8s-ui-766c8b96cd-tsx5v requesting resource cpu=100m on Node bootstrap + Mar 7 03:51:25.853: INFO: Pod sonobuoy requesting resource cpu=0m on Node node-2 + Mar 7 03:51:25.853: INFO: Pod sonobuoy-e2e-job-441ced38a9a5443b requesting resource cpu=0m on Node node-2 + Mar 7 03:51:25.853: INFO: Pod sonobuoy-systemd-logs-daemon-set-ca72222986ab487c-gktsb requesting resource cpu=0m on Node node-1 + Mar 7 03:51:25.853: INFO: Pod sonobuoy-systemd-logs-daemon-set-ca72222986ab487c-hbmvq requesting resource cpu=0m on Node node-2 + Mar 7 03:51:25.853: INFO: Pod sonobuoy-systemd-logs-daemon-set-ca72222986ab487c-t8mkz requesting resource cpu=0m on Node bootstrap + STEP: Starting Pods to consume most of the cluster CPU. 03/07/23 03:51:25.853 + Mar 7 03:51:25.853: INFO: Creating a pod which consumes cpu=1655m on Node bootstrap + Mar 7 03:51:25.867: INFO: Creating a pod which consumes cpu=1592m on Node node-1 + Mar 7 03:51:25.875: INFO: Creating a pod which consumes cpu=1942m on Node node-2 + Mar 7 03:51:25.880: INFO: Waiting up to 5m0s for pod "filler-pod-5ab73da9-faa2-46b3-bc27-8b7a900a000d" in namespace "sched-pred-9976" to be "running" + Mar 7 03:51:25.890: INFO: Pod "filler-pod-5ab73da9-faa2-46b3-bc27-8b7a900a000d": Phase="Pending", Reason="", readiness=false. Elapsed: 9.936159ms + Mar 7 03:51:27.897: INFO: Pod "filler-pod-5ab73da9-faa2-46b3-bc27-8b7a900a000d": Phase="Running", Reason="", readiness=true. Elapsed: 2.016892669s + Mar 7 03:51:27.897: INFO: Pod "filler-pod-5ab73da9-faa2-46b3-bc27-8b7a900a000d" satisfied condition "running" + Mar 7 03:51:27.897: INFO: Waiting up to 5m0s for pod "filler-pod-75e42987-fd49-47e3-affd-559476025bad" in namespace "sched-pred-9976" to be "running" + Mar 7 03:51:27.900: INFO: Pod "filler-pod-75e42987-fd49-47e3-affd-559476025bad": Phase="Running", Reason="", readiness=true. 
Elapsed: 3.051771ms + Mar 7 03:51:27.900: INFO: Pod "filler-pod-75e42987-fd49-47e3-affd-559476025bad" satisfied condition "running" + Mar 7 03:51:27.900: INFO: Waiting up to 5m0s for pod "filler-pod-9ebabaf2-137f-4544-ab27-02d14a735ea5" in namespace "sched-pred-9976" to be "running" + Mar 7 03:51:27.902: INFO: Pod "filler-pod-9ebabaf2-137f-4544-ab27-02d14a735ea5": Phase="Running", Reason="", readiness=true. Elapsed: 1.919787ms + Mar 7 03:51:27.902: INFO: Pod "filler-pod-9ebabaf2-137f-4544-ab27-02d14a735ea5" satisfied condition "running" + STEP: Creating another pod that requires unavailable amount of CPU. 03/07/23 03:51:27.902 + STEP: Considering event: + Type = [Normal], Name = [filler-pod-5ab73da9-faa2-46b3-bc27-8b7a900a000d.174a069dc769278e], Reason = [Scheduled], Message = [Successfully assigned sched-pred-9976/filler-pod-5ab73da9-faa2-46b3-bc27-8b7a900a000d to bootstrap] 03/07/23 03:51:27.905 + STEP: Considering event: + Type = [Normal], Name = [filler-pod-5ab73da9-faa2-46b3-bc27-8b7a900a000d.174a069dfb75e6bf], Reason = [Pulled], Message = [Container image "registry.k8s.io/pause:3.8" already present on machine] 03/07/23 03:51:27.905 + STEP: Considering event: + Type = [Normal], Name = [filler-pod-5ab73da9-faa2-46b3-bc27-8b7a900a000d.174a069dfd2fffef], Reason = [Created], Message = [Created container filler-pod-5ab73da9-faa2-46b3-bc27-8b7a900a000d] 03/07/23 03:51:27.906 + STEP: Considering event: + Type = [Normal], Name = [filler-pod-5ab73da9-faa2-46b3-bc27-8b7a900a000d.174a069e0927f426], Reason = [Started], Message = [Started container filler-pod-5ab73da9-faa2-46b3-bc27-8b7a900a000d] 03/07/23 03:51:27.906 + STEP: Considering event: + Type = [Normal], Name = [filler-pod-75e42987-fd49-47e3-affd-559476025bad.174a069dc8b4f36f], Reason = [Scheduled], Message = [Successfully assigned sched-pred-9976/filler-pod-75e42987-fd49-47e3-affd-559476025bad to node-1] 03/07/23 03:51:27.906 + STEP: Considering event: + Type = [Normal], Name = [filler-pod-75e42987-fd49-47e3-affd-559476025bad.174a069df10560b7], Reason = [Pulled], Message = [Container image "registry.k8s.io/pause:3.8" already present on machine] 03/07/23 03:51:27.906 + STEP: Considering event: + Type = [Normal], Name = [filler-pod-75e42987-fd49-47e3-affd-559476025bad.174a069df25ffaf4], Reason = [Created], Message = [Created container filler-pod-75e42987-fd49-47e3-affd-559476025bad] 03/07/23 03:51:27.906 + STEP: Considering event: + Type = [Normal], Name = [filler-pod-75e42987-fd49-47e3-affd-559476025bad.174a069dfa835e9b], Reason = [Started], Message = [Started container filler-pod-75e42987-fd49-47e3-affd-559476025bad] 03/07/23 03:51:27.906 + STEP: Considering event: + Type = [Normal], Name = [filler-pod-9ebabaf2-137f-4544-ab27-02d14a735ea5.174a069dc8b7c3da], Reason = [Scheduled], Message = [Successfully assigned sched-pred-9976/filler-pod-9ebabaf2-137f-4544-ab27-02d14a735ea5 to node-2] 03/07/23 03:51:27.906 + STEP: Considering event: + Type = [Normal], Name = [filler-pod-9ebabaf2-137f-4544-ab27-02d14a735ea5.174a069df518cbca], Reason = [Pulled], Message = [Container image "registry.k8s.io/pause:3.8" already present on machine] 03/07/23 03:51:27.906 + STEP: Considering event: + Type = [Normal], Name = [filler-pod-9ebabaf2-137f-4544-ab27-02d14a735ea5.174a069df6a79c27], Reason = [Created], Message = [Created container filler-pod-9ebabaf2-137f-4544-ab27-02d14a735ea5] 03/07/23 03:51:27.906 + STEP: Considering event: + Type = [Normal], Name = [filler-pod-9ebabaf2-137f-4544-ab27-02d14a735ea5.174a069dff48fbd3], Reason = [Started], Message = 
[Started container filler-pod-9ebabaf2-137f-4544-ab27-02d14a735ea5] 03/07/23 03:51:27.906 + STEP: Considering event: + Type = [Warning], Name = [additional-pod.174a069e40d9872c], Reason = [FailedScheduling], Message = [0/3 nodes are available: 3 Insufficient cpu. preemption: 0/3 nodes are available: 3 No preemption victims found for incoming pod.] 03/07/23 03:51:27.921 + STEP: removing the label node off the node bootstrap 03/07/23 03:51:28.915 + STEP: verifying the node doesn't have the label node 03/07/23 03:51:28.928 + STEP: removing the label node off the node node-1 03/07/23 03:51:28.932 + STEP: verifying the node doesn't have the label node 03/07/23 03:51:28.949 + STEP: removing the label node off the node node-2 03/07/23 03:51:28.954 + STEP: verifying the node doesn't have the label node 03/07/23 03:51:28.97 + [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] + test/e2e/framework/framework.go:187 + Mar 7 03:51:28.976: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + STEP: Destroying namespace "sched-pred-9976" for this suite. 03/07/23 03:51:28.981 + [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] + test/e2e/scheduling/predicates.go:83 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-storage] Downward API volume + should provide container's memory request [NodeConformance] [Conformance] + test/e2e/common/storage/downwardapi_volume.go:234 +[BeforeEach] [sig-storage] Downward API volume + test/e2e/framework/framework.go:186 +STEP: Creating a kubernetes client 03/07/23 03:51:28.988 +Mar 7 03:51:28.988: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 +STEP: Building a namespace api object, basename downward-api 03/07/23 03:51:28.993 +STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 03:51:29.007 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 03:51:29.012 +[BeforeEach] [sig-storage] Downward API volume + test/e2e/common/storage/downwardapi_volume.go:43 +[It] should provide container's memory request [NodeConformance] [Conformance] + test/e2e/common/storage/downwardapi_volume.go:234 +STEP: Creating a pod to test downward API volume plugin 03/07/23 03:51:29.015 +Mar 7 03:51:29.023: INFO: Waiting up to 5m0s for pod "downwardapi-volume-5d5093a7-2393-4519-b3b7-69509873e267" in namespace "downward-api-6147" to be "Succeeded or Failed" +Mar 7 03:51:29.034: INFO: Pod "downwardapi-volume-5d5093a7-2393-4519-b3b7-69509873e267": Phase="Pending", Reason="", readiness=false. Elapsed: 11.673768ms +Mar 7 03:51:31.038: INFO: Pod "downwardapi-volume-5d5093a7-2393-4519-b3b7-69509873e267": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015791474s +Mar 7 03:51:33.038: INFO: Pod "downwardapi-volume-5d5093a7-2393-4519-b3b7-69509873e267": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.01501546s +STEP: Saw pod success 03/07/23 03:51:33.038 +Mar 7 03:51:33.038: INFO: Pod "downwardapi-volume-5d5093a7-2393-4519-b3b7-69509873e267" satisfied condition "Succeeded or Failed" +Mar 7 03:51:33.040: INFO: Trying to get logs from node node-2 pod downwardapi-volume-5d5093a7-2393-4519-b3b7-69509873e267 container client-container: +STEP: delete the pod 03/07/23 03:51:33.045 +Mar 7 03:51:33.054: INFO: Waiting for pod downwardapi-volume-5d5093a7-2393-4519-b3b7-69509873e267 to disappear +Mar 7 03:51:33.056: INFO: Pod downwardapi-volume-5d5093a7-2393-4519-b3b7-69509873e267 no longer exists +[AfterEach] [sig-storage] Downward API volume + test/e2e/framework/framework.go:187 +Mar 7 03:51:33.056: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "downward-api-6147" for this suite. 03/07/23 03:51:33.059 +{"msg":"PASSED [sig-storage] Downward API volume should provide container's memory request [NodeConformance] [Conformance]","completed":308,"skipped":5715,"failed":0} +------------------------------ +• [4.077 seconds] +[sig-storage] Downward API volume +test/e2e/common/storage/framework.go:23 + should provide container's memory request [NodeConformance] [Conformance] + test/e2e/common/storage/downwardapi_volume.go:234 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-storage] Downward API volume + test/e2e/framework/framework.go:186 + STEP: Creating a kubernetes client 03/07/23 03:51:28.988 + Mar 7 03:51:28.988: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 + STEP: Building a namespace api object, basename downward-api 03/07/23 03:51:28.993 + STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 03:51:29.007 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 03:51:29.012 + [BeforeEach] [sig-storage] Downward API volume + test/e2e/common/storage/downwardapi_volume.go:43 + [It] should provide container's memory request [NodeConformance] [Conformance] + test/e2e/common/storage/downwardapi_volume.go:234 + STEP: Creating a pod to test downward API volume plugin 03/07/23 03:51:29.015 + Mar 7 03:51:29.023: INFO: Waiting up to 5m0s for pod "downwardapi-volume-5d5093a7-2393-4519-b3b7-69509873e267" in namespace "downward-api-6147" to be "Succeeded or Failed" + Mar 7 03:51:29.034: INFO: Pod "downwardapi-volume-5d5093a7-2393-4519-b3b7-69509873e267": Phase="Pending", Reason="", readiness=false. Elapsed: 11.673768ms + Mar 7 03:51:31.038: INFO: Pod "downwardapi-volume-5d5093a7-2393-4519-b3b7-69509873e267": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015791474s + Mar 7 03:51:33.038: INFO: Pod "downwardapi-volume-5d5093a7-2393-4519-b3b7-69509873e267": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.01501546s + STEP: Saw pod success 03/07/23 03:51:33.038 + Mar 7 03:51:33.038: INFO: Pod "downwardapi-volume-5d5093a7-2393-4519-b3b7-69509873e267" satisfied condition "Succeeded or Failed" + Mar 7 03:51:33.040: INFO: Trying to get logs from node node-2 pod downwardapi-volume-5d5093a7-2393-4519-b3b7-69509873e267 container client-container: + STEP: delete the pod 03/07/23 03:51:33.045 + Mar 7 03:51:33.054: INFO: Waiting for pod downwardapi-volume-5d5093a7-2393-4519-b3b7-69509873e267 to disappear + Mar 7 03:51:33.056: INFO: Pod downwardapi-volume-5d5093a7-2393-4519-b3b7-69509873e267 no longer exists + [AfterEach] [sig-storage] Downward API volume + test/e2e/framework/framework.go:187 + Mar 7 03:51:33.056: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + STEP: Destroying namespace "downward-api-6147" for this suite. 03/07/23 03:51:33.059 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-node] PodTemplates + should replace a pod template [Conformance] + test/e2e/common/node/podtemplates.go:176 +[BeforeEach] [sig-node] PodTemplates + test/e2e/framework/framework.go:186 +STEP: Creating a kubernetes client 03/07/23 03:51:33.068 +Mar 7 03:51:33.068: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 +STEP: Building a namespace api object, basename podtemplate 03/07/23 03:51:33.068 +STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 03:51:33.082 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 03:51:33.084 +[It] should replace a pod template [Conformance] + test/e2e/common/node/podtemplates.go:176 +STEP: Create a pod template 03/07/23 03:51:33.086 +STEP: Replace a pod template 03/07/23 03:51:33.091 +Mar 7 03:51:33.097: INFO: Found updated podtemplate annotation: "true" + +[AfterEach] [sig-node] PodTemplates + test/e2e/framework/framework.go:187 +Mar 7 03:51:33.097: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "podtemplate-2084" for this suite. 03/07/23 03:51:33.1 +{"msg":"PASSED [sig-node] PodTemplates should replace a pod template [Conformance]","completed":309,"skipped":5790,"failed":0} +------------------------------ +• [0.038 seconds] +[sig-node] PodTemplates +test/e2e/common/node/framework.go:23 + should replace a pod template [Conformance] + test/e2e/common/node/podtemplates.go:176 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-node] PodTemplates + test/e2e/framework/framework.go:186 + STEP: Creating a kubernetes client 03/07/23 03:51:33.068 + Mar 7 03:51:33.068: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 + STEP: Building a namespace api object, basename podtemplate 03/07/23 03:51:33.068 + STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 03:51:33.082 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 03:51:33.084 + [It] should replace a pod template [Conformance] + test/e2e/common/node/podtemplates.go:176 + STEP: Create a pod template 03/07/23 03:51:33.086 + STEP: Replace a pod template 03/07/23 03:51:33.091 + Mar 7 03:51:33.097: INFO: Found updated podtemplate annotation: "true" + + [AfterEach] [sig-node] PodTemplates + test/e2e/framework/framework.go:187 + Mar 7 03:51:33.097: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + STEP: Destroying namespace "podtemplate-2084" for this suite. 
03/07/23 03:51:33.1 + << End Captured GinkgoWriter Output +------------------------------ +SSS +------------------------------ +[sig-storage] ConfigMap + should be consumable from pods in volume with mappings [NodeConformance] [Conformance] + test/e2e/common/storage/configmap_volume.go:88 +[BeforeEach] [sig-storage] ConfigMap + test/e2e/framework/framework.go:186 +STEP: Creating a kubernetes client 03/07/23 03:51:33.106 +Mar 7 03:51:33.106: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 +STEP: Building a namespace api object, basename configmap 03/07/23 03:51:33.106 +STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 03:51:33.119 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 03:51:33.121 +[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] + test/e2e/common/storage/configmap_volume.go:88 +STEP: Creating configMap with name configmap-test-volume-map-020076b6-a90f-4eb1-848d-cb6ff28ed485 03/07/23 03:51:33.129 +STEP: Creating a pod to test consume configMaps 03/07/23 03:51:33.132 +Mar 7 03:51:33.138: INFO: Waiting up to 5m0s for pod "pod-configmaps-fce0a1f3-bc0a-4aeb-b928-c6b0e15e8583" in namespace "configmap-2346" to be "Succeeded or Failed" +Mar 7 03:51:33.141: INFO: Pod "pod-configmaps-fce0a1f3-bc0a-4aeb-b928-c6b0e15e8583": Phase="Pending", Reason="", readiness=false. Elapsed: 2.260456ms +Mar 7 03:51:35.144: INFO: Pod "pod-configmaps-fce0a1f3-bc0a-4aeb-b928-c6b0e15e8583": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00527033s +Mar 7 03:51:37.144: INFO: Pod "pod-configmaps-fce0a1f3-bc0a-4aeb-b928-c6b0e15e8583": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.006059998s +STEP: Saw pod success 03/07/23 03:51:37.144 +Mar 7 03:51:37.145: INFO: Pod "pod-configmaps-fce0a1f3-bc0a-4aeb-b928-c6b0e15e8583" satisfied condition "Succeeded or Failed" +Mar 7 03:51:37.147: INFO: Trying to get logs from node node-2 pod pod-configmaps-fce0a1f3-bc0a-4aeb-b928-c6b0e15e8583 container agnhost-container: +STEP: delete the pod 03/07/23 03:51:37.152 +Mar 7 03:51:37.163: INFO: Waiting for pod pod-configmaps-fce0a1f3-bc0a-4aeb-b928-c6b0e15e8583 to disappear +Mar 7 03:51:37.165: INFO: Pod pod-configmaps-fce0a1f3-bc0a-4aeb-b928-c6b0e15e8583 no longer exists +[AfterEach] [sig-storage] ConfigMap + test/e2e/framework/framework.go:187 +Mar 7 03:51:37.165: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "configmap-2346" for this suite. 
03/07/23 03:51:37.168 +{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","completed":310,"skipped":5793,"failed":0} +------------------------------ +• [4.068 seconds] +[sig-storage] ConfigMap +test/e2e/common/storage/framework.go:23 + should be consumable from pods in volume with mappings [NodeConformance] [Conformance] + test/e2e/common/storage/configmap_volume.go:88 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-storage] ConfigMap + test/e2e/framework/framework.go:186 + STEP: Creating a kubernetes client 03/07/23 03:51:33.106 + Mar 7 03:51:33.106: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 + STEP: Building a namespace api object, basename configmap 03/07/23 03:51:33.106 + STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 03:51:33.119 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 03:51:33.121 + [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] + test/e2e/common/storage/configmap_volume.go:88 + STEP: Creating configMap with name configmap-test-volume-map-020076b6-a90f-4eb1-848d-cb6ff28ed485 03/07/23 03:51:33.129 + STEP: Creating a pod to test consume configMaps 03/07/23 03:51:33.132 + Mar 7 03:51:33.138: INFO: Waiting up to 5m0s for pod "pod-configmaps-fce0a1f3-bc0a-4aeb-b928-c6b0e15e8583" in namespace "configmap-2346" to be "Succeeded or Failed" + Mar 7 03:51:33.141: INFO: Pod "pod-configmaps-fce0a1f3-bc0a-4aeb-b928-c6b0e15e8583": Phase="Pending", Reason="", readiness=false. Elapsed: 2.260456ms + Mar 7 03:51:35.144: INFO: Pod "pod-configmaps-fce0a1f3-bc0a-4aeb-b928-c6b0e15e8583": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00527033s + Mar 7 03:51:37.144: INFO: Pod "pod-configmaps-fce0a1f3-bc0a-4aeb-b928-c6b0e15e8583": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.006059998s + STEP: Saw pod success 03/07/23 03:51:37.144 + Mar 7 03:51:37.145: INFO: Pod "pod-configmaps-fce0a1f3-bc0a-4aeb-b928-c6b0e15e8583" satisfied condition "Succeeded or Failed" + Mar 7 03:51:37.147: INFO: Trying to get logs from node node-2 pod pod-configmaps-fce0a1f3-bc0a-4aeb-b928-c6b0e15e8583 container agnhost-container: + STEP: delete the pod 03/07/23 03:51:37.152 + Mar 7 03:51:37.163: INFO: Waiting for pod pod-configmaps-fce0a1f3-bc0a-4aeb-b928-c6b0e15e8583 to disappear + Mar 7 03:51:37.165: INFO: Pod pod-configmaps-fce0a1f3-bc0a-4aeb-b928-c6b0e15e8583 no longer exists + [AfterEach] [sig-storage] ConfigMap + test/e2e/framework/framework.go:187 + Mar 7 03:51:37.165: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + STEP: Destroying namespace "configmap-2346" for this suite. 03/07/23 03:51:37.168 + << End Captured GinkgoWriter Output +------------------------------ +SSSS +------------------------------ +[sig-api-machinery] ResourceQuota + should create a ResourceQuota and ensure its status is promptly calculated. 
[Conformance] + test/e2e/apimachinery/resource_quota.go:65 +[BeforeEach] [sig-api-machinery] ResourceQuota + test/e2e/framework/framework.go:186 +STEP: Creating a kubernetes client 03/07/23 03:51:37.174 +Mar 7 03:51:37.174: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 +STEP: Building a namespace api object, basename resourcequota 03/07/23 03:51:37.176 +STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 03:51:37.187 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 03:51:37.189 +[It] should create a ResourceQuota and ensure its status is promptly calculated. [Conformance] + test/e2e/apimachinery/resource_quota.go:65 +STEP: Counting existing ResourceQuota 03/07/23 03:51:37.191 +STEP: Creating a ResourceQuota 03/07/23 03:51:42.195 +STEP: Ensuring resource quota status is calculated 03/07/23 03:51:42.216 +[AfterEach] [sig-api-machinery] ResourceQuota + test/e2e/framework/framework.go:187 +Mar 7 03:51:44.220: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "resourcequota-6151" for this suite. 03/07/23 03:51:44.223 +{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and ensure its status is promptly calculated. [Conformance]","completed":311,"skipped":5797,"failed":0} +------------------------------ +• [SLOW TEST] [7.077 seconds] +[sig-api-machinery] ResourceQuota +test/e2e/apimachinery/framework.go:23 + should create a ResourceQuota and ensure its status is promptly calculated. [Conformance] + test/e2e/apimachinery/resource_quota.go:65 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-api-machinery] ResourceQuota + test/e2e/framework/framework.go:186 + STEP: Creating a kubernetes client 03/07/23 03:51:37.174 + Mar 7 03:51:37.174: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 + STEP: Building a namespace api object, basename resourcequota 03/07/23 03:51:37.176 + STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 03:51:37.187 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 03:51:37.189 + [It] should create a ResourceQuota and ensure its status is promptly calculated. [Conformance] + test/e2e/apimachinery/resource_quota.go:65 + STEP: Counting existing ResourceQuota 03/07/23 03:51:37.191 + STEP: Creating a ResourceQuota 03/07/23 03:51:42.195 + STEP: Ensuring resource quota status is calculated 03/07/23 03:51:42.216 + [AfterEach] [sig-api-machinery] ResourceQuota + test/e2e/framework/framework.go:187 + Mar 7 03:51:44.220: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + STEP: Destroying namespace "resourcequota-6151" for this suite. 
03/07/23 03:51:44.223 + << End Captured GinkgoWriter Output +------------------------------ +SSS +------------------------------ +[sig-storage] ConfigMap + should be immutable if `immutable` field is set [Conformance] + test/e2e/common/storage/configmap_volume.go:503 +[BeforeEach] [sig-storage] ConfigMap + test/e2e/framework/framework.go:186 +STEP: Creating a kubernetes client 03/07/23 03:51:44.251 +Mar 7 03:51:44.251: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 +STEP: Building a namespace api object, basename configmap 03/07/23 03:51:44.253 +STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 03:51:44.269 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 03:51:44.271 +[It] should be immutable if `immutable` field is set [Conformance] + test/e2e/common/storage/configmap_volume.go:503 +[AfterEach] [sig-storage] ConfigMap + test/e2e/framework/framework.go:187 +Mar 7 03:51:44.299: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "configmap-2693" for this suite. 03/07/23 03:51:44.301 +{"msg":"PASSED [sig-storage] ConfigMap should be immutable if `immutable` field is set [Conformance]","completed":312,"skipped":5800,"failed":0} +------------------------------ +• [0.054 seconds] +[sig-storage] ConfigMap +test/e2e/common/storage/framework.go:23 + should be immutable if `immutable` field is set [Conformance] + test/e2e/common/storage/configmap_volume.go:503 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-storage] ConfigMap + test/e2e/framework/framework.go:186 + STEP: Creating a kubernetes client 03/07/23 03:51:44.251 + Mar 7 03:51:44.251: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 + STEP: Building a namespace api object, basename configmap 03/07/23 03:51:44.253 + STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 03:51:44.269 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 03:51:44.271 + [It] should be immutable if `immutable` field is set [Conformance] + test/e2e/common/storage/configmap_volume.go:503 + [AfterEach] [sig-storage] ConfigMap + test/e2e/framework/framework.go:187 + Mar 7 03:51:44.299: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + STEP: Destroying namespace "configmap-2693" for this suite. 
03/07/23 03:51:44.301 + << End Captured GinkgoWriter Output +------------------------------ +SSSSS +------------------------------ +[sig-apps] Daemon set [Serial] + should rollback without unnecessary restarts [Conformance] + test/e2e/apps/daemon_set.go:431 +[BeforeEach] [sig-apps] Daemon set [Serial] + test/e2e/framework/framework.go:186 +STEP: Creating a kubernetes client 03/07/23 03:51:44.306 +Mar 7 03:51:44.306: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 +STEP: Building a namespace api object, basename daemonsets 03/07/23 03:51:44.307 +STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 03:51:44.319 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 03:51:44.322 +[BeforeEach] [sig-apps] Daemon set [Serial] + test/e2e/apps/daemon_set.go:145 +[It] should rollback without unnecessary restarts [Conformance] + test/e2e/apps/daemon_set.go:431 +Mar 7 03:51:44.344: INFO: Create a RollingUpdate DaemonSet +Mar 7 03:51:44.348: INFO: Check that daemon pods launch on every node of the cluster +Mar 7 03:51:44.357: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 +Mar 7 03:51:44.357: INFO: Node bootstrap is running 0 daemon pod, expected 1 +Mar 7 03:51:45.364: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 +Mar 7 03:51:45.364: INFO: Node bootstrap is running 0 daemon pod, expected 1 +Mar 7 03:51:46.363: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 3 +Mar 7 03:51:46.363: INFO: Number of running nodes: 3, number of available pods: 3 in daemonset daemon-set +Mar 7 03:51:46.363: INFO: Update the DaemonSet to trigger a rollout +Mar 7 03:51:46.369: INFO: Updating DaemonSet daemon-set +Mar 7 03:51:49.382: INFO: Roll back the DaemonSet before rollout is complete +Mar 7 03:51:49.391: INFO: Updating DaemonSet daemon-set +Mar 7 03:51:49.391: INFO: Make sure DaemonSet rollback is complete +Mar 7 03:51:49.398: INFO: Wrong image for pod: daemon-set-tfzxq. Expected: registry.k8s.io/e2e-test-images/httpd:2.4.38-2, got: foo:non-existent. +Mar 7 03:51:49.398: INFO: Pod daemon-set-tfzxq is not available +Mar 7 03:51:52.410: INFO: Pod daemon-set-x86mk is not available +[AfterEach] [sig-apps] Daemon set [Serial] + test/e2e/apps/daemon_set.go:110 +STEP: Deleting DaemonSet "daemon-set" 03/07/23 03:51:52.418 +STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-6187, will wait for the garbage collector to delete the pods 03/07/23 03:51:52.418 +Mar 7 03:51:52.475: INFO: Deleting DaemonSet.extensions daemon-set took: 4.700519ms +Mar 7 03:51:52.575: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.212769ms +Mar 7 03:51:55.578: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 +Mar 7 03:51:55.578: INFO: Number of running nodes: 0, number of available pods: 0 in daemonset daemon-set +Mar 7 03:51:55.580: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"resourceVersion":"74384"},"items":null} + +Mar 7 03:51:55.583: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"74384"},"items":null} + +[AfterEach] [sig-apps] Daemon set [Serial] + test/e2e/framework/framework.go:187 +Mar 7 03:51:55.594: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "daemonsets-6187" for this suite. 
03/07/23 03:51:55.597 +{"msg":"PASSED [sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]","completed":313,"skipped":5805,"failed":0} +------------------------------ +• [SLOW TEST] [11.296 seconds] +[sig-apps] Daemon set [Serial] +test/e2e/apps/framework.go:23 + should rollback without unnecessary restarts [Conformance] + test/e2e/apps/daemon_set.go:431 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-apps] Daemon set [Serial] + test/e2e/framework/framework.go:186 + STEP: Creating a kubernetes client 03/07/23 03:51:44.306 + Mar 7 03:51:44.306: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 + STEP: Building a namespace api object, basename daemonsets 03/07/23 03:51:44.307 + STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 03:51:44.319 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 03:51:44.322 + [BeforeEach] [sig-apps] Daemon set [Serial] + test/e2e/apps/daemon_set.go:145 + [It] should rollback without unnecessary restarts [Conformance] + test/e2e/apps/daemon_set.go:431 + Mar 7 03:51:44.344: INFO: Create a RollingUpdate DaemonSet + Mar 7 03:51:44.348: INFO: Check that daemon pods launch on every node of the cluster + Mar 7 03:51:44.357: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 + Mar 7 03:51:44.357: INFO: Node bootstrap is running 0 daemon pod, expected 1 + Mar 7 03:51:45.364: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 + Mar 7 03:51:45.364: INFO: Node bootstrap is running 0 daemon pod, expected 1 + Mar 7 03:51:46.363: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 3 + Mar 7 03:51:46.363: INFO: Number of running nodes: 3, number of available pods: 3 in daemonset daemon-set + Mar 7 03:51:46.363: INFO: Update the DaemonSet to trigger a rollout + Mar 7 03:51:46.369: INFO: Updating DaemonSet daemon-set + Mar 7 03:51:49.382: INFO: Roll back the DaemonSet before rollout is complete + Mar 7 03:51:49.391: INFO: Updating DaemonSet daemon-set + Mar 7 03:51:49.391: INFO: Make sure DaemonSet rollback is complete + Mar 7 03:51:49.398: INFO: Wrong image for pod: daemon-set-tfzxq. Expected: registry.k8s.io/e2e-test-images/httpd:2.4.38-2, got: foo:non-existent. 
+ Mar 7 03:51:49.398: INFO: Pod daemon-set-tfzxq is not available + Mar 7 03:51:52.410: INFO: Pod daemon-set-x86mk is not available + [AfterEach] [sig-apps] Daemon set [Serial] + test/e2e/apps/daemon_set.go:110 + STEP: Deleting DaemonSet "daemon-set" 03/07/23 03:51:52.418 + STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-6187, will wait for the garbage collector to delete the pods 03/07/23 03:51:52.418 + Mar 7 03:51:52.475: INFO: Deleting DaemonSet.extensions daemon-set took: 4.700519ms + Mar 7 03:51:52.575: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.212769ms + Mar 7 03:51:55.578: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 + Mar 7 03:51:55.578: INFO: Number of running nodes: 0, number of available pods: 0 in daemonset daemon-set + Mar 7 03:51:55.580: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"resourceVersion":"74384"},"items":null} + + Mar 7 03:51:55.583: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"74384"},"items":null} + + [AfterEach] [sig-apps] Daemon set [Serial] + test/e2e/framework/framework.go:187 + Mar 7 03:51:55.594: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + STEP: Destroying namespace "daemonsets-6187" for this suite. 03/07/23 03:51:55.597 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-storage] Projected combined + should project all components that make up the projection API [Projection][NodeConformance] [Conformance] + test/e2e/common/storage/projected_combined.go:43 +[BeforeEach] [sig-storage] Projected combined + test/e2e/framework/framework.go:186 +STEP: Creating a kubernetes client 03/07/23 03:51:55.603 +Mar 7 03:51:55.603: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 +STEP: Building a namespace api object, basename projected 03/07/23 03:51:55.603 +STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 03:51:55.616 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 03:51:55.618 +[It] should project all components that make up the projection API [Projection][NodeConformance] [Conformance] + test/e2e/common/storage/projected_combined.go:43 +STEP: Creating configMap with name configmap-projected-all-test-volume-3eea812d-8ac3-425b-8de1-76a138c7081d 03/07/23 03:51:55.62 +STEP: Creating secret with name secret-projected-all-test-volume-83f69a7a-f9cf-45cc-831c-3f703eeb1b9a 03/07/23 03:51:55.625 +STEP: Creating a pod to test Check all projections for projected volume plugin 03/07/23 03:51:55.628 +Mar 7 03:51:55.636: INFO: Waiting up to 5m0s for pod "projected-volume-a770277a-94ab-454d-8aa8-649a00ad1ca5" in namespace "projected-142" to be "Succeeded or Failed" +Mar 7 03:51:55.640: INFO: Pod "projected-volume-a770277a-94ab-454d-8aa8-649a00ad1ca5": Phase="Pending", Reason="", readiness=false. Elapsed: 4.000346ms +Mar 7 03:51:57.645: INFO: Pod "projected-volume-a770277a-94ab-454d-8aa8-649a00ad1ca5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008590506s +Mar 7 03:51:59.643: INFO: Pod "projected-volume-a770277a-94ab-454d-8aa8-649a00ad1ca5": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.007337017s +STEP: Saw pod success 03/07/23 03:51:59.643 +Mar 7 03:51:59.643: INFO: Pod "projected-volume-a770277a-94ab-454d-8aa8-649a00ad1ca5" satisfied condition "Succeeded or Failed" +Mar 7 03:51:59.645: INFO: Trying to get logs from node node-2 pod projected-volume-a770277a-94ab-454d-8aa8-649a00ad1ca5 container projected-all-volume-test: +STEP: delete the pod 03/07/23 03:51:59.65 +Mar 7 03:51:59.660: INFO: Waiting for pod projected-volume-a770277a-94ab-454d-8aa8-649a00ad1ca5 to disappear +Mar 7 03:51:59.662: INFO: Pod projected-volume-a770277a-94ab-454d-8aa8-649a00ad1ca5 no longer exists +[AfterEach] [sig-storage] Projected combined + test/e2e/framework/framework.go:187 +Mar 7 03:51:59.662: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "projected-142" for this suite. 03/07/23 03:51:59.665 +{"msg":"PASSED [sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance]","completed":314,"skipped":5825,"failed":0} +------------------------------ +• [4.066 seconds] +[sig-storage] Projected combined +test/e2e/common/storage/framework.go:23 + should project all components that make up the projection API [Projection][NodeConformance] [Conformance] + test/e2e/common/storage/projected_combined.go:43 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-storage] Projected combined + test/e2e/framework/framework.go:186 + STEP: Creating a kubernetes client 03/07/23 03:51:55.603 + Mar 7 03:51:55.603: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 + STEP: Building a namespace api object, basename projected 03/07/23 03:51:55.603 + STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 03:51:55.616 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 03:51:55.618 + [It] should project all components that make up the projection API [Projection][NodeConformance] [Conformance] + test/e2e/common/storage/projected_combined.go:43 + STEP: Creating configMap with name configmap-projected-all-test-volume-3eea812d-8ac3-425b-8de1-76a138c7081d 03/07/23 03:51:55.62 + STEP: Creating secret with name secret-projected-all-test-volume-83f69a7a-f9cf-45cc-831c-3f703eeb1b9a 03/07/23 03:51:55.625 + STEP: Creating a pod to test Check all projections for projected volume plugin 03/07/23 03:51:55.628 + Mar 7 03:51:55.636: INFO: Waiting up to 5m0s for pod "projected-volume-a770277a-94ab-454d-8aa8-649a00ad1ca5" in namespace "projected-142" to be "Succeeded or Failed" + Mar 7 03:51:55.640: INFO: Pod "projected-volume-a770277a-94ab-454d-8aa8-649a00ad1ca5": Phase="Pending", Reason="", readiness=false. Elapsed: 4.000346ms + Mar 7 03:51:57.645: INFO: Pod "projected-volume-a770277a-94ab-454d-8aa8-649a00ad1ca5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008590506s + Mar 7 03:51:59.643: INFO: Pod "projected-volume-a770277a-94ab-454d-8aa8-649a00ad1ca5": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.007337017s + STEP: Saw pod success 03/07/23 03:51:59.643 + Mar 7 03:51:59.643: INFO: Pod "projected-volume-a770277a-94ab-454d-8aa8-649a00ad1ca5" satisfied condition "Succeeded or Failed" + Mar 7 03:51:59.645: INFO: Trying to get logs from node node-2 pod projected-volume-a770277a-94ab-454d-8aa8-649a00ad1ca5 container projected-all-volume-test: + STEP: delete the pod 03/07/23 03:51:59.65 + Mar 7 03:51:59.660: INFO: Waiting for pod projected-volume-a770277a-94ab-454d-8aa8-649a00ad1ca5 to disappear + Mar 7 03:51:59.662: INFO: Pod projected-volume-a770277a-94ab-454d-8aa8-649a00ad1ca5 no longer exists + [AfterEach] [sig-storage] Projected combined + test/e2e/framework/framework.go:187 + Mar 7 03:51:59.662: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + STEP: Destroying namespace "projected-142" for this suite. 03/07/23 03:51:59.665 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSSSSSSSSS +------------------------------ +[sig-apps] ReplicaSet + Replicaset should have a working scale subresource [Conformance] + test/e2e/apps/replica_set.go:143 +[BeforeEach] [sig-apps] ReplicaSet + test/e2e/framework/framework.go:186 +STEP: Creating a kubernetes client 03/07/23 03:51:59.67 +Mar 7 03:51:59.670: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 +STEP: Building a namespace api object, basename replicaset 03/07/23 03:51:59.67 +STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 03:51:59.683 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 03:51:59.685 +[It] Replicaset should have a working scale subresource [Conformance] + test/e2e/apps/replica_set.go:143 +STEP: Creating replica set "test-rs" that asks for more than the allowed pod quota 03/07/23 03:51:59.687 +Mar 7 03:51:59.693: INFO: Pod name sample-pod: Found 0 pods out of 1 +Mar 7 03:52:04.696: INFO: Pod name sample-pod: Found 1 pods out of 1 +STEP: ensuring each pod is running 03/07/23 03:52:04.696 +STEP: getting scale subresource 03/07/23 03:52:04.696 +STEP: updating a scale subresource 03/07/23 03:52:04.698 +STEP: verifying the replicaset Spec.Replicas was modified 03/07/23 03:52:04.703 +STEP: Patch a scale subresource 03/07/23 03:52:04.706 +[AfterEach] [sig-apps] ReplicaSet + test/e2e/framework/framework.go:187 +Mar 7 03:52:04.714: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "replicaset-5141" for this suite. 
03/07/23 03:52:04.718 +{"msg":"PASSED [sig-apps] ReplicaSet Replicaset should have a working scale subresource [Conformance]","completed":315,"skipped":5843,"failed":0} +------------------------------ +• [SLOW TEST] [5.056 seconds] +[sig-apps] ReplicaSet +test/e2e/apps/framework.go:23 + Replicaset should have a working scale subresource [Conformance] + test/e2e/apps/replica_set.go:143 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-apps] ReplicaSet + test/e2e/framework/framework.go:186 + STEP: Creating a kubernetes client 03/07/23 03:51:59.67 + Mar 7 03:51:59.670: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 + STEP: Building a namespace api object, basename replicaset 03/07/23 03:51:59.67 + STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 03:51:59.683 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 03:51:59.685 + [It] Replicaset should have a working scale subresource [Conformance] + test/e2e/apps/replica_set.go:143 + STEP: Creating replica set "test-rs" that asks for more than the allowed pod quota 03/07/23 03:51:59.687 + Mar 7 03:51:59.693: INFO: Pod name sample-pod: Found 0 pods out of 1 + Mar 7 03:52:04.696: INFO: Pod name sample-pod: Found 1 pods out of 1 + STEP: ensuring each pod is running 03/07/23 03:52:04.696 + STEP: getting scale subresource 03/07/23 03:52:04.696 + STEP: updating a scale subresource 03/07/23 03:52:04.698 + STEP: verifying the replicaset Spec.Replicas was modified 03/07/23 03:52:04.703 + STEP: Patch a scale subresource 03/07/23 03:52:04.706 + [AfterEach] [sig-apps] ReplicaSet + test/e2e/framework/framework.go:187 + Mar 7 03:52:04.714: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + STEP: Destroying namespace "replicaset-5141" for this suite. 
03/07/23 03:52:04.718 + << End Captured GinkgoWriter Output +------------------------------ +SSSSS +------------------------------ +[sig-network] Services + should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance] + test/e2e/network/service.go:2237 +[BeforeEach] [sig-network] Services + test/e2e/framework/framework.go:186 +STEP: Creating a kubernetes client 03/07/23 03:52:04.728 +Mar 7 03:52:04.728: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 +STEP: Building a namespace api object, basename services 03/07/23 03:52:04.73 +STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 03:52:04.746 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 03:52:04.748 +[BeforeEach] [sig-network] Services + test/e2e/network/service.go:758 +[It] should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance] + test/e2e/network/service.go:2237 +STEP: creating service in namespace services-5425 03/07/23 03:52:04.75 +STEP: creating service affinity-nodeport-transition in namespace services-5425 03/07/23 03:52:04.75 +STEP: creating replication controller affinity-nodeport-transition in namespace services-5425 03/07/23 03:52:04.765 +I0307 03:52:04.770803 22 runners.go:193] Created replication controller with name: affinity-nodeport-transition, namespace: services-5425, replica count: 3 +I0307 03:52:07.822374 22 runners.go:193] affinity-nodeport-transition Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady +Mar 7 03:52:07.829: INFO: Creating new exec pod +Mar 7 03:52:07.835: INFO: Waiting up to 5m0s for pod "execpod-affinity5v5l6" in namespace "services-5425" to be "running" +Mar 7 03:52:07.838: INFO: Pod "execpod-affinity5v5l6": Phase="Pending", Reason="", readiness=false. Elapsed: 3.238451ms +Mar 7 03:52:09.848: INFO: Pod "execpod-affinity5v5l6": Phase="Running", Reason="", readiness=true. 
Elapsed: 2.013220938s +Mar 7 03:52:09.848: INFO: Pod "execpod-affinity5v5l6" satisfied condition "running" +Mar 7 03:52:10.853: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1093879902 --namespace=services-5425 exec execpod-affinity5v5l6 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport-transition 80' +Mar 7 03:52:11.034: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 affinity-nodeport-transition 80\nConnection to affinity-nodeport-transition 80 port [tcp/http] succeeded!\n" +Mar 7 03:52:11.034: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request" +Mar 7 03:52:11.034: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1093879902 --namespace=services-5425 exec execpod-affinity5v5l6 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.106.123.161 80' +Mar 7 03:52:11.217: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 10.106.123.161 80\nConnection to 10.106.123.161 80 port [tcp/http] succeeded!\n" +Mar 7 03:52:11.217: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request" +Mar 7 03:52:11.217: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1093879902 --namespace=services-5425 exec execpod-affinity5v5l6 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 192.168.1.100 31732' +Mar 7 03:52:11.399: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 192.168.1.100 31732\nConnection to 192.168.1.100 31732 port [tcp/*] succeeded!\n" +Mar 7 03:52:11.399: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request" +Mar 7 03:52:11.399: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1093879902 --namespace=services-5425 exec execpod-affinity5v5l6 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 192.168.1.101 31732' +Mar 7 03:52:11.597: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 192.168.1.101 31732\nConnection to 192.168.1.101 31732 port [tcp/*] succeeded!\n" +Mar 7 03:52:11.597: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request" +Mar 7 03:52:11.604: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1093879902 --namespace=services-5425 exec execpod-affinity5v5l6 -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://192.168.1.100:31732/ ; done' +Mar 7 03:52:11.854: INFO: stderr: "+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://192.168.1.100:31732/\n+ echo\n+ curl -q -s --connect-timeout 2 http://192.168.1.100:31732/\n+ echo\n+ curl -q -s --connect-timeout 2 http://192.168.1.100:31732/\n+ echo\n+ curl -q -s --connect-timeout 2 http://192.168.1.100:31732/\n+ echo\n+ curl -q -s --connect-timeout 2 http://192.168.1.100:31732/\n+ echo\n+ curl -q -s --connect-timeout 2 http://192.168.1.100:31732/\n+ echo\n+ curl -q -s --connect-timeout 2 http://192.168.1.100:31732/\n+ echo\n+ curl -q -s --connect-timeout 2 http://192.168.1.100:31732/\n+ echo\n+ curl -q -s --connect-timeout 2 http://192.168.1.100:31732/\n+ echo\n+ curl -q -s --connect-timeout 2 http://192.168.1.100:31732/\n+ echo\n+ curl -q -s --connect-timeout 2 http://192.168.1.100:31732/\n+ echo\n+ curl -q -s --connect-timeout 2 http://192.168.1.100:31732/\n+ echo\n+ curl -q -s --connect-timeout 2 http://192.168.1.100:31732/\n+ echo\n+ curl -q -s --connect-timeout 2 http://192.168.1.100:31732/\n+ echo\n+ curl -q -s 
--connect-timeout 2 http://192.168.1.100:31732/\n+ echo\n+ curl -q -s --connect-timeout 2 http://192.168.1.100:31732/\n" +Mar 7 03:52:11.854: INFO: stdout: "\naffinity-nodeport-transition-wvwz8\naffinity-nodeport-transition-wvwz8\naffinity-nodeport-transition-4b5pt\naffinity-nodeport-transition-sb9lv\naffinity-nodeport-transition-sb9lv\naffinity-nodeport-transition-wvwz8\naffinity-nodeport-transition-4b5pt\naffinity-nodeport-transition-wvwz8\naffinity-nodeport-transition-4b5pt\naffinity-nodeport-transition-sb9lv\naffinity-nodeport-transition-4b5pt\naffinity-nodeport-transition-wvwz8\naffinity-nodeport-transition-wvwz8\naffinity-nodeport-transition-4b5pt\naffinity-nodeport-transition-4b5pt\naffinity-nodeport-transition-sb9lv" +Mar 7 03:52:11.854: INFO: Received response from host: affinity-nodeport-transition-wvwz8 +Mar 7 03:52:11.854: INFO: Received response from host: affinity-nodeport-transition-wvwz8 +Mar 7 03:52:11.854: INFO: Received response from host: affinity-nodeport-transition-4b5pt +Mar 7 03:52:11.854: INFO: Received response from host: affinity-nodeport-transition-sb9lv +Mar 7 03:52:11.854: INFO: Received response from host: affinity-nodeport-transition-sb9lv +Mar 7 03:52:11.854: INFO: Received response from host: affinity-nodeport-transition-wvwz8 +Mar 7 03:52:11.854: INFO: Received response from host: affinity-nodeport-transition-4b5pt +Mar 7 03:52:11.854: INFO: Received response from host: affinity-nodeport-transition-wvwz8 +Mar 7 03:52:11.854: INFO: Received response from host: affinity-nodeport-transition-4b5pt +Mar 7 03:52:11.854: INFO: Received response from host: affinity-nodeport-transition-sb9lv +Mar 7 03:52:11.854: INFO: Received response from host: affinity-nodeport-transition-4b5pt +Mar 7 03:52:11.854: INFO: Received response from host: affinity-nodeport-transition-wvwz8 +Mar 7 03:52:11.854: INFO: Received response from host: affinity-nodeport-transition-wvwz8 +Mar 7 03:52:11.854: INFO: Received response from host: affinity-nodeport-transition-4b5pt +Mar 7 03:52:11.854: INFO: Received response from host: affinity-nodeport-transition-4b5pt +Mar 7 03:52:11.854: INFO: Received response from host: affinity-nodeport-transition-sb9lv +Mar 7 03:52:11.861: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1093879902 --namespace=services-5425 exec execpod-affinity5v5l6 -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://192.168.1.100:31732/ ; done' +Mar 7 03:52:12.105: INFO: stderr: "+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://192.168.1.100:31732/\n+ echo\n+ curl -q -s --connect-timeout 2 http://192.168.1.100:31732/\n+ echo\n+ curl -q -s --connect-timeout 2 http://192.168.1.100:31732/\n+ echo\n+ curl -q -s --connect-timeout 2 http://192.168.1.100:31732/\n+ echo\n+ curl -q -s --connect-timeout 2 http://192.168.1.100:31732/\n+ echo\n+ curl -q -s --connect-timeout 2 http://192.168.1.100:31732/\n+ echo\n+ curl -q -s --connect-timeout 2 http://192.168.1.100:31732/\n+ echo\n+ curl -q -s --connect-timeout 2 http://192.168.1.100:31732/\n+ echo\n+ curl -q -s --connect-timeout 2 http://192.168.1.100:31732/\n+ echo\n+ curl -q -s --connect-timeout 2 http://192.168.1.100:31732/\n+ echo\n+ curl -q -s --connect-timeout 2 http://192.168.1.100:31732/\n+ echo\n+ curl -q -s --connect-timeout 2 http://192.168.1.100:31732/\n+ echo\n+ curl -q -s --connect-timeout 2 http://192.168.1.100:31732/\n+ echo\n+ curl -q -s --connect-timeout 2 http://192.168.1.100:31732/\n+ echo\n+ curl -q -s --connect-timeout 2 
http://192.168.1.100:31732/\n+ echo\n+ curl -q -s --connect-timeout 2 http://192.168.1.100:31732/\n" +Mar 7 03:52:12.105: INFO: stdout: "\naffinity-nodeport-transition-4b5pt\naffinity-nodeport-transition-4b5pt\naffinity-nodeport-transition-4b5pt\naffinity-nodeport-transition-4b5pt\naffinity-nodeport-transition-4b5pt\naffinity-nodeport-transition-4b5pt\naffinity-nodeport-transition-4b5pt\naffinity-nodeport-transition-4b5pt\naffinity-nodeport-transition-4b5pt\naffinity-nodeport-transition-4b5pt\naffinity-nodeport-transition-4b5pt\naffinity-nodeport-transition-4b5pt\naffinity-nodeport-transition-4b5pt\naffinity-nodeport-transition-4b5pt\naffinity-nodeport-transition-4b5pt\naffinity-nodeport-transition-4b5pt" +Mar 7 03:52:12.105: INFO: Received response from host: affinity-nodeport-transition-4b5pt +Mar 7 03:52:12.105: INFO: Received response from host: affinity-nodeport-transition-4b5pt +Mar 7 03:52:12.105: INFO: Received response from host: affinity-nodeport-transition-4b5pt +Mar 7 03:52:12.105: INFO: Received response from host: affinity-nodeport-transition-4b5pt +Mar 7 03:52:12.105: INFO: Received response from host: affinity-nodeport-transition-4b5pt +Mar 7 03:52:12.105: INFO: Received response from host: affinity-nodeport-transition-4b5pt +Mar 7 03:52:12.105: INFO: Received response from host: affinity-nodeport-transition-4b5pt +Mar 7 03:52:12.105: INFO: Received response from host: affinity-nodeport-transition-4b5pt +Mar 7 03:52:12.105: INFO: Received response from host: affinity-nodeport-transition-4b5pt +Mar 7 03:52:12.105: INFO: Received response from host: affinity-nodeport-transition-4b5pt +Mar 7 03:52:12.105: INFO: Received response from host: affinity-nodeport-transition-4b5pt +Mar 7 03:52:12.105: INFO: Received response from host: affinity-nodeport-transition-4b5pt +Mar 7 03:52:12.105: INFO: Received response from host: affinity-nodeport-transition-4b5pt +Mar 7 03:52:12.105: INFO: Received response from host: affinity-nodeport-transition-4b5pt +Mar 7 03:52:12.105: INFO: Received response from host: affinity-nodeport-transition-4b5pt +Mar 7 03:52:12.105: INFO: Received response from host: affinity-nodeport-transition-4b5pt +Mar 7 03:52:12.105: INFO: Cleaning up the exec pod +STEP: deleting ReplicationController affinity-nodeport-transition in namespace services-5425, will wait for the garbage collector to delete the pods 03/07/23 03:52:12.121 +Mar 7 03:52:12.180: INFO: Deleting ReplicationController affinity-nodeport-transition took: 6.142228ms +Mar 7 03:52:12.281: INFO: Terminating ReplicationController affinity-nodeport-transition pods took: 100.991462ms +[AfterEach] [sig-network] Services + test/e2e/framework/framework.go:187 +Mar 7 03:52:14.708: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "services-5425" for this suite. 
03/07/23 03:52:14.711 +[AfterEach] [sig-network] Services + test/e2e/network/service.go:762 +{"msg":"PASSED [sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","completed":316,"skipped":5848,"failed":0} +------------------------------ +• [SLOW TEST] [9.988 seconds] +[sig-network] Services +test/e2e/network/common/framework.go:23 + should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance] + test/e2e/network/service.go:2237 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-network] Services + test/e2e/framework/framework.go:186 + STEP: Creating a kubernetes client 03/07/23 03:52:04.728 + Mar 7 03:52:04.728: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 + STEP: Building a namespace api object, basename services 03/07/23 03:52:04.73 + STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 03:52:04.746 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 03:52:04.748 + [BeforeEach] [sig-network] Services + test/e2e/network/service.go:758 + [It] should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance] + test/e2e/network/service.go:2237 + STEP: creating service in namespace services-5425 03/07/23 03:52:04.75 + STEP: creating service affinity-nodeport-transition in namespace services-5425 03/07/23 03:52:04.75 + STEP: creating replication controller affinity-nodeport-transition in namespace services-5425 03/07/23 03:52:04.765 + I0307 03:52:04.770803 22 runners.go:193] Created replication controller with name: affinity-nodeport-transition, namespace: services-5425, replica count: 3 + I0307 03:52:07.822374 22 runners.go:193] affinity-nodeport-transition Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady + Mar 7 03:52:07.829: INFO: Creating new exec pod + Mar 7 03:52:07.835: INFO: Waiting up to 5m0s for pod "execpod-affinity5v5l6" in namespace "services-5425" to be "running" + Mar 7 03:52:07.838: INFO: Pod "execpod-affinity5v5l6": Phase="Pending", Reason="", readiness=false. Elapsed: 3.238451ms + Mar 7 03:52:09.848: INFO: Pod "execpod-affinity5v5l6": Phase="Running", Reason="", readiness=true. 
Elapsed: 2.013220938s + Mar 7 03:52:09.848: INFO: Pod "execpod-affinity5v5l6" satisfied condition "running" + Mar 7 03:52:10.853: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1093879902 --namespace=services-5425 exec execpod-affinity5v5l6 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport-transition 80' + Mar 7 03:52:11.034: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 affinity-nodeport-transition 80\nConnection to affinity-nodeport-transition 80 port [tcp/http] succeeded!\n" + Mar 7 03:52:11.034: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request" + Mar 7 03:52:11.034: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1093879902 --namespace=services-5425 exec execpod-affinity5v5l6 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.106.123.161 80' + Mar 7 03:52:11.217: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 10.106.123.161 80\nConnection to 10.106.123.161 80 port [tcp/http] succeeded!\n" + Mar 7 03:52:11.217: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request" + Mar 7 03:52:11.217: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1093879902 --namespace=services-5425 exec execpod-affinity5v5l6 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 192.168.1.100 31732' + Mar 7 03:52:11.399: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 192.168.1.100 31732\nConnection to 192.168.1.100 31732 port [tcp/*] succeeded!\n" + Mar 7 03:52:11.399: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request" + Mar 7 03:52:11.399: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1093879902 --namespace=services-5425 exec execpod-affinity5v5l6 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 192.168.1.101 31732' + Mar 7 03:52:11.597: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 192.168.1.101 31732\nConnection to 192.168.1.101 31732 port [tcp/*] succeeded!\n" + Mar 7 03:52:11.597: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request" + Mar 7 03:52:11.604: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1093879902 --namespace=services-5425 exec execpod-affinity5v5l6 -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://192.168.1.100:31732/ ; done' + Mar 7 03:52:11.854: INFO: stderr: "+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://192.168.1.100:31732/\n+ echo\n+ curl -q -s --connect-timeout 2 http://192.168.1.100:31732/\n+ echo\n+ curl -q -s --connect-timeout 2 http://192.168.1.100:31732/\n+ echo\n+ curl -q -s --connect-timeout 2 http://192.168.1.100:31732/\n+ echo\n+ curl -q -s --connect-timeout 2 http://192.168.1.100:31732/\n+ echo\n+ curl -q -s --connect-timeout 2 http://192.168.1.100:31732/\n+ echo\n+ curl -q -s --connect-timeout 2 http://192.168.1.100:31732/\n+ echo\n+ curl -q -s --connect-timeout 2 http://192.168.1.100:31732/\n+ echo\n+ curl -q -s --connect-timeout 2 http://192.168.1.100:31732/\n+ echo\n+ curl -q -s --connect-timeout 2 http://192.168.1.100:31732/\n+ echo\n+ curl -q -s --connect-timeout 2 http://192.168.1.100:31732/\n+ echo\n+ curl -q -s --connect-timeout 2 http://192.168.1.100:31732/\n+ echo\n+ curl -q -s --connect-timeout 2 http://192.168.1.100:31732/\n+ echo\n+ curl -q -s --connect-timeout 2 http://192.168.1.100:31732/\n+ echo\n+ curl -q -s 
--connect-timeout 2 http://192.168.1.100:31732/\n+ echo\n+ curl -q -s --connect-timeout 2 http://192.168.1.100:31732/\n" + Mar 7 03:52:11.854: INFO: stdout: "\naffinity-nodeport-transition-wvwz8\naffinity-nodeport-transition-wvwz8\naffinity-nodeport-transition-4b5pt\naffinity-nodeport-transition-sb9lv\naffinity-nodeport-transition-sb9lv\naffinity-nodeport-transition-wvwz8\naffinity-nodeport-transition-4b5pt\naffinity-nodeport-transition-wvwz8\naffinity-nodeport-transition-4b5pt\naffinity-nodeport-transition-sb9lv\naffinity-nodeport-transition-4b5pt\naffinity-nodeport-transition-wvwz8\naffinity-nodeport-transition-wvwz8\naffinity-nodeport-transition-4b5pt\naffinity-nodeport-transition-4b5pt\naffinity-nodeport-transition-sb9lv" + Mar 7 03:52:11.854: INFO: Received response from host: affinity-nodeport-transition-wvwz8 + Mar 7 03:52:11.854: INFO: Received response from host: affinity-nodeport-transition-wvwz8 + Mar 7 03:52:11.854: INFO: Received response from host: affinity-nodeport-transition-4b5pt + Mar 7 03:52:11.854: INFO: Received response from host: affinity-nodeport-transition-sb9lv + Mar 7 03:52:11.854: INFO: Received response from host: affinity-nodeport-transition-sb9lv + Mar 7 03:52:11.854: INFO: Received response from host: affinity-nodeport-transition-wvwz8 + Mar 7 03:52:11.854: INFO: Received response from host: affinity-nodeport-transition-4b5pt + Mar 7 03:52:11.854: INFO: Received response from host: affinity-nodeport-transition-wvwz8 + Mar 7 03:52:11.854: INFO: Received response from host: affinity-nodeport-transition-4b5pt + Mar 7 03:52:11.854: INFO: Received response from host: affinity-nodeport-transition-sb9lv + Mar 7 03:52:11.854: INFO: Received response from host: affinity-nodeport-transition-4b5pt + Mar 7 03:52:11.854: INFO: Received response from host: affinity-nodeport-transition-wvwz8 + Mar 7 03:52:11.854: INFO: Received response from host: affinity-nodeport-transition-wvwz8 + Mar 7 03:52:11.854: INFO: Received response from host: affinity-nodeport-transition-4b5pt + Mar 7 03:52:11.854: INFO: Received response from host: affinity-nodeport-transition-4b5pt + Mar 7 03:52:11.854: INFO: Received response from host: affinity-nodeport-transition-sb9lv + Mar 7 03:52:11.861: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1093879902 --namespace=services-5425 exec execpod-affinity5v5l6 -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://192.168.1.100:31732/ ; done' + Mar 7 03:52:12.105: INFO: stderr: "+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://192.168.1.100:31732/\n+ echo\n+ curl -q -s --connect-timeout 2 http://192.168.1.100:31732/\n+ echo\n+ curl -q -s --connect-timeout 2 http://192.168.1.100:31732/\n+ echo\n+ curl -q -s --connect-timeout 2 http://192.168.1.100:31732/\n+ echo\n+ curl -q -s --connect-timeout 2 http://192.168.1.100:31732/\n+ echo\n+ curl -q -s --connect-timeout 2 http://192.168.1.100:31732/\n+ echo\n+ curl -q -s --connect-timeout 2 http://192.168.1.100:31732/\n+ echo\n+ curl -q -s --connect-timeout 2 http://192.168.1.100:31732/\n+ echo\n+ curl -q -s --connect-timeout 2 http://192.168.1.100:31732/\n+ echo\n+ curl -q -s --connect-timeout 2 http://192.168.1.100:31732/\n+ echo\n+ curl -q -s --connect-timeout 2 http://192.168.1.100:31732/\n+ echo\n+ curl -q -s --connect-timeout 2 http://192.168.1.100:31732/\n+ echo\n+ curl -q -s --connect-timeout 2 http://192.168.1.100:31732/\n+ echo\n+ curl -q -s --connect-timeout 2 http://192.168.1.100:31732/\n+ echo\n+ curl -q -s --connect-timeout 2 
http://192.168.1.100:31732/\n+ echo\n+ curl -q -s --connect-timeout 2 http://192.168.1.100:31732/\n" + Mar 7 03:52:12.105: INFO: stdout: "\naffinity-nodeport-transition-4b5pt\naffinity-nodeport-transition-4b5pt\naffinity-nodeport-transition-4b5pt\naffinity-nodeport-transition-4b5pt\naffinity-nodeport-transition-4b5pt\naffinity-nodeport-transition-4b5pt\naffinity-nodeport-transition-4b5pt\naffinity-nodeport-transition-4b5pt\naffinity-nodeport-transition-4b5pt\naffinity-nodeport-transition-4b5pt\naffinity-nodeport-transition-4b5pt\naffinity-nodeport-transition-4b5pt\naffinity-nodeport-transition-4b5pt\naffinity-nodeport-transition-4b5pt\naffinity-nodeport-transition-4b5pt\naffinity-nodeport-transition-4b5pt" + Mar 7 03:52:12.105: INFO: Received response from host: affinity-nodeport-transition-4b5pt + Mar 7 03:52:12.105: INFO: Received response from host: affinity-nodeport-transition-4b5pt + Mar 7 03:52:12.105: INFO: Received response from host: affinity-nodeport-transition-4b5pt + Mar 7 03:52:12.105: INFO: Received response from host: affinity-nodeport-transition-4b5pt + Mar 7 03:52:12.105: INFO: Received response from host: affinity-nodeport-transition-4b5pt + Mar 7 03:52:12.105: INFO: Received response from host: affinity-nodeport-transition-4b5pt + Mar 7 03:52:12.105: INFO: Received response from host: affinity-nodeport-transition-4b5pt + Mar 7 03:52:12.105: INFO: Received response from host: affinity-nodeport-transition-4b5pt + Mar 7 03:52:12.105: INFO: Received response from host: affinity-nodeport-transition-4b5pt + Mar 7 03:52:12.105: INFO: Received response from host: affinity-nodeport-transition-4b5pt + Mar 7 03:52:12.105: INFO: Received response from host: affinity-nodeport-transition-4b5pt + Mar 7 03:52:12.105: INFO: Received response from host: affinity-nodeport-transition-4b5pt + Mar 7 03:52:12.105: INFO: Received response from host: affinity-nodeport-transition-4b5pt + Mar 7 03:52:12.105: INFO: Received response from host: affinity-nodeport-transition-4b5pt + Mar 7 03:52:12.105: INFO: Received response from host: affinity-nodeport-transition-4b5pt + Mar 7 03:52:12.105: INFO: Received response from host: affinity-nodeport-transition-4b5pt + Mar 7 03:52:12.105: INFO: Cleaning up the exec pod + STEP: deleting ReplicationController affinity-nodeport-transition in namespace services-5425, will wait for the garbage collector to delete the pods 03/07/23 03:52:12.121 + Mar 7 03:52:12.180: INFO: Deleting ReplicationController affinity-nodeport-transition took: 6.142228ms + Mar 7 03:52:12.281: INFO: Terminating ReplicationController affinity-nodeport-transition pods took: 100.991462ms + [AfterEach] [sig-network] Services + test/e2e/framework/framework.go:187 + Mar 7 03:52:14.708: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + STEP: Destroying namespace "services-5425" for this suite. 
03/07/23 03:52:14.711 + [AfterEach] [sig-network] Services + test/e2e/network/service.go:762 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-apps] Daemon set [Serial] + should run and stop complex daemon [Conformance] + test/e2e/apps/daemon_set.go:193 +[BeforeEach] [sig-apps] Daemon set [Serial] + test/e2e/framework/framework.go:186 +STEP: Creating a kubernetes client 03/07/23 03:52:14.718 +Mar 7 03:52:14.718: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 +STEP: Building a namespace api object, basename daemonsets 03/07/23 03:52:14.719 +STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 03:52:14.732 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 03:52:14.734 +[BeforeEach] [sig-apps] Daemon set [Serial] + test/e2e/apps/daemon_set.go:145 +[It] should run and stop complex daemon [Conformance] + test/e2e/apps/daemon_set.go:193 +Mar 7 03:52:14.750: INFO: Creating daemon "daemon-set" with a node selector +STEP: Initially, daemon pods should not be running on any nodes. 03/07/23 03:52:14.753 +Mar 7 03:52:14.755: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 +Mar 7 03:52:14.756: INFO: Number of running nodes: 0, number of available pods: 0 in daemonset daemon-set +STEP: Change node label to blue, check that daemon pod is launched. 03/07/23 03:52:14.756 +Mar 7 03:52:14.777: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 +Mar 7 03:52:14.777: INFO: Node node-2 is running 0 daemon pod, expected 1 +Mar 7 03:52:15.781: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 1 +Mar 7 03:52:15.781: INFO: Number of running nodes: 1, number of available pods: 1 in daemonset daemon-set +STEP: Update the node label to green, and wait for daemons to be unscheduled 03/07/23 03:52:15.783 +Mar 7 03:52:15.798: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 1 +Mar 7 03:52:15.798: INFO: Number of running nodes: 0, number of available pods: 1 in daemonset daemon-set +Mar 7 03:52:16.801: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 +Mar 7 03:52:16.801: INFO: Number of running nodes: 0, number of available pods: 0 in daemonset daemon-set +STEP: Update DaemonSet node selector to green, and change its update strategy to RollingUpdate 03/07/23 03:52:16.801 +Mar 7 03:52:16.809: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 +Mar 7 03:52:16.809: INFO: Node node-2 is running 0 daemon pod, expected 1 +Mar 7 03:52:17.813: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 +Mar 7 03:52:17.813: INFO: Node node-2 is running 0 daemon pod, expected 1 +Mar 7 03:52:18.812: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 +Mar 7 03:52:18.812: INFO: Node node-2 is running 0 daemon pod, expected 1 +Mar 7 03:52:19.815: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 1 +Mar 7 03:52:19.815: INFO: Number of running nodes: 1, number of available pods: 1 in daemonset daemon-set +[AfterEach] [sig-apps] Daemon set [Serial] + test/e2e/apps/daemon_set.go:110 +STEP: Deleting DaemonSet "daemon-set" 03/07/23 03:52:19.821 +STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-3296, will wait for the garbage collector to delete the pods 03/07/23 03:52:19.821 +Mar 
7 03:52:19.879: INFO: Deleting DaemonSet.extensions daemon-set took: 4.80746ms +Mar 7 03:52:19.980: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.470655ms +Mar 7 03:52:22.683: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 +Mar 7 03:52:22.683: INFO: Number of running nodes: 0, number of available pods: 0 in daemonset daemon-set +Mar 7 03:52:22.684: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"resourceVersion":"74822"},"items":null} + +Mar 7 03:52:22.686: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"74822"},"items":null} + +[AfterEach] [sig-apps] Daemon set [Serial] + test/e2e/framework/framework.go:187 +Mar 7 03:52:22.708: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "daemonsets-3296" for this suite. 03/07/23 03:52:22.713 +{"msg":"PASSED [sig-apps] Daemon set [Serial] should run and stop complex daemon [Conformance]","completed":317,"skipped":5923,"failed":0} +------------------------------ +• [SLOW TEST] [7.999 seconds] +[sig-apps] Daemon set [Serial] +test/e2e/apps/framework.go:23 + should run and stop complex daemon [Conformance] + test/e2e/apps/daemon_set.go:193 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-apps] Daemon set [Serial] + test/e2e/framework/framework.go:186 + STEP: Creating a kubernetes client 03/07/23 03:52:14.718 + Mar 7 03:52:14.718: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 + STEP: Building a namespace api object, basename daemonsets 03/07/23 03:52:14.719 + STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 03:52:14.732 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 03:52:14.734 + [BeforeEach] [sig-apps] Daemon set [Serial] + test/e2e/apps/daemon_set.go:145 + [It] should run and stop complex daemon [Conformance] + test/e2e/apps/daemon_set.go:193 + Mar 7 03:52:14.750: INFO: Creating daemon "daemon-set" with a node selector + STEP: Initially, daemon pods should not be running on any nodes. 03/07/23 03:52:14.753 + Mar 7 03:52:14.755: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 + Mar 7 03:52:14.756: INFO: Number of running nodes: 0, number of available pods: 0 in daemonset daemon-set + STEP: Change node label to blue, check that daemon pod is launched. 
03/07/23 03:52:14.756 + Mar 7 03:52:14.777: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 + Mar 7 03:52:14.777: INFO: Node node-2 is running 0 daemon pod, expected 1 + Mar 7 03:52:15.781: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 1 + Mar 7 03:52:15.781: INFO: Number of running nodes: 1, number of available pods: 1 in daemonset daemon-set + STEP: Update the node label to green, and wait for daemons to be unscheduled 03/07/23 03:52:15.783 + Mar 7 03:52:15.798: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 1 + Mar 7 03:52:15.798: INFO: Number of running nodes: 0, number of available pods: 1 in daemonset daemon-set + Mar 7 03:52:16.801: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 + Mar 7 03:52:16.801: INFO: Number of running nodes: 0, number of available pods: 0 in daemonset daemon-set + STEP: Update DaemonSet node selector to green, and change its update strategy to RollingUpdate 03/07/23 03:52:16.801 + Mar 7 03:52:16.809: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 + Mar 7 03:52:16.809: INFO: Node node-2 is running 0 daemon pod, expected 1 + Mar 7 03:52:17.813: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 + Mar 7 03:52:17.813: INFO: Node node-2 is running 0 daemon pod, expected 1 + Mar 7 03:52:18.812: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 + Mar 7 03:52:18.812: INFO: Node node-2 is running 0 daemon pod, expected 1 + Mar 7 03:52:19.815: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 1 + Mar 7 03:52:19.815: INFO: Number of running nodes: 1, number of available pods: 1 in daemonset daemon-set + [AfterEach] [sig-apps] Daemon set [Serial] + test/e2e/apps/daemon_set.go:110 + STEP: Deleting DaemonSet "daemon-set" 03/07/23 03:52:19.821 + STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-3296, will wait for the garbage collector to delete the pods 03/07/23 03:52:19.821 + Mar 7 03:52:19.879: INFO: Deleting DaemonSet.extensions daemon-set took: 4.80746ms + Mar 7 03:52:19.980: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.470655ms + Mar 7 03:52:22.683: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 + Mar 7 03:52:22.683: INFO: Number of running nodes: 0, number of available pods: 0 in daemonset daemon-set + Mar 7 03:52:22.684: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"resourceVersion":"74822"},"items":null} + + Mar 7 03:52:22.686: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"74822"},"items":null} + + [AfterEach] [sig-apps] Daemon set [Serial] + test/e2e/framework/framework.go:187 + Mar 7 03:52:22.708: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + STEP: Destroying namespace "daemonsets-3296" for this suite. 
03/07/23 03:52:22.713 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSS +------------------------------ +[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + patching/updating a mutating webhook should work [Conformance] + test/e2e/apimachinery/webhook.go:507 +[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/framework/framework.go:186 +STEP: Creating a kubernetes client 03/07/23 03:52:22.718 +Mar 7 03:52:22.718: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 +STEP: Building a namespace api object, basename webhook 03/07/23 03:52:22.719 +STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 03:52:22.735 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 03:52:22.736 +[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/apimachinery/webhook.go:89 +STEP: Setting up server cert 03/07/23 03:52:22.748 +STEP: Create role binding to let webhook read extension-apiserver-authentication 03/07/23 03:52:23.514 +STEP: Deploying the webhook pod 03/07/23 03:52:23.521 +STEP: Wait for the deployment to be ready 03/07/23 03:52:23.529 +Mar 7 03:52:23.540: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set +STEP: Deploying the webhook service 03/07/23 03:52:25.548 +STEP: Verifying the service has paired with the endpoint 03/07/23 03:52:25.579 +Mar 7 03:52:26.579: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 +[It] patching/updating a mutating webhook should work [Conformance] + test/e2e/apimachinery/webhook.go:507 +STEP: Creating a mutating webhook configuration 03/07/23 03:52:26.582 +STEP: Updating a mutating webhook configuration's rules to not include the create operation 03/07/23 03:52:26.608 +STEP: Creating a configMap that should not be mutated 03/07/23 03:52:26.612 +STEP: Patching a mutating webhook configuration's rules to include the create operation 03/07/23 03:52:26.62 +STEP: Creating a configMap that should be mutated 03/07/23 03:52:26.625 +[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/framework/framework.go:187 +Mar 7 03:52:26.642: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "webhook-2543" for this suite. 03/07/23 03:52:26.645 +STEP: Destroying namespace "webhook-2543-markers" for this suite. 
03/07/23 03:52:26.649 +[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/apimachinery/webhook.go:104 +{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]","completed":318,"skipped":5932,"failed":0} +------------------------------ +• [3.989 seconds] +[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] +test/e2e/apimachinery/framework.go:23 + patching/updating a mutating webhook should work [Conformance] + test/e2e/apimachinery/webhook.go:507 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/framework/framework.go:186 + STEP: Creating a kubernetes client 03/07/23 03:52:22.718 + Mar 7 03:52:22.718: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 + STEP: Building a namespace api object, basename webhook 03/07/23 03:52:22.719 + STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 03:52:22.735 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 03:52:22.736 + [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/apimachinery/webhook.go:89 + STEP: Setting up server cert 03/07/23 03:52:22.748 + STEP: Create role binding to let webhook read extension-apiserver-authentication 03/07/23 03:52:23.514 + STEP: Deploying the webhook pod 03/07/23 03:52:23.521 + STEP: Wait for the deployment to be ready 03/07/23 03:52:23.529 + Mar 7 03:52:23.540: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set + STEP: Deploying the webhook service 03/07/23 03:52:25.548 + STEP: Verifying the service has paired with the endpoint 03/07/23 03:52:25.579 + Mar 7 03:52:26.579: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 + [It] patching/updating a mutating webhook should work [Conformance] + test/e2e/apimachinery/webhook.go:507 + STEP: Creating a mutating webhook configuration 03/07/23 03:52:26.582 + STEP: Updating a mutating webhook configuration's rules to not include the create operation 03/07/23 03:52:26.608 + STEP: Creating a configMap that should not be mutated 03/07/23 03:52:26.612 + STEP: Patching a mutating webhook configuration's rules to include the create operation 03/07/23 03:52:26.62 + STEP: Creating a configMap that should be mutated 03/07/23 03:52:26.625 + [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/framework/framework.go:187 + Mar 7 03:52:26.642: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + STEP: Destroying namespace "webhook-2543" for this suite. 03/07/23 03:52:26.645 + STEP: Destroying namespace "webhook-2543-markers" for this suite. 
03/07/23 03:52:26.649 + [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/apimachinery/webhook.go:104 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSS +------------------------------ +[sig-network] Proxy version v1 + A set of valid responses are returned for both pod and service Proxy [Conformance] + test/e2e/network/proxy.go:380 +[BeforeEach] version v1 + test/e2e/framework/framework.go:186 +STEP: Creating a kubernetes client 03/07/23 03:52:26.707 +Mar 7 03:52:26.707: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 +STEP: Building a namespace api object, basename proxy 03/07/23 03:52:26.708 +STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 03:52:26.732 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 03:52:26.735 +[It] A set of valid responses are returned for both pod and service Proxy [Conformance] + test/e2e/network/proxy.go:380 +Mar 7 03:52:26.739: INFO: Creating pod... +Mar 7 03:52:26.746: INFO: Waiting up to 5m0s for pod "agnhost" in namespace "proxy-5599" to be "running" +Mar 7 03:52:26.749: INFO: Pod "agnhost": Phase="Pending", Reason="", readiness=false. Elapsed: 2.27783ms +Mar 7 03:52:28.752: INFO: Pod "agnhost": Phase="Running", Reason="", readiness=true. Elapsed: 2.005915997s +Mar 7 03:52:28.752: INFO: Pod "agnhost" satisfied condition "running" +Mar 7 03:52:28.752: INFO: Creating service... +Mar 7 03:52:28.767: INFO: Starting http.Client for https://10.96.0.1:443/api/v1/namespaces/proxy-5599/pods/agnhost/proxy?method=DELETE +Mar 7 03:52:28.773: INFO: http.Client request:DELETE | StatusCode:200 | Response:foo | Method:DELETE +Mar 7 03:52:28.773: INFO: Starting http.Client for https://10.96.0.1:443/api/v1/namespaces/proxy-5599/pods/agnhost/proxy?method=OPTIONS +Mar 7 03:52:28.777: INFO: http.Client request:OPTIONS | StatusCode:200 | Response:foo | Method:OPTIONS +Mar 7 03:52:28.777: INFO: Starting http.Client for https://10.96.0.1:443/api/v1/namespaces/proxy-5599/pods/agnhost/proxy?method=PATCH +Mar 7 03:52:28.814: INFO: http.Client request:PATCH | StatusCode:200 | Response:foo | Method:PATCH +Mar 7 03:52:28.814: INFO: Starting http.Client for https://10.96.0.1:443/api/v1/namespaces/proxy-5599/pods/agnhost/proxy?method=POST +Mar 7 03:52:28.817: INFO: http.Client request:POST | StatusCode:200 | Response:foo | Method:POST +Mar 7 03:52:28.817: INFO: Starting http.Client for https://10.96.0.1:443/api/v1/namespaces/proxy-5599/pods/agnhost/proxy?method=PUT +Mar 7 03:52:28.832: INFO: http.Client request:PUT | StatusCode:200 | Response:foo | Method:PUT +Mar 7 03:52:28.832: INFO: Starting http.Client for https://10.96.0.1:443/api/v1/namespaces/proxy-5599/services/e2e-proxy-test-service/proxy?method=DELETE +Mar 7 03:52:28.838: INFO: http.Client request:DELETE | StatusCode:200 | Response:foo | Method:DELETE +Mar 7 03:52:28.838: INFO: Starting http.Client for https://10.96.0.1:443/api/v1/namespaces/proxy-5599/services/e2e-proxy-test-service/proxy?method=OPTIONS +Mar 7 03:52:28.842: INFO: http.Client request:OPTIONS | StatusCode:200 | Response:foo | Method:OPTIONS +Mar 7 03:52:28.842: INFO: Starting http.Client for https://10.96.0.1:443/api/v1/namespaces/proxy-5599/services/e2e-proxy-test-service/proxy?method=PATCH +Mar 7 03:52:28.846: INFO: http.Client request:PATCH | StatusCode:200 | Response:foo | Method:PATCH +Mar 7 03:52:28.846: INFO: Starting http.Client for 
https://10.96.0.1:443/api/v1/namespaces/proxy-5599/services/e2e-proxy-test-service/proxy?method=POST +Mar 7 03:52:28.854: INFO: http.Client request:POST | StatusCode:200 | Response:foo | Method:POST +Mar 7 03:52:28.854: INFO: Starting http.Client for https://10.96.0.1:443/api/v1/namespaces/proxy-5599/services/e2e-proxy-test-service/proxy?method=PUT +Mar 7 03:52:28.858: INFO: http.Client request:PUT | StatusCode:200 | Response:foo | Method:PUT +Mar 7 03:52:28.858: INFO: Starting http.Client for https://10.96.0.1:443/api/v1/namespaces/proxy-5599/pods/agnhost/proxy?method=GET +Mar 7 03:52:28.861: INFO: http.Client request:GET StatusCode:301 +Mar 7 03:52:28.861: INFO: Starting http.Client for https://10.96.0.1:443/api/v1/namespaces/proxy-5599/services/e2e-proxy-test-service/proxy?method=GET +Mar 7 03:52:28.864: INFO: http.Client request:GET StatusCode:301 +Mar 7 03:52:28.864: INFO: Starting http.Client for https://10.96.0.1:443/api/v1/namespaces/proxy-5599/pods/agnhost/proxy?method=HEAD +Mar 7 03:52:28.866: INFO: http.Client request:HEAD StatusCode:301 +Mar 7 03:52:28.866: INFO: Starting http.Client for https://10.96.0.1:443/api/v1/namespaces/proxy-5599/services/e2e-proxy-test-service/proxy?method=HEAD +Mar 7 03:52:28.870: INFO: http.Client request:HEAD StatusCode:301 +[AfterEach] version v1 + test/e2e/framework/framework.go:187 +Mar 7 03:52:28.870: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "proxy-5599" for this suite. 03/07/23 03:52:28.875 +{"msg":"PASSED [sig-network] Proxy version v1 A set of valid responses are returned for both pod and service Proxy [Conformance]","completed":319,"skipped":5942,"failed":0} +------------------------------ +• [2.172 seconds] +[sig-network] Proxy +test/e2e/network/common/framework.go:23 + version v1 + test/e2e/network/proxy.go:74 + A set of valid responses are returned for both pod and service Proxy [Conformance] + test/e2e/network/proxy.go:380 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] version v1 + test/e2e/framework/framework.go:186 + STEP: Creating a kubernetes client 03/07/23 03:52:26.707 + Mar 7 03:52:26.707: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 + STEP: Building a namespace api object, basename proxy 03/07/23 03:52:26.708 + STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 03:52:26.732 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 03:52:26.735 + [It] A set of valid responses are returned for both pod and service Proxy [Conformance] + test/e2e/network/proxy.go:380 + Mar 7 03:52:26.739: INFO: Creating pod... + Mar 7 03:52:26.746: INFO: Waiting up to 5m0s for pod "agnhost" in namespace "proxy-5599" to be "running" + Mar 7 03:52:26.749: INFO: Pod "agnhost": Phase="Pending", Reason="", readiness=false. Elapsed: 2.27783ms + Mar 7 03:52:28.752: INFO: Pod "agnhost": Phase="Running", Reason="", readiness=true. Elapsed: 2.005915997s + Mar 7 03:52:28.752: INFO: Pod "agnhost" satisfied condition "running" + Mar 7 03:52:28.752: INFO: Creating service... 
+ Mar 7 03:52:28.767: INFO: Starting http.Client for https://10.96.0.1:443/api/v1/namespaces/proxy-5599/pods/agnhost/proxy?method=DELETE + Mar 7 03:52:28.773: INFO: http.Client request:DELETE | StatusCode:200 | Response:foo | Method:DELETE + Mar 7 03:52:28.773: INFO: Starting http.Client for https://10.96.0.1:443/api/v1/namespaces/proxy-5599/pods/agnhost/proxy?method=OPTIONS + Mar 7 03:52:28.777: INFO: http.Client request:OPTIONS | StatusCode:200 | Response:foo | Method:OPTIONS + Mar 7 03:52:28.777: INFO: Starting http.Client for https://10.96.0.1:443/api/v1/namespaces/proxy-5599/pods/agnhost/proxy?method=PATCH + Mar 7 03:52:28.814: INFO: http.Client request:PATCH | StatusCode:200 | Response:foo | Method:PATCH + Mar 7 03:52:28.814: INFO: Starting http.Client for https://10.96.0.1:443/api/v1/namespaces/proxy-5599/pods/agnhost/proxy?method=POST + Mar 7 03:52:28.817: INFO: http.Client request:POST | StatusCode:200 | Response:foo | Method:POST + Mar 7 03:52:28.817: INFO: Starting http.Client for https://10.96.0.1:443/api/v1/namespaces/proxy-5599/pods/agnhost/proxy?method=PUT + Mar 7 03:52:28.832: INFO: http.Client request:PUT | StatusCode:200 | Response:foo | Method:PUT + Mar 7 03:52:28.832: INFO: Starting http.Client for https://10.96.0.1:443/api/v1/namespaces/proxy-5599/services/e2e-proxy-test-service/proxy?method=DELETE + Mar 7 03:52:28.838: INFO: http.Client request:DELETE | StatusCode:200 | Response:foo | Method:DELETE + Mar 7 03:52:28.838: INFO: Starting http.Client for https://10.96.0.1:443/api/v1/namespaces/proxy-5599/services/e2e-proxy-test-service/proxy?method=OPTIONS + Mar 7 03:52:28.842: INFO: http.Client request:OPTIONS | StatusCode:200 | Response:foo | Method:OPTIONS + Mar 7 03:52:28.842: INFO: Starting http.Client for https://10.96.0.1:443/api/v1/namespaces/proxy-5599/services/e2e-proxy-test-service/proxy?method=PATCH + Mar 7 03:52:28.846: INFO: http.Client request:PATCH | StatusCode:200 | Response:foo | Method:PATCH + Mar 7 03:52:28.846: INFO: Starting http.Client for https://10.96.0.1:443/api/v1/namespaces/proxy-5599/services/e2e-proxy-test-service/proxy?method=POST + Mar 7 03:52:28.854: INFO: http.Client request:POST | StatusCode:200 | Response:foo | Method:POST + Mar 7 03:52:28.854: INFO: Starting http.Client for https://10.96.0.1:443/api/v1/namespaces/proxy-5599/services/e2e-proxy-test-service/proxy?method=PUT + Mar 7 03:52:28.858: INFO: http.Client request:PUT | StatusCode:200 | Response:foo | Method:PUT + Mar 7 03:52:28.858: INFO: Starting http.Client for https://10.96.0.1:443/api/v1/namespaces/proxy-5599/pods/agnhost/proxy?method=GET + Mar 7 03:52:28.861: INFO: http.Client request:GET StatusCode:301 + Mar 7 03:52:28.861: INFO: Starting http.Client for https://10.96.0.1:443/api/v1/namespaces/proxy-5599/services/e2e-proxy-test-service/proxy?method=GET + Mar 7 03:52:28.864: INFO: http.Client request:GET StatusCode:301 + Mar 7 03:52:28.864: INFO: Starting http.Client for https://10.96.0.1:443/api/v1/namespaces/proxy-5599/pods/agnhost/proxy?method=HEAD + Mar 7 03:52:28.866: INFO: http.Client request:HEAD StatusCode:301 + Mar 7 03:52:28.866: INFO: Starting http.Client for https://10.96.0.1:443/api/v1/namespaces/proxy-5599/services/e2e-proxy-test-service/proxy?method=HEAD + Mar 7 03:52:28.870: INFO: http.Client request:HEAD StatusCode:301 + [AfterEach] version v1 + test/e2e/framework/framework.go:187 + Mar 7 03:52:28.870: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + STEP: Destroying namespace "proxy-5599" for this suite. 
03/07/23 03:52:28.875 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSSSSSSSS +------------------------------ +[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] + should include custom resource definition resources in discovery documents [Conformance] + test/e2e/apimachinery/custom_resource_definition.go:198 +[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] + test/e2e/framework/framework.go:186 +STEP: Creating a kubernetes client 03/07/23 03:52:28.88 +Mar 7 03:52:28.880: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 +STEP: Building a namespace api object, basename custom-resource-definition 03/07/23 03:52:28.88 +STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 03:52:28.91 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 03:52:28.913 +[It] should include custom resource definition resources in discovery documents [Conformance] + test/e2e/apimachinery/custom_resource_definition.go:198 +STEP: fetching the /apis discovery document 03/07/23 03:52:28.914 +STEP: finding the apiextensions.k8s.io API group in the /apis discovery document 03/07/23 03:52:28.915 +STEP: finding the apiextensions.k8s.io/v1 API group/version in the /apis discovery document 03/07/23 03:52:28.915 +STEP: fetching the /apis/apiextensions.k8s.io discovery document 03/07/23 03:52:28.915 +STEP: finding the apiextensions.k8s.io/v1 API group/version in the /apis/apiextensions.k8s.io discovery document 03/07/23 03:52:28.916 +STEP: fetching the /apis/apiextensions.k8s.io/v1 discovery document 03/07/23 03:52:28.916 +STEP: finding customresourcedefinitions resources in the /apis/apiextensions.k8s.io/v1 discovery document 03/07/23 03:52:28.917 +[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] + test/e2e/framework/framework.go:187 +Mar 7 03:52:28.917: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "custom-resource-definition-1253" for this suite. 
03/07/23 03:52:28.921 +{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] should include custom resource definition resources in discovery documents [Conformance]","completed":320,"skipped":5959,"failed":0} +------------------------------ +• [0.045 seconds] +[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] +test/e2e/apimachinery/framework.go:23 + should include custom resource definition resources in discovery documents [Conformance] + test/e2e/apimachinery/custom_resource_definition.go:198 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] + test/e2e/framework/framework.go:186 + STEP: Creating a kubernetes client 03/07/23 03:52:28.88 + Mar 7 03:52:28.880: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 + STEP: Building a namespace api object, basename custom-resource-definition 03/07/23 03:52:28.88 + STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 03:52:28.91 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 03:52:28.913 + [It] should include custom resource definition resources in discovery documents [Conformance] + test/e2e/apimachinery/custom_resource_definition.go:198 + STEP: fetching the /apis discovery document 03/07/23 03:52:28.914 + STEP: finding the apiextensions.k8s.io API group in the /apis discovery document 03/07/23 03:52:28.915 + STEP: finding the apiextensions.k8s.io/v1 API group/version in the /apis discovery document 03/07/23 03:52:28.915 + STEP: fetching the /apis/apiextensions.k8s.io discovery document 03/07/23 03:52:28.915 + STEP: finding the apiextensions.k8s.io/v1 API group/version in the /apis/apiextensions.k8s.io discovery document 03/07/23 03:52:28.916 + STEP: fetching the /apis/apiextensions.k8s.io/v1 discovery document 03/07/23 03:52:28.916 + STEP: finding customresourcedefinitions resources in the /apis/apiextensions.k8s.io/v1 discovery document 03/07/23 03:52:28.917 + [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] + test/e2e/framework/framework.go:187 + Mar 7 03:52:28.917: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + STEP: Destroying namespace "custom-resource-definition-1253" for this suite. 
03/07/23 03:52:28.921 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-storage] Projected secret + should be consumable from pods in volume [NodeConformance] [Conformance] + test/e2e/common/storage/projected_secret.go:45 +[BeforeEach] [sig-storage] Projected secret + test/e2e/framework/framework.go:186 +STEP: Creating a kubernetes client 03/07/23 03:52:28.928 +Mar 7 03:52:28.928: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 +STEP: Building a namespace api object, basename projected 03/07/23 03:52:28.929 +STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 03:52:28.943 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 03:52:28.945 +[It] should be consumable from pods in volume [NodeConformance] [Conformance] + test/e2e/common/storage/projected_secret.go:45 +STEP: Creating projection with secret that has name projected-secret-test-6c336d42-7530-40fb-8e37-87b13515888d 03/07/23 03:52:28.948 +STEP: Creating a pod to test consume secrets 03/07/23 03:52:28.955 +Mar 7 03:52:28.964: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-2332b432-68a2-4402-925f-70b02bda7f55" in namespace "projected-1933" to be "Succeeded or Failed" +Mar 7 03:52:28.967: INFO: Pod "pod-projected-secrets-2332b432-68a2-4402-925f-70b02bda7f55": Phase="Pending", Reason="", readiness=false. Elapsed: 2.946958ms +Mar 7 03:52:30.971: INFO: Pod "pod-projected-secrets-2332b432-68a2-4402-925f-70b02bda7f55": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007104704s +Mar 7 03:52:32.971: INFO: Pod "pod-projected-secrets-2332b432-68a2-4402-925f-70b02bda7f55": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.007012275s +STEP: Saw pod success 03/07/23 03:52:32.971 +Mar 7 03:52:32.971: INFO: Pod "pod-projected-secrets-2332b432-68a2-4402-925f-70b02bda7f55" satisfied condition "Succeeded or Failed" +Mar 7 03:52:32.973: INFO: Trying to get logs from node node-2 pod pod-projected-secrets-2332b432-68a2-4402-925f-70b02bda7f55 container projected-secret-volume-test: +STEP: delete the pod 03/07/23 03:52:32.987 +Mar 7 03:52:33.001: INFO: Waiting for pod pod-projected-secrets-2332b432-68a2-4402-925f-70b02bda7f55 to disappear +Mar 7 03:52:33.003: INFO: Pod pod-projected-secrets-2332b432-68a2-4402-925f-70b02bda7f55 no longer exists +[AfterEach] [sig-storage] Projected secret + test/e2e/framework/framework.go:187 +Mar 7 03:52:33.003: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "projected-1933" for this suite. 
03/07/23 03:52:33.007 +{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance]","completed":321,"skipped":5978,"failed":0} +------------------------------ +• [4.083 seconds] +[sig-storage] Projected secret +test/e2e/common/storage/framework.go:23 + should be consumable from pods in volume [NodeConformance] [Conformance] + test/e2e/common/storage/projected_secret.go:45 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-storage] Projected secret + test/e2e/framework/framework.go:186 + STEP: Creating a kubernetes client 03/07/23 03:52:28.928 + Mar 7 03:52:28.928: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 + STEP: Building a namespace api object, basename projected 03/07/23 03:52:28.929 + STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 03:52:28.943 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 03:52:28.945 + [It] should be consumable from pods in volume [NodeConformance] [Conformance] + test/e2e/common/storage/projected_secret.go:45 + STEP: Creating projection with secret that has name projected-secret-test-6c336d42-7530-40fb-8e37-87b13515888d 03/07/23 03:52:28.948 + STEP: Creating a pod to test consume secrets 03/07/23 03:52:28.955 + Mar 7 03:52:28.964: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-2332b432-68a2-4402-925f-70b02bda7f55" in namespace "projected-1933" to be "Succeeded or Failed" + Mar 7 03:52:28.967: INFO: Pod "pod-projected-secrets-2332b432-68a2-4402-925f-70b02bda7f55": Phase="Pending", Reason="", readiness=false. Elapsed: 2.946958ms + Mar 7 03:52:30.971: INFO: Pod "pod-projected-secrets-2332b432-68a2-4402-925f-70b02bda7f55": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007104704s + Mar 7 03:52:32.971: INFO: Pod "pod-projected-secrets-2332b432-68a2-4402-925f-70b02bda7f55": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.007012275s + STEP: Saw pod success 03/07/23 03:52:32.971 + Mar 7 03:52:32.971: INFO: Pod "pod-projected-secrets-2332b432-68a2-4402-925f-70b02bda7f55" satisfied condition "Succeeded or Failed" + Mar 7 03:52:32.973: INFO: Trying to get logs from node node-2 pod pod-projected-secrets-2332b432-68a2-4402-925f-70b02bda7f55 container projected-secret-volume-test: + STEP: delete the pod 03/07/23 03:52:32.987 + Mar 7 03:52:33.001: INFO: Waiting for pod pod-projected-secrets-2332b432-68a2-4402-925f-70b02bda7f55 to disappear + Mar 7 03:52:33.003: INFO: Pod pod-projected-secrets-2332b432-68a2-4402-925f-70b02bda7f55 no longer exists + [AfterEach] [sig-storage] Projected secret + test/e2e/framework/framework.go:187 + Mar 7 03:52:33.003: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + STEP: Destroying namespace "projected-1933" for this suite. 
03/07/23 03:52:33.007 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-apps] Daemon set [Serial] + should verify changes to a daemon set status [Conformance] + test/e2e/apps/daemon_set.go:861 +[BeforeEach] [sig-apps] Daemon set [Serial] + test/e2e/framework/framework.go:186 +STEP: Creating a kubernetes client 03/07/23 03:52:33.013 +Mar 7 03:52:33.013: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 +STEP: Building a namespace api object, basename daemonsets 03/07/23 03:52:33.014 +STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 03:52:33.027 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 03:52:33.029 +[BeforeEach] [sig-apps] Daemon set [Serial] + test/e2e/apps/daemon_set.go:145 +[It] should verify changes to a daemon set status [Conformance] + test/e2e/apps/daemon_set.go:861 +STEP: Creating simple DaemonSet "daemon-set" 03/07/23 03:52:33.05 +STEP: Check that daemon pods launch on every node of the cluster. 03/07/23 03:52:33.053 +Mar 7 03:52:33.059: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 +Mar 7 03:52:33.059: INFO: Node bootstrap is running 0 daemon pod, expected 1 +Mar 7 03:52:34.075: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 +Mar 7 03:52:34.075: INFO: Node bootstrap is running 0 daemon pod, expected 1 +Mar 7 03:52:35.067: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 2 +Mar 7 03:52:35.067: INFO: Node node-1 is running 0 daemon pod, expected 1 +Mar 7 03:52:36.066: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 3 +Mar 7 03:52:36.066: INFO: Number of running nodes: 3, number of available pods: 3 in daemonset daemon-set +STEP: Getting /status 03/07/23 03:52:36.068 +Mar 7 03:52:36.071: INFO: Daemon Set daemon-set has Conditions: [] +STEP: updating the DaemonSet Status 03/07/23 03:52:36.071 +Mar 7 03:52:36.078: INFO: updatedStatus.Conditions: []v1.DaemonSetCondition{v1.DaemonSetCondition{Type:"StatusUpdate", Status:"True", LastTransitionTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Reason:"E2E", Message:"Set from e2e test"}} +STEP: watching for the daemon set status to be updated 03/07/23 03:52:36.078 +Mar 7 03:52:36.079: INFO: Observed &DaemonSet event: ADDED +Mar 7 03:52:36.079: INFO: Observed &DaemonSet event: MODIFIED +Mar 7 03:52:36.080: INFO: Observed &DaemonSet event: MODIFIED +Mar 7 03:52:36.080: INFO: Observed &DaemonSet event: MODIFIED +Mar 7 03:52:36.080: INFO: Observed &DaemonSet event: MODIFIED +Mar 7 03:52:36.080: INFO: Found daemon set daemon-set in namespace daemonsets-7368 with labels: map[daemonset-name:daemon-set] annotations: map[deprecated.daemonset.template.generation:1] & Conditions: [{StatusUpdate True 0001-01-01 00:00:00 +0000 UTC E2E Set from e2e test}] +Mar 7 03:52:36.080: INFO: Daemon set daemon-set has an updated status +STEP: patching the DaemonSet Status 03/07/23 03:52:36.08 +STEP: watching for the daemon set status to be patched 03/07/23 03:52:36.085 +Mar 7 03:52:36.086: INFO: Observed &DaemonSet event: ADDED +Mar 7 03:52:36.086: INFO: Observed &DaemonSet event: MODIFIED +Mar 7 03:52:36.086: INFO: Observed &DaemonSet event: MODIFIED +Mar 7 03:52:36.086: INFO: Observed &DaemonSet event: MODIFIED +Mar 7 03:52:36.086: INFO: Observed &DaemonSet event: MODIFIED +Mar 7 03:52:36.086: INFO: Observed daemon set daemon-set in namespace daemonsets-7368 
with annotations: map[deprecated.daemonset.template.generation:1] & Conditions: [{StatusUpdate True 0001-01-01 00:00:00 +0000 UTC E2E Set from e2e test}] +Mar 7 03:52:36.086: INFO: Observed &DaemonSet event: MODIFIED +Mar 7 03:52:36.086: INFO: Found daemon set daemon-set in namespace daemonsets-7368 with labels: map[daemonset-name:daemon-set] annotations: map[deprecated.daemonset.template.generation:1] & Conditions: [{StatusPatched True 0001-01-01 00:00:00 +0000 UTC }] +Mar 7 03:52:36.086: INFO: Daemon set daemon-set has a patched status +[AfterEach] [sig-apps] Daemon set [Serial] + test/e2e/apps/daemon_set.go:110 +STEP: Deleting DaemonSet "daemon-set" 03/07/23 03:52:36.088 +STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-7368, will wait for the garbage collector to delete the pods 03/07/23 03:52:36.089 +Mar 7 03:52:36.147: INFO: Deleting DaemonSet.extensions daemon-set took: 5.330951ms +Mar 7 03:52:36.248: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.722531ms +Mar 7 03:52:38.152: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 +Mar 7 03:52:38.152: INFO: Number of running nodes: 0, number of available pods: 0 in daemonset daemon-set +Mar 7 03:52:38.154: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"resourceVersion":"75124"},"items":null} + +Mar 7 03:52:38.156: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"75124"},"items":null} + +[AfterEach] [sig-apps] Daemon set [Serial] + test/e2e/framework/framework.go:187 +Mar 7 03:52:38.166: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "daemonsets-7368" for this suite. 03/07/23 03:52:38.169 +{"msg":"PASSED [sig-apps] Daemon set [Serial] should verify changes to a daemon set status [Conformance]","completed":322,"skipped":6025,"failed":0} +------------------------------ +• [SLOW TEST] [5.161 seconds] +[sig-apps] Daemon set [Serial] +test/e2e/apps/framework.go:23 + should verify changes to a daemon set status [Conformance] + test/e2e/apps/daemon_set.go:861 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-apps] Daemon set [Serial] + test/e2e/framework/framework.go:186 + STEP: Creating a kubernetes client 03/07/23 03:52:33.013 + Mar 7 03:52:33.013: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 + STEP: Building a namespace api object, basename daemonsets 03/07/23 03:52:33.014 + STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 03:52:33.027 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 03:52:33.029 + [BeforeEach] [sig-apps] Daemon set [Serial] + test/e2e/apps/daemon_set.go:145 + [It] should verify changes to a daemon set status [Conformance] + test/e2e/apps/daemon_set.go:861 + STEP: Creating simple DaemonSet "daemon-set" 03/07/23 03:52:33.05 + STEP: Check that daemon pods launch on every node of the cluster. 
03/07/23 03:52:33.053 + Mar 7 03:52:33.059: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 + Mar 7 03:52:33.059: INFO: Node bootstrap is running 0 daemon pod, expected 1 + Mar 7 03:52:34.075: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 + Mar 7 03:52:34.075: INFO: Node bootstrap is running 0 daemon pod, expected 1 + Mar 7 03:52:35.067: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 2 + Mar 7 03:52:35.067: INFO: Node node-1 is running 0 daemon pod, expected 1 + Mar 7 03:52:36.066: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 3 + Mar 7 03:52:36.066: INFO: Number of running nodes: 3, number of available pods: 3 in daemonset daemon-set + STEP: Getting /status 03/07/23 03:52:36.068 + Mar 7 03:52:36.071: INFO: Daemon Set daemon-set has Conditions: [] + STEP: updating the DaemonSet Status 03/07/23 03:52:36.071 + Mar 7 03:52:36.078: INFO: updatedStatus.Conditions: []v1.DaemonSetCondition{v1.DaemonSetCondition{Type:"StatusUpdate", Status:"True", LastTransitionTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Reason:"E2E", Message:"Set from e2e test"}} + STEP: watching for the daemon set status to be updated 03/07/23 03:52:36.078 + Mar 7 03:52:36.079: INFO: Observed &DaemonSet event: ADDED + Mar 7 03:52:36.079: INFO: Observed &DaemonSet event: MODIFIED + Mar 7 03:52:36.080: INFO: Observed &DaemonSet event: MODIFIED + Mar 7 03:52:36.080: INFO: Observed &DaemonSet event: MODIFIED + Mar 7 03:52:36.080: INFO: Observed &DaemonSet event: MODIFIED + Mar 7 03:52:36.080: INFO: Found daemon set daemon-set in namespace daemonsets-7368 with labels: map[daemonset-name:daemon-set] annotations: map[deprecated.daemonset.template.generation:1] & Conditions: [{StatusUpdate True 0001-01-01 00:00:00 +0000 UTC E2E Set from e2e test}] + Mar 7 03:52:36.080: INFO: Daemon set daemon-set has an updated status + STEP: patching the DaemonSet Status 03/07/23 03:52:36.08 + STEP: watching for the daemon set status to be patched 03/07/23 03:52:36.085 + Mar 7 03:52:36.086: INFO: Observed &DaemonSet event: ADDED + Mar 7 03:52:36.086: INFO: Observed &DaemonSet event: MODIFIED + Mar 7 03:52:36.086: INFO: Observed &DaemonSet event: MODIFIED + Mar 7 03:52:36.086: INFO: Observed &DaemonSet event: MODIFIED + Mar 7 03:52:36.086: INFO: Observed &DaemonSet event: MODIFIED + Mar 7 03:52:36.086: INFO: Observed daemon set daemon-set in namespace daemonsets-7368 with annotations: map[deprecated.daemonset.template.generation:1] & Conditions: [{StatusUpdate True 0001-01-01 00:00:00 +0000 UTC E2E Set from e2e test}] + Mar 7 03:52:36.086: INFO: Observed &DaemonSet event: MODIFIED + Mar 7 03:52:36.086: INFO: Found daemon set daemon-set in namespace daemonsets-7368 with labels: map[daemonset-name:daemon-set] annotations: map[deprecated.daemonset.template.generation:1] & Conditions: [{StatusPatched True 0001-01-01 00:00:00 +0000 UTC }] + Mar 7 03:52:36.086: INFO: Daemon set daemon-set has a patched status + [AfterEach] [sig-apps] Daemon set [Serial] + test/e2e/apps/daemon_set.go:110 + STEP: Deleting DaemonSet "daemon-set" 03/07/23 03:52:36.088 + STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-7368, will wait for the garbage collector to delete the pods 03/07/23 03:52:36.089 + Mar 7 03:52:36.147: INFO: Deleting DaemonSet.extensions daemon-set took: 5.330951ms + Mar 7 03:52:36.248: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.722531ms + Mar 7 03:52:38.152: INFO: Number of 
nodes with available pods controlled by daemonset daemon-set: 0 + Mar 7 03:52:38.152: INFO: Number of running nodes: 0, number of available pods: 0 in daemonset daemon-set + Mar 7 03:52:38.154: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"resourceVersion":"75124"},"items":null} + + Mar 7 03:52:38.156: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"75124"},"items":null} + + [AfterEach] [sig-apps] Daemon set [Serial] + test/e2e/framework/framework.go:187 + Mar 7 03:52:38.166: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + STEP: Destroying namespace "daemonsets-7368" for this suite. 03/07/23 03:52:38.169 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSS +------------------------------ +[sig-node] Container Lifecycle Hook when create a pod with lifecycle hook + should execute poststart http hook properly [NodeConformance] [Conformance] + test/e2e/common/node/lifecycle_hook.go:130 +[BeforeEach] [sig-node] Container Lifecycle Hook + test/e2e/framework/framework.go:186 +STEP: Creating a kubernetes client 03/07/23 03:52:38.174 +Mar 7 03:52:38.174: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 +STEP: Building a namespace api object, basename container-lifecycle-hook 03/07/23 03:52:38.176 +STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 03:52:38.191 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 03:52:38.193 +[BeforeEach] when create a pod with lifecycle hook + test/e2e/common/node/lifecycle_hook.go:55 +STEP: create the container to handle the HTTPGet hook request. 03/07/23 03:52:38.199 +Mar 7 03:52:38.205: INFO: Waiting up to 5m0s for pod "pod-handle-http-request" in namespace "container-lifecycle-hook-3146" to be "running and ready" +Mar 7 03:52:38.208: INFO: Pod "pod-handle-http-request": Phase="Pending", Reason="", readiness=false. Elapsed: 2.637976ms +Mar 7 03:52:38.208: INFO: The phase of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true) +Mar 7 03:52:40.212: INFO: Pod "pod-handle-http-request": Phase="Running", Reason="", readiness=true. Elapsed: 2.006990803s +Mar 7 03:52:40.212: INFO: The phase of Pod pod-handle-http-request is Running (Ready = true) +Mar 7 03:52:40.212: INFO: Pod "pod-handle-http-request" satisfied condition "running and ready" +[It] should execute poststart http hook properly [NodeConformance] [Conformance] + test/e2e/common/node/lifecycle_hook.go:130 +STEP: create the pod with lifecycle hook 03/07/23 03:52:40.214 +Mar 7 03:52:40.219: INFO: Waiting up to 5m0s for pod "pod-with-poststart-http-hook" in namespace "container-lifecycle-hook-3146" to be "running and ready" +Mar 7 03:52:40.221: INFO: Pod "pod-with-poststart-http-hook": Phase="Pending", Reason="", readiness=false. Elapsed: 2.319186ms +Mar 7 03:52:40.221: INFO: The phase of Pod pod-with-poststart-http-hook is Pending, waiting for it to be Running (with Ready = true) +Mar 7 03:52:42.224: INFO: Pod "pod-with-poststart-http-hook": Phase="Running", Reason="", readiness=true. 
Elapsed: 2.005213836s +Mar 7 03:52:42.224: INFO: The phase of Pod pod-with-poststart-http-hook is Running (Ready = true) +Mar 7 03:52:42.224: INFO: Pod "pod-with-poststart-http-hook" satisfied condition "running and ready" +STEP: check poststart hook 03/07/23 03:52:42.226 +STEP: delete the pod with lifecycle hook 03/07/23 03:52:42.239 +Mar 7 03:52:42.244: INFO: Waiting for pod pod-with-poststart-http-hook to disappear +Mar 7 03:52:42.250: INFO: Pod pod-with-poststart-http-hook still exists +Mar 7 03:52:44.251: INFO: Waiting for pod pod-with-poststart-http-hook to disappear +Mar 7 03:52:44.254: INFO: Pod pod-with-poststart-http-hook no longer exists +[AfterEach] [sig-node] Container Lifecycle Hook + test/e2e/framework/framework.go:187 +Mar 7 03:52:44.254: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "container-lifecycle-hook-3146" for this suite. 03/07/23 03:52:44.257 +{"msg":"PASSED [sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance]","completed":323,"skipped":6036,"failed":0} +------------------------------ +• [SLOW TEST] [6.087 seconds] +[sig-node] Container Lifecycle Hook +test/e2e/common/node/framework.go:23 + when create a pod with lifecycle hook + test/e2e/common/node/lifecycle_hook.go:46 + should execute poststart http hook properly [NodeConformance] [Conformance] + test/e2e/common/node/lifecycle_hook.go:130 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-node] Container Lifecycle Hook + test/e2e/framework/framework.go:186 + STEP: Creating a kubernetes client 03/07/23 03:52:38.174 + Mar 7 03:52:38.174: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 + STEP: Building a namespace api object, basename container-lifecycle-hook 03/07/23 03:52:38.176 + STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 03:52:38.191 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 03:52:38.193 + [BeforeEach] when create a pod with lifecycle hook + test/e2e/common/node/lifecycle_hook.go:55 + STEP: create the container to handle the HTTPGet hook request. 03/07/23 03:52:38.199 + Mar 7 03:52:38.205: INFO: Waiting up to 5m0s for pod "pod-handle-http-request" in namespace "container-lifecycle-hook-3146" to be "running and ready" + Mar 7 03:52:38.208: INFO: Pod "pod-handle-http-request": Phase="Pending", Reason="", readiness=false. Elapsed: 2.637976ms + Mar 7 03:52:38.208: INFO: The phase of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true) + Mar 7 03:52:40.212: INFO: Pod "pod-handle-http-request": Phase="Running", Reason="", readiness=true. Elapsed: 2.006990803s + Mar 7 03:52:40.212: INFO: The phase of Pod pod-handle-http-request is Running (Ready = true) + Mar 7 03:52:40.212: INFO: Pod "pod-handle-http-request" satisfied condition "running and ready" + [It] should execute poststart http hook properly [NodeConformance] [Conformance] + test/e2e/common/node/lifecycle_hook.go:130 + STEP: create the pod with lifecycle hook 03/07/23 03:52:40.214 + Mar 7 03:52:40.219: INFO: Waiting up to 5m0s for pod "pod-with-poststart-http-hook" in namespace "container-lifecycle-hook-3146" to be "running and ready" + Mar 7 03:52:40.221: INFO: Pod "pod-with-poststart-http-hook": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.319186ms + Mar 7 03:52:40.221: INFO: The phase of Pod pod-with-poststart-http-hook is Pending, waiting for it to be Running (with Ready = true) + Mar 7 03:52:42.224: INFO: Pod "pod-with-poststart-http-hook": Phase="Running", Reason="", readiness=true. Elapsed: 2.005213836s + Mar 7 03:52:42.224: INFO: The phase of Pod pod-with-poststart-http-hook is Running (Ready = true) + Mar 7 03:52:42.224: INFO: Pod "pod-with-poststart-http-hook" satisfied condition "running and ready" + STEP: check poststart hook 03/07/23 03:52:42.226 + STEP: delete the pod with lifecycle hook 03/07/23 03:52:42.239 + Mar 7 03:52:42.244: INFO: Waiting for pod pod-with-poststart-http-hook to disappear + Mar 7 03:52:42.250: INFO: Pod pod-with-poststart-http-hook still exists + Mar 7 03:52:44.251: INFO: Waiting for pod pod-with-poststart-http-hook to disappear + Mar 7 03:52:44.254: INFO: Pod pod-with-poststart-http-hook no longer exists + [AfterEach] [sig-node] Container Lifecycle Hook + test/e2e/framework/framework.go:187 + Mar 7 03:52:44.254: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + STEP: Destroying namespace "container-lifecycle-hook-3146" for this suite. 03/07/23 03:52:44.257 + << End Captured GinkgoWriter Output +------------------------------ +S +------------------------------ +[sig-storage] Projected downwardAPI + should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] + test/e2e/common/storage/projected_downwardapi.go:260 +[BeforeEach] [sig-storage] Projected downwardAPI + test/e2e/framework/framework.go:186 +STEP: Creating a kubernetes client 03/07/23 03:52:44.262 +Mar 7 03:52:44.262: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 +STEP: Building a namespace api object, basename projected 03/07/23 03:52:44.264 +STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 03:52:44.276 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 03:52:44.28 +[BeforeEach] [sig-storage] Projected downwardAPI + test/e2e/common/storage/projected_downwardapi.go:43 +[It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] + test/e2e/common/storage/projected_downwardapi.go:260 +STEP: Creating a pod to test downward API volume plugin 03/07/23 03:52:44.282 +Mar 7 03:52:44.289: INFO: Waiting up to 5m0s for pod "downwardapi-volume-47b2f342-795a-41fd-8b65-3cdc93092dfc" in namespace "projected-7745" to be "Succeeded or Failed" +Mar 7 03:52:44.292: INFO: Pod "downwardapi-volume-47b2f342-795a-41fd-8b65-3cdc93092dfc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.933669ms +Mar 7 03:52:46.295: INFO: Pod "downwardapi-volume-47b2f342-795a-41fd-8b65-3cdc93092dfc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006189854s +Mar 7 03:52:48.295: INFO: Pod "downwardapi-volume-47b2f342-795a-41fd-8b65-3cdc93092dfc": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.005861912s +STEP: Saw pod success 03/07/23 03:52:48.295 +Mar 7 03:52:48.295: INFO: Pod "downwardapi-volume-47b2f342-795a-41fd-8b65-3cdc93092dfc" satisfied condition "Succeeded or Failed" +Mar 7 03:52:48.297: INFO: Trying to get logs from node node-2 pod downwardapi-volume-47b2f342-795a-41fd-8b65-3cdc93092dfc container client-container: +STEP: delete the pod 03/07/23 03:52:48.302 +Mar 7 03:52:48.309: INFO: Waiting for pod downwardapi-volume-47b2f342-795a-41fd-8b65-3cdc93092dfc to disappear +Mar 7 03:52:48.311: INFO: Pod downwardapi-volume-47b2f342-795a-41fd-8b65-3cdc93092dfc no longer exists +[AfterEach] [sig-storage] Projected downwardAPI + test/e2e/framework/framework.go:187 +Mar 7 03:52:48.311: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "projected-7745" for this suite. 03/07/23 03:52:48.314 +{"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","completed":324,"skipped":6037,"failed":0} +------------------------------ +• [4.055 seconds] +[sig-storage] Projected downwardAPI +test/e2e/common/storage/framework.go:23 + should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] + test/e2e/common/storage/projected_downwardapi.go:260 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-storage] Projected downwardAPI + test/e2e/framework/framework.go:186 + STEP: Creating a kubernetes client 03/07/23 03:52:44.262 + Mar 7 03:52:44.262: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 + STEP: Building a namespace api object, basename projected 03/07/23 03:52:44.264 + STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 03:52:44.276 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 03:52:44.28 + [BeforeEach] [sig-storage] Projected downwardAPI + test/e2e/common/storage/projected_downwardapi.go:43 + [It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] + test/e2e/common/storage/projected_downwardapi.go:260 + STEP: Creating a pod to test downward API volume plugin 03/07/23 03:52:44.282 + Mar 7 03:52:44.289: INFO: Waiting up to 5m0s for pod "downwardapi-volume-47b2f342-795a-41fd-8b65-3cdc93092dfc" in namespace "projected-7745" to be "Succeeded or Failed" + Mar 7 03:52:44.292: INFO: Pod "downwardapi-volume-47b2f342-795a-41fd-8b65-3cdc93092dfc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.933669ms + Mar 7 03:52:46.295: INFO: Pod "downwardapi-volume-47b2f342-795a-41fd-8b65-3cdc93092dfc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006189854s + Mar 7 03:52:48.295: INFO: Pod "downwardapi-volume-47b2f342-795a-41fd-8b65-3cdc93092dfc": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.005861912s + STEP: Saw pod success 03/07/23 03:52:48.295 + Mar 7 03:52:48.295: INFO: Pod "downwardapi-volume-47b2f342-795a-41fd-8b65-3cdc93092dfc" satisfied condition "Succeeded or Failed" + Mar 7 03:52:48.297: INFO: Trying to get logs from node node-2 pod downwardapi-volume-47b2f342-795a-41fd-8b65-3cdc93092dfc container client-container: + STEP: delete the pod 03/07/23 03:52:48.302 + Mar 7 03:52:48.309: INFO: Waiting for pod downwardapi-volume-47b2f342-795a-41fd-8b65-3cdc93092dfc to disappear + Mar 7 03:52:48.311: INFO: Pod downwardapi-volume-47b2f342-795a-41fd-8b65-3cdc93092dfc no longer exists + [AfterEach] [sig-storage] Projected downwardAPI + test/e2e/framework/framework.go:187 + Mar 7 03:52:48.311: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + STEP: Destroying namespace "projected-7745" for this suite. 03/07/23 03:52:48.314 + << End Captured GinkgoWriter Output +------------------------------ +SSSSS +------------------------------ +[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition + getting/updating/patching custom resource definition status sub-resource works [Conformance] + test/e2e/apimachinery/custom_resource_definition.go:145 +[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] + test/e2e/framework/framework.go:186 +STEP: Creating a kubernetes client 03/07/23 03:52:48.318 +Mar 7 03:52:48.318: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 +STEP: Building a namespace api object, basename custom-resource-definition 03/07/23 03:52:48.319 +STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 03:52:48.333 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 03:52:48.335 +[It] getting/updating/patching custom resource definition status sub-resource works [Conformance] + test/e2e/apimachinery/custom_resource_definition.go:145 +Mar 7 03:52:48.336: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 +[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] + test/e2e/framework/framework.go:187 +Mar 7 03:52:48.865: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "custom-resource-definition-5829" for this suite. 
03/07/23 03:52:48.868 +{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition getting/updating/patching custom resource definition status sub-resource works [Conformance]","completed":325,"skipped":6042,"failed":0} +------------------------------ +• [0.556 seconds] +[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] +test/e2e/apimachinery/framework.go:23 + Simple CustomResourceDefinition + test/e2e/apimachinery/custom_resource_definition.go:50 + getting/updating/patching custom resource definition status sub-resource works [Conformance] + test/e2e/apimachinery/custom_resource_definition.go:145 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] + test/e2e/framework/framework.go:186 + STEP: Creating a kubernetes client 03/07/23 03:52:48.318 + Mar 7 03:52:48.318: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 + STEP: Building a namespace api object, basename custom-resource-definition 03/07/23 03:52:48.319 + STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 03:52:48.333 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 03:52:48.335 + [It] getting/updating/patching custom resource definition status sub-resource works [Conformance] + test/e2e/apimachinery/custom_resource_definition.go:145 + Mar 7 03:52:48.336: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 + [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] + test/e2e/framework/framework.go:187 + Mar 7 03:52:48.865: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + STEP: Destroying namespace "custom-resource-definition-5829" for this suite. 03/07/23 03:52:48.868 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-storage] EmptyDir volumes + should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/common/storage/empty_dir.go:136 +[BeforeEach] [sig-storage] EmptyDir volumes + test/e2e/framework/framework.go:186 +STEP: Creating a kubernetes client 03/07/23 03:52:48.875 +Mar 7 03:52:48.875: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 +STEP: Building a namespace api object, basename emptydir 03/07/23 03:52:48.877 +STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 03:52:48.891 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 03:52:48.893 +[It] should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/common/storage/empty_dir.go:136 +STEP: Creating a pod to test emptydir 0666 on tmpfs 03/07/23 03:52:48.895 +Mar 7 03:52:48.901: INFO: Waiting up to 5m0s for pod "pod-300fc8a9-5bdf-4d3b-a35c-3952d755ff87" in namespace "emptydir-3397" to be "Succeeded or Failed" +Mar 7 03:52:48.904: INFO: Pod "pod-300fc8a9-5bdf-4d3b-a35c-3952d755ff87": Phase="Pending", Reason="", readiness=false. Elapsed: 3.657985ms +Mar 7 03:52:50.909: INFO: Pod "pod-300fc8a9-5bdf-4d3b-a35c-3952d755ff87": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007816855s +Mar 7 03:52:52.908: INFO: Pod "pod-300fc8a9-5bdf-4d3b-a35c-3952d755ff87": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.00696571s +STEP: Saw pod success 03/07/23 03:52:52.908 +Mar 7 03:52:52.908: INFO: Pod "pod-300fc8a9-5bdf-4d3b-a35c-3952d755ff87" satisfied condition "Succeeded or Failed" +Mar 7 03:52:52.910: INFO: Trying to get logs from node node-2 pod pod-300fc8a9-5bdf-4d3b-a35c-3952d755ff87 container test-container: +STEP: delete the pod 03/07/23 03:52:52.915 +Mar 7 03:52:52.922: INFO: Waiting for pod pod-300fc8a9-5bdf-4d3b-a35c-3952d755ff87 to disappear +Mar 7 03:52:52.924: INFO: Pod pod-300fc8a9-5bdf-4d3b-a35c-3952d755ff87 no longer exists +[AfterEach] [sig-storage] EmptyDir volumes + test/e2e/framework/framework.go:187 +Mar 7 03:52:52.924: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "emptydir-3397" for this suite. 03/07/23 03:52:52.927 +{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","completed":326,"skipped":6067,"failed":0} +------------------------------ +• [4.056 seconds] +[sig-storage] EmptyDir volumes +test/e2e/common/storage/framework.go:23 + should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/common/storage/empty_dir.go:136 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-storage] EmptyDir volumes + test/e2e/framework/framework.go:186 + STEP: Creating a kubernetes client 03/07/23 03:52:48.875 + Mar 7 03:52:48.875: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 + STEP: Building a namespace api object, basename emptydir 03/07/23 03:52:48.877 + STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 03:52:48.891 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 03:52:48.893 + [It] should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/common/storage/empty_dir.go:136 + STEP: Creating a pod to test emptydir 0666 on tmpfs 03/07/23 03:52:48.895 + Mar 7 03:52:48.901: INFO: Waiting up to 5m0s for pod "pod-300fc8a9-5bdf-4d3b-a35c-3952d755ff87" in namespace "emptydir-3397" to be "Succeeded or Failed" + Mar 7 03:52:48.904: INFO: Pod "pod-300fc8a9-5bdf-4d3b-a35c-3952d755ff87": Phase="Pending", Reason="", readiness=false. Elapsed: 3.657985ms + Mar 7 03:52:50.909: INFO: Pod "pod-300fc8a9-5bdf-4d3b-a35c-3952d755ff87": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007816855s + Mar 7 03:52:52.908: INFO: Pod "pod-300fc8a9-5bdf-4d3b-a35c-3952d755ff87": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.00696571s + STEP: Saw pod success 03/07/23 03:52:52.908 + Mar 7 03:52:52.908: INFO: Pod "pod-300fc8a9-5bdf-4d3b-a35c-3952d755ff87" satisfied condition "Succeeded or Failed" + Mar 7 03:52:52.910: INFO: Trying to get logs from node node-2 pod pod-300fc8a9-5bdf-4d3b-a35c-3952d755ff87 container test-container: + STEP: delete the pod 03/07/23 03:52:52.915 + Mar 7 03:52:52.922: INFO: Waiting for pod pod-300fc8a9-5bdf-4d3b-a35c-3952d755ff87 to disappear + Mar 7 03:52:52.924: INFO: Pod pod-300fc8a9-5bdf-4d3b-a35c-3952d755ff87 no longer exists + [AfterEach] [sig-storage] EmptyDir volumes + test/e2e/framework/framework.go:187 + Mar 7 03:52:52.924: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + STEP: Destroying namespace "emptydir-3397" for this suite. 
03/07/23 03:52:52.927 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] + should validate Statefulset Status endpoints [Conformance] + test/e2e/apps/statefulset.go:975 +[BeforeEach] [sig-apps] StatefulSet + test/e2e/framework/framework.go:186 +STEP: Creating a kubernetes client 03/07/23 03:52:52.932 +Mar 7 03:52:52.932: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 +STEP: Building a namespace api object, basename statefulset 03/07/23 03:52:52.933 +STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 03:52:52.949 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 03:52:52.95 +[BeforeEach] [sig-apps] StatefulSet + test/e2e/apps/statefulset.go:96 +[BeforeEach] Basic StatefulSet functionality [StatefulSetBasic] + test/e2e/apps/statefulset.go:111 +STEP: Creating service test in namespace statefulset-2436 03/07/23 03:52:52.952 +[It] should validate Statefulset Status endpoints [Conformance] + test/e2e/apps/statefulset.go:975 +STEP: Creating statefulset ss in namespace statefulset-2436 03/07/23 03:52:52.96 +Mar 7 03:52:52.966: INFO: Found 0 stateful pods, waiting for 1 +Mar 7 03:53:02.971: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true +STEP: Patch Statefulset to include a label 03/07/23 03:53:02.975 +STEP: Getting /status 03/07/23 03:53:02.994 +Mar 7 03:53:02.998: INFO: StatefulSet ss has Conditions: []v1.StatefulSetCondition(nil) +STEP: updating the StatefulSet Status 03/07/23 03:53:02.998 +Mar 7 03:53:03.004: INFO: updatedStatus.Conditions: []v1.StatefulSetCondition{v1.StatefulSetCondition{Type:"StatusUpdate", Status:"True", LastTransitionTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Reason:"E2E", Message:"Set from e2e test"}} +STEP: watching for the statefulset status to be updated 03/07/23 03:53:03.004 +Mar 7 03:53:03.005: INFO: Observed &StatefulSet event: ADDED +Mar 7 03:53:03.006: INFO: Found Statefulset ss in namespace statefulset-2436 with labels: map[e2e:testing] annotations: map[] & Conditions: {StatusUpdate True 0001-01-01 00:00:00 +0000 UTC E2E Set from e2e test} +Mar 7 03:53:03.006: INFO: Statefulset ss has an updated status +STEP: patching the Statefulset Status 03/07/23 03:53:03.006 +Mar 7 03:53:03.006: INFO: Patch payload: {"status":{"conditions":[{"type":"StatusPatched","status":"True"}]}} +Mar 7 03:53:03.011: INFO: Patched status conditions: []v1.StatefulSetCondition{v1.StatefulSetCondition{Type:"StatusPatched", Status:"True", LastTransitionTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Reason:"", Message:""}} +STEP: watching for the Statefulset status to be patched 03/07/23 03:53:03.011 +Mar 7 03:53:03.012: INFO: Observed &StatefulSet event: ADDED +Mar 7 03:53:03.012: INFO: Observed Statefulset ss in namespace statefulset-2436 with annotations: map[] & Conditions: {StatusUpdate True 0001-01-01 00:00:00 +0000 UTC E2E Set from e2e test} +Mar 7 03:53:03.012: INFO: Observed &StatefulSet event: MODIFIED +[AfterEach] Basic StatefulSet functionality [StatefulSetBasic] + test/e2e/apps/statefulset.go:122 +Mar 7 03:53:03.012: INFO: Deleting all statefulset in ns statefulset-2436 +Mar 7 03:53:03.015: INFO: Scaling statefulset ss to 0 +Mar 7 03:53:13.030: INFO: Waiting for statefulset status.replicas updated to 0 +Mar 7 03:53:13.032: INFO: Deleting statefulset ss +[AfterEach] [sig-apps] StatefulSet + 
test/e2e/framework/framework.go:187 +Mar 7 03:53:13.042: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "statefulset-2436" for this suite. 03/07/23 03:53:13.046 +{"msg":"PASSED [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should validate Statefulset Status endpoints [Conformance]","completed":327,"skipped":6094,"failed":0} +------------------------------ +• [SLOW TEST] [20.120 seconds] +[sig-apps] StatefulSet +test/e2e/apps/framework.go:23 + Basic StatefulSet functionality [StatefulSetBasic] + test/e2e/apps/statefulset.go:101 + should validate Statefulset Status endpoints [Conformance] + test/e2e/apps/statefulset.go:975 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-apps] StatefulSet + test/e2e/framework/framework.go:186 + STEP: Creating a kubernetes client 03/07/23 03:52:52.932 + Mar 7 03:52:52.932: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 + STEP: Building a namespace api object, basename statefulset 03/07/23 03:52:52.933 + STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 03:52:52.949 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 03:52:52.95 + [BeforeEach] [sig-apps] StatefulSet + test/e2e/apps/statefulset.go:96 + [BeforeEach] Basic StatefulSet functionality [StatefulSetBasic] + test/e2e/apps/statefulset.go:111 + STEP: Creating service test in namespace statefulset-2436 03/07/23 03:52:52.952 + [It] should validate Statefulset Status endpoints [Conformance] + test/e2e/apps/statefulset.go:975 + STEP: Creating statefulset ss in namespace statefulset-2436 03/07/23 03:52:52.96 + Mar 7 03:52:52.966: INFO: Found 0 stateful pods, waiting for 1 + Mar 7 03:53:02.971: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true + STEP: Patch Statefulset to include a label 03/07/23 03:53:02.975 + STEP: Getting /status 03/07/23 03:53:02.994 + Mar 7 03:53:02.998: INFO: StatefulSet ss has Conditions: []v1.StatefulSetCondition(nil) + STEP: updating the StatefulSet Status 03/07/23 03:53:02.998 + Mar 7 03:53:03.004: INFO: updatedStatus.Conditions: []v1.StatefulSetCondition{v1.StatefulSetCondition{Type:"StatusUpdate", Status:"True", LastTransitionTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Reason:"E2E", Message:"Set from e2e test"}} + STEP: watching for the statefulset status to be updated 03/07/23 03:53:03.004 + Mar 7 03:53:03.005: INFO: Observed &StatefulSet event: ADDED + Mar 7 03:53:03.006: INFO: Found Statefulset ss in namespace statefulset-2436 with labels: map[e2e:testing] annotations: map[] & Conditions: {StatusUpdate True 0001-01-01 00:00:00 +0000 UTC E2E Set from e2e test} + Mar 7 03:53:03.006: INFO: Statefulset ss has an updated status + STEP: patching the Statefulset Status 03/07/23 03:53:03.006 + Mar 7 03:53:03.006: INFO: Patch payload: {"status":{"conditions":[{"type":"StatusPatched","status":"True"}]}} + Mar 7 03:53:03.011: INFO: Patched status conditions: []v1.StatefulSetCondition{v1.StatefulSetCondition{Type:"StatusPatched", Status:"True", LastTransitionTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Reason:"", Message:""}} + STEP: watching for the Statefulset status to be patched 03/07/23 03:53:03.011 + Mar 7 03:53:03.012: INFO: Observed &StatefulSet event: ADDED + Mar 7 03:53:03.012: INFO: Observed Statefulset ss in namespace statefulset-2436 with annotations: map[] & Conditions: {StatusUpdate True 0001-01-01 00:00:00 +0000 UTC E2E Set from e2e test} + Mar 7 03:53:03.012: INFO: 
Observed &StatefulSet event: MODIFIED + [AfterEach] Basic StatefulSet functionality [StatefulSetBasic] + test/e2e/apps/statefulset.go:122 + Mar 7 03:53:03.012: INFO: Deleting all statefulset in ns statefulset-2436 + Mar 7 03:53:03.015: INFO: Scaling statefulset ss to 0 + Mar 7 03:53:13.030: INFO: Waiting for statefulset status.replicas updated to 0 + Mar 7 03:53:13.032: INFO: Deleting statefulset ss + [AfterEach] [sig-apps] StatefulSet + test/e2e/framework/framework.go:187 + Mar 7 03:53:13.042: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + STEP: Destroying namespace "statefulset-2436" for this suite. 03/07/23 03:53:13.046 + << End Captured GinkgoWriter Output +------------------------------ +SS +------------------------------ +[sig-network] Services + should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance] + test/e2e/network/service.go:2173 +[BeforeEach] [sig-network] Services + test/e2e/framework/framework.go:186 +STEP: Creating a kubernetes client 03/07/23 03:53:13.052 +Mar 7 03:53:13.052: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 +STEP: Building a namespace api object, basename services 03/07/23 03:53:13.053 +STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 03:53:13.067 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 03:53:13.07 +[BeforeEach] [sig-network] Services + test/e2e/network/service.go:758 +[It] should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance] + test/e2e/network/service.go:2173 +STEP: creating service in namespace services-5928 03/07/23 03:53:13.071 +Mar 7 03:53:13.078: INFO: Waiting up to 5m0s for pod "kube-proxy-mode-detector" in namespace "services-5928" to be "running and ready" +Mar 7 03:53:13.081: INFO: Pod "kube-proxy-mode-detector": Phase="Pending", Reason="", readiness=false. Elapsed: 2.615338ms +Mar 7 03:53:13.081: INFO: The phase of Pod kube-proxy-mode-detector is Pending, waiting for it to be Running (with Ready = true) +Mar 7 03:53:15.085: INFO: Pod "kube-proxy-mode-detector": Phase="Running", Reason="", readiness=true. 
Elapsed: 2.006762338s +Mar 7 03:53:15.085: INFO: The phase of Pod kube-proxy-mode-detector is Running (Ready = true) +Mar 7 03:53:15.085: INFO: Pod "kube-proxy-mode-detector" satisfied condition "running and ready" +Mar 7 03:53:15.087: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1093879902 --namespace=services-5928 exec kube-proxy-mode-detector -- /bin/sh -x -c curl -q -s --connect-timeout 1 http://localhost:10249/proxyMode' +Mar 7 03:53:15.265: INFO: rc: 7 +Mar 7 03:53:15.274: INFO: Waiting for pod kube-proxy-mode-detector to disappear +Mar 7 03:53:15.277: INFO: Pod kube-proxy-mode-detector no longer exists +Mar 7 03:53:15.277: INFO: Couldn't detect KubeProxy mode - test failure may be expected: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1093879902 --namespace=services-5928 exec kube-proxy-mode-detector -- /bin/sh -x -c curl -q -s --connect-timeout 1 http://localhost:10249/proxyMode: +Command stdout: + +stderr: ++ curl -q -s --connect-timeout 1 http://localhost:10249/proxyMode +command terminated with exit code 7 + +error: +exit status 7 +STEP: creating service affinity-clusterip-timeout in namespace services-5928 03/07/23 03:53:15.277 +STEP: creating replication controller affinity-clusterip-timeout in namespace services-5928 03/07/23 03:53:15.289 +I0307 03:53:15.296184 22 runners.go:193] Created replication controller with name: affinity-clusterip-timeout, namespace: services-5928, replica count: 3 +I0307 03:53:18.349521 22 runners.go:193] affinity-clusterip-timeout Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady +Mar 7 03:53:18.353: INFO: Creating new exec pod +Mar 7 03:53:18.356: INFO: Waiting up to 5m0s for pod "execpod-affinityvgt2f" in namespace "services-5928" to be "running" +Mar 7 03:53:18.359: INFO: Pod "execpod-affinityvgt2f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.506742ms +Mar 7 03:53:20.363: INFO: Pod "execpod-affinityvgt2f": Phase="Running", Reason="", readiness=true. 
Elapsed: 2.006336869s +Mar 7 03:53:20.363: INFO: Pod "execpod-affinityvgt2f" satisfied condition "running" +Mar 7 03:53:21.363: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1093879902 --namespace=services-5928 exec execpod-affinityvgt2f -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip-timeout 80' +Mar 7 03:53:21.570: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 affinity-clusterip-timeout 80\nConnection to affinity-clusterip-timeout 80 port [tcp/http] succeeded!\n" +Mar 7 03:53:21.570: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request" +Mar 7 03:53:21.570: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1093879902 --namespace=services-5928 exec execpod-affinityvgt2f -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.102.137.15 80' +Mar 7 03:53:21.754: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 10.102.137.15 80\nConnection to 10.102.137.15 80 port [tcp/http] succeeded!\n" +Mar 7 03:53:21.754: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request" +Mar 7 03:53:21.754: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1093879902 --namespace=services-5928 exec execpod-affinityvgt2f -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://10.102.137.15:80/ ; done' +Mar 7 03:53:21.985: INFO: stderr: "+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.102.137.15:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.102.137.15:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.102.137.15:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.102.137.15:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.102.137.15:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.102.137.15:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.102.137.15:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.102.137.15:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.102.137.15:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.102.137.15:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.102.137.15:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.102.137.15:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.102.137.15:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.102.137.15:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.102.137.15:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.102.137.15:80/\n" +Mar 7 03:53:21.985: INFO: stdout: "\naffinity-clusterip-timeout-x4th7\naffinity-clusterip-timeout-x4th7\naffinity-clusterip-timeout-x4th7\naffinity-clusterip-timeout-x4th7\naffinity-clusterip-timeout-x4th7\naffinity-clusterip-timeout-x4th7\naffinity-clusterip-timeout-x4th7\naffinity-clusterip-timeout-x4th7\naffinity-clusterip-timeout-x4th7\naffinity-clusterip-timeout-x4th7\naffinity-clusterip-timeout-x4th7\naffinity-clusterip-timeout-x4th7\naffinity-clusterip-timeout-x4th7\naffinity-clusterip-timeout-x4th7\naffinity-clusterip-timeout-x4th7\naffinity-clusterip-timeout-x4th7" +Mar 7 03:53:21.985: INFO: Received response from host: affinity-clusterip-timeout-x4th7 +Mar 7 03:53:21.985: INFO: Received response from host: affinity-clusterip-timeout-x4th7 +Mar 7 03:53:21.985: INFO: Received response from host: affinity-clusterip-timeout-x4th7 +Mar 7 03:53:21.985: INFO: Received response from host: affinity-clusterip-timeout-x4th7 +Mar 7 03:53:21.985: INFO: Received response from host: 
affinity-clusterip-timeout-x4th7 +Mar 7 03:53:21.985: INFO: Received response from host: affinity-clusterip-timeout-x4th7 +Mar 7 03:53:21.985: INFO: Received response from host: affinity-clusterip-timeout-x4th7 +Mar 7 03:53:21.985: INFO: Received response from host: affinity-clusterip-timeout-x4th7 +Mar 7 03:53:21.985: INFO: Received response from host: affinity-clusterip-timeout-x4th7 +Mar 7 03:53:21.985: INFO: Received response from host: affinity-clusterip-timeout-x4th7 +Mar 7 03:53:21.985: INFO: Received response from host: affinity-clusterip-timeout-x4th7 +Mar 7 03:53:21.985: INFO: Received response from host: affinity-clusterip-timeout-x4th7 +Mar 7 03:53:21.985: INFO: Received response from host: affinity-clusterip-timeout-x4th7 +Mar 7 03:53:21.985: INFO: Received response from host: affinity-clusterip-timeout-x4th7 +Mar 7 03:53:21.985: INFO: Received response from host: affinity-clusterip-timeout-x4th7 +Mar 7 03:53:21.985: INFO: Received response from host: affinity-clusterip-timeout-x4th7 +Mar 7 03:53:21.985: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1093879902 --namespace=services-5928 exec execpod-affinityvgt2f -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://10.102.137.15:80/' +Mar 7 03:53:22.160: INFO: stderr: "+ curl -q -s --connect-timeout 2 http://10.102.137.15:80/\n" +Mar 7 03:53:22.160: INFO: stdout: "affinity-clusterip-timeout-x4th7" +Mar 7 03:53:42.163: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1093879902 --namespace=services-5928 exec execpod-affinityvgt2f -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://10.102.137.15:80/' +Mar 7 03:53:42.354: INFO: stderr: "+ curl -q -s --connect-timeout 2 http://10.102.137.15:80/\n" +Mar 7 03:53:42.354: INFO: stdout: "affinity-clusterip-timeout-x4th7" +Mar 7 03:54:02.355: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1093879902 --namespace=services-5928 exec execpod-affinityvgt2f -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://10.102.137.15:80/' +Mar 7 03:54:02.541: INFO: stderr: "+ curl -q -s --connect-timeout 2 http://10.102.137.15:80/\n" +Mar 7 03:54:02.541: INFO: stdout: "affinity-clusterip-timeout-xf4j4" +Mar 7 03:54:02.541: INFO: Cleaning up the exec pod +STEP: deleting ReplicationController affinity-clusterip-timeout in namespace services-5928, will wait for the garbage collector to delete the pods 03/07/23 03:54:02.555 +Mar 7 03:54:02.620: INFO: Deleting ReplicationController affinity-clusterip-timeout took: 5.29714ms +Mar 7 03:54:02.720: INFO: Terminating ReplicationController affinity-clusterip-timeout pods took: 100.369807ms +[AfterEach] [sig-network] Services + test/e2e/framework/framework.go:187 +Mar 7 03:54:05.039: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "services-5928" for this suite. 
03/07/23 03:54:05.042 +[AfterEach] [sig-network] Services + test/e2e/network/service.go:762 +{"msg":"PASSED [sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","completed":328,"skipped":6096,"failed":0} +------------------------------ +• [SLOW TEST] [51.996 seconds] +[sig-network] Services +test/e2e/network/common/framework.go:23 + should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance] + test/e2e/network/service.go:2173 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-network] Services + test/e2e/framework/framework.go:186 + STEP: Creating a kubernetes client 03/07/23 03:53:13.052 + Mar 7 03:53:13.052: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 + STEP: Building a namespace api object, basename services 03/07/23 03:53:13.053 + STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 03:53:13.067 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 03:53:13.07 + [BeforeEach] [sig-network] Services + test/e2e/network/service.go:758 + [It] should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance] + test/e2e/network/service.go:2173 + STEP: creating service in namespace services-5928 03/07/23 03:53:13.071 + Mar 7 03:53:13.078: INFO: Waiting up to 5m0s for pod "kube-proxy-mode-detector" in namespace "services-5928" to be "running and ready" + Mar 7 03:53:13.081: INFO: Pod "kube-proxy-mode-detector": Phase="Pending", Reason="", readiness=false. Elapsed: 2.615338ms + Mar 7 03:53:13.081: INFO: The phase of Pod kube-proxy-mode-detector is Pending, waiting for it to be Running (with Ready = true) + Mar 7 03:53:15.085: INFO: Pod "kube-proxy-mode-detector": Phase="Running", Reason="", readiness=true. 
Elapsed: 2.006762338s + Mar 7 03:53:15.085: INFO: The phase of Pod kube-proxy-mode-detector is Running (Ready = true) + Mar 7 03:53:15.085: INFO: Pod "kube-proxy-mode-detector" satisfied condition "running and ready" + Mar 7 03:53:15.087: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1093879902 --namespace=services-5928 exec kube-proxy-mode-detector -- /bin/sh -x -c curl -q -s --connect-timeout 1 http://localhost:10249/proxyMode' + Mar 7 03:53:15.265: INFO: rc: 7 + Mar 7 03:53:15.274: INFO: Waiting for pod kube-proxy-mode-detector to disappear + Mar 7 03:53:15.277: INFO: Pod kube-proxy-mode-detector no longer exists + Mar 7 03:53:15.277: INFO: Couldn't detect KubeProxy mode - test failure may be expected: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1093879902 --namespace=services-5928 exec kube-proxy-mode-detector -- /bin/sh -x -c curl -q -s --connect-timeout 1 http://localhost:10249/proxyMode: + Command stdout: + + stderr: + + curl -q -s --connect-timeout 1 http://localhost:10249/proxyMode + command terminated with exit code 7 + + error: + exit status 7 + STEP: creating service affinity-clusterip-timeout in namespace services-5928 03/07/23 03:53:15.277 + STEP: creating replication controller affinity-clusterip-timeout in namespace services-5928 03/07/23 03:53:15.289 + I0307 03:53:15.296184 22 runners.go:193] Created replication controller with name: affinity-clusterip-timeout, namespace: services-5928, replica count: 3 + I0307 03:53:18.349521 22 runners.go:193] affinity-clusterip-timeout Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady + Mar 7 03:53:18.353: INFO: Creating new exec pod + Mar 7 03:53:18.356: INFO: Waiting up to 5m0s for pod "execpod-affinityvgt2f" in namespace "services-5928" to be "running" + Mar 7 03:53:18.359: INFO: Pod "execpod-affinityvgt2f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.506742ms + Mar 7 03:53:20.363: INFO: Pod "execpod-affinityvgt2f": Phase="Running", Reason="", readiness=true. 
Elapsed: 2.006336869s + Mar 7 03:53:20.363: INFO: Pod "execpod-affinityvgt2f" satisfied condition "running" + Mar 7 03:53:21.363: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1093879902 --namespace=services-5928 exec execpod-affinityvgt2f -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip-timeout 80' + Mar 7 03:53:21.570: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 affinity-clusterip-timeout 80\nConnection to affinity-clusterip-timeout 80 port [tcp/http] succeeded!\n" + Mar 7 03:53:21.570: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request" + Mar 7 03:53:21.570: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1093879902 --namespace=services-5928 exec execpod-affinityvgt2f -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.102.137.15 80' + Mar 7 03:53:21.754: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 10.102.137.15 80\nConnection to 10.102.137.15 80 port [tcp/http] succeeded!\n" + Mar 7 03:53:21.754: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request" + Mar 7 03:53:21.754: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1093879902 --namespace=services-5928 exec execpod-affinityvgt2f -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://10.102.137.15:80/ ; done' + Mar 7 03:53:21.985: INFO: stderr: "+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.102.137.15:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.102.137.15:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.102.137.15:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.102.137.15:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.102.137.15:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.102.137.15:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.102.137.15:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.102.137.15:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.102.137.15:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.102.137.15:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.102.137.15:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.102.137.15:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.102.137.15:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.102.137.15:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.102.137.15:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.102.137.15:80/\n" + Mar 7 03:53:21.985: INFO: stdout: "\naffinity-clusterip-timeout-x4th7\naffinity-clusterip-timeout-x4th7\naffinity-clusterip-timeout-x4th7\naffinity-clusterip-timeout-x4th7\naffinity-clusterip-timeout-x4th7\naffinity-clusterip-timeout-x4th7\naffinity-clusterip-timeout-x4th7\naffinity-clusterip-timeout-x4th7\naffinity-clusterip-timeout-x4th7\naffinity-clusterip-timeout-x4th7\naffinity-clusterip-timeout-x4th7\naffinity-clusterip-timeout-x4th7\naffinity-clusterip-timeout-x4th7\naffinity-clusterip-timeout-x4th7\naffinity-clusterip-timeout-x4th7\naffinity-clusterip-timeout-x4th7" + Mar 7 03:53:21.985: INFO: Received response from host: affinity-clusterip-timeout-x4th7 + Mar 7 03:53:21.985: INFO: Received response from host: affinity-clusterip-timeout-x4th7 + Mar 7 03:53:21.985: INFO: Received response from host: affinity-clusterip-timeout-x4th7 + Mar 7 03:53:21.985: INFO: Received response from host: affinity-clusterip-timeout-x4th7 + Mar 7 03:53:21.985: INFO: Received 
response from host: affinity-clusterip-timeout-x4th7 + Mar 7 03:53:21.985: INFO: Received response from host: affinity-clusterip-timeout-x4th7 + Mar 7 03:53:21.985: INFO: Received response from host: affinity-clusterip-timeout-x4th7 + Mar 7 03:53:21.985: INFO: Received response from host: affinity-clusterip-timeout-x4th7 + Mar 7 03:53:21.985: INFO: Received response from host: affinity-clusterip-timeout-x4th7 + Mar 7 03:53:21.985: INFO: Received response from host: affinity-clusterip-timeout-x4th7 + Mar 7 03:53:21.985: INFO: Received response from host: affinity-clusterip-timeout-x4th7 + Mar 7 03:53:21.985: INFO: Received response from host: affinity-clusterip-timeout-x4th7 + Mar 7 03:53:21.985: INFO: Received response from host: affinity-clusterip-timeout-x4th7 + Mar 7 03:53:21.985: INFO: Received response from host: affinity-clusterip-timeout-x4th7 + Mar 7 03:53:21.985: INFO: Received response from host: affinity-clusterip-timeout-x4th7 + Mar 7 03:53:21.985: INFO: Received response from host: affinity-clusterip-timeout-x4th7 + Mar 7 03:53:21.985: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1093879902 --namespace=services-5928 exec execpod-affinityvgt2f -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://10.102.137.15:80/' + Mar 7 03:53:22.160: INFO: stderr: "+ curl -q -s --connect-timeout 2 http://10.102.137.15:80/\n" + Mar 7 03:53:22.160: INFO: stdout: "affinity-clusterip-timeout-x4th7" + Mar 7 03:53:42.163: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1093879902 --namespace=services-5928 exec execpod-affinityvgt2f -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://10.102.137.15:80/' + Mar 7 03:53:42.354: INFO: stderr: "+ curl -q -s --connect-timeout 2 http://10.102.137.15:80/\n" + Mar 7 03:53:42.354: INFO: stdout: "affinity-clusterip-timeout-x4th7" + Mar 7 03:54:02.355: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1093879902 --namespace=services-5928 exec execpod-affinityvgt2f -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://10.102.137.15:80/' + Mar 7 03:54:02.541: INFO: stderr: "+ curl -q -s --connect-timeout 2 http://10.102.137.15:80/\n" + Mar 7 03:54:02.541: INFO: stdout: "affinity-clusterip-timeout-xf4j4" + Mar 7 03:54:02.541: INFO: Cleaning up the exec pod + STEP: deleting ReplicationController affinity-clusterip-timeout in namespace services-5928, will wait for the garbage collector to delete the pods 03/07/23 03:54:02.555 + Mar 7 03:54:02.620: INFO: Deleting ReplicationController affinity-clusterip-timeout took: 5.29714ms + Mar 7 03:54:02.720: INFO: Terminating ReplicationController affinity-clusterip-timeout pods took: 100.369807ms + [AfterEach] [sig-network] Services + test/e2e/framework/framework.go:187 + Mar 7 03:54:05.039: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + STEP: Destroying namespace "services-5928" for this suite. 
03/07/23 03:54:05.042 + [AfterEach] [sig-network] Services + test/e2e/network/service.go:762 + << End Captured GinkgoWriter Output +------------------------------ +SSS +------------------------------ +[sig-api-machinery] Garbage collector + should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] + test/e2e/apimachinery/garbage_collector.go:735 +[BeforeEach] [sig-api-machinery] Garbage collector + test/e2e/framework/framework.go:186 +STEP: Creating a kubernetes client 03/07/23 03:54:05.048 +Mar 7 03:54:05.048: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 +STEP: Building a namespace api object, basename gc 03/07/23 03:54:05.049 +STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 03:54:05.063 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 03:54:05.066 +[It] should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] + test/e2e/apimachinery/garbage_collector.go:735 +STEP: create the rc1 03/07/23 03:54:05.071 +STEP: create the rc2 03/07/23 03:54:05.075 +STEP: set half of pods created by rc simpletest-rc-to-be-deleted to have rc simpletest-rc-to-stay as owner as well 03/07/23 03:54:10.093 +STEP: delete the rc simpletest-rc-to-be-deleted 03/07/23 03:54:11.004 +STEP: wait for the rc to be deleted 03/07/23 03:54:11.029 +Mar 7 03:54:16.045: INFO: 73 pods remaining +Mar 7 03:54:16.045: INFO: 73 pods has nil DeletionTimestamp +Mar 7 03:54:16.045: INFO: +STEP: Gathering metrics 03/07/23 03:54:21.041 +Mar 7 03:54:21.056: INFO: Waiting up to 5m0s for pod "kube-controller-manager-node-2" in namespace "kube-system" to be "running and ready" +Mar 7 03:54:21.059: INFO: Pod "kube-controller-manager-node-2": Phase="Running", Reason="", readiness=true. 
Elapsed: 2.673619ms +Mar 7 03:54:21.059: INFO: The phase of Pod kube-controller-manager-node-2 is Running (Ready = true) +Mar 7 03:54:21.059: INFO: Pod "kube-controller-manager-node-2" satisfied condition "running and ready" +E0307 03:54:22.101695 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:54:22.101695 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:54:23.126975 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:54:23.126975 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:54:24.147616 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:54:24.147616 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:54:25.168293 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 
inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:54:25.168293 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:54:26.189393 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:54:26.189393 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:54:28.240783 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:54:28.240783 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:54:30.291885 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:54:30.291885 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace 
\"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:54:31.315203 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:54:31.315203 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:54:32.343307 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:54:32.343307 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:54:33.369350 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:54:33.369350 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:54:34.390060 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace 
\"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:54:34.390060 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:54:35.411693 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:54:35.411693 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:54:36.438538 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:54:36.438538 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:54:40.536392 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:54:40.536392 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace 
\"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:54:41.566433 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:54:41.566433 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:54:42.607075 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:54:42.607075 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:54:43.334718 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:54:43.334718 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:54:45.381953 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace 
\"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:54:45.381953 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:54:46.404881 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:54:46.404881 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:54:47.428488 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:54:47.428488 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:54:48.450949 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:54:48.450949 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace 
\"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:54:50.492893 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:54:50.492893 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:54:51.519322 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:54:51.519322 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:54:52.542829 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:54:52.542829 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:54:53.563357 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace 
\"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:54:53.563357 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:54:55.358585 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:54:55.358585 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:54:56.390085 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:54:56.390085 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:54:58.433254 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:54:58.433254 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace 
\"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:54:59.459141 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:54:59.459141 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:55:01.503071 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:55:01.503071 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:55:02.530235 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:55:02.530235 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:55:03.552977 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace 
\"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:55:03.552977 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:55:05.336335 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:55:05.336335 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:55:07.380469 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:55:07.380469 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:55:08.409450 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:55:08.409450 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace 
\"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:55:11.484842 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:55:11.484842 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:55:12.510131 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:55:12.510131 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:55:13.539342 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:55:13.539342 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:55:14.569919 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace 
\"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:55:14.569919 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:55:15.600162 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:55:15.600162 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:55:17.363048 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:55:17.363048 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:55:22.481592 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:55:22.481592 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace 
\"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:55:24.523538 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:55:24.523538 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:55:25.553894 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:55:25.553894 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:55:27.336731 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:55:27.336731 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:55:28.357766 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace 
\"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:55:28.357766 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:55:29.379599 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:55:29.379599 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:55:30.401782 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:55:30.401782 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:55:32.447744 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:55:32.447744 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace 
\"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:55:33.468887 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:55:33.468887 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:55:34.491516 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:55:34.491516 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:55:35.515103 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:55:35.515103 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:55:36.535622 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace 
\"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:55:36.535622 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:55:37.576590 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:55:37.576590 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:55:38.601185 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:55:38.601185 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:55:39.624880 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:55:39.624880 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace 
\"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:55:40.647784 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:55:40.647784 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:55:44.740370 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:55:44.740370 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:55:45.760554 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:55:45.760554 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:55:46.782091 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace 
\"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:55:46.782091 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:55:47.804387 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +E0307 03:55:47.804387 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " +Mar 7 03:55:47.804: INFO: MetricsGrabber failed grab metrics. Skipping metrics gathering. 
+Mar 7 03:55:47.804: INFO: Deleting pod "simpletest-rc-to-be-deleted-24ds7" in namespace "gc-8924" +Mar 7 03:55:47.815: INFO: Deleting pod "simpletest-rc-to-be-deleted-26mdg" in namespace "gc-8924" +Mar 7 03:55:47.857: INFO: Deleting pod "simpletest-rc-to-be-deleted-29ll5" in namespace "gc-8924" +Mar 7 03:55:47.881: INFO: Deleting pod "simpletest-rc-to-be-deleted-2f2lp" in namespace "gc-8924" +Mar 7 03:55:47.903: INFO: Deleting pod "simpletest-rc-to-be-deleted-2jtcq" in namespace "gc-8924" +Mar 7 03:55:47.917: INFO: Deleting pod "simpletest-rc-to-be-deleted-2zmn9" in namespace "gc-8924" +Mar 7 03:55:47.937: INFO: Deleting pod "simpletest-rc-to-be-deleted-4ckdw" in namespace "gc-8924" +Mar 7 03:55:47.960: INFO: Deleting pod "simpletest-rc-to-be-deleted-4j4lb" in namespace "gc-8924" +Mar 7 03:55:47.992: INFO: Deleting pod "simpletest-rc-to-be-deleted-52rfc" in namespace "gc-8924" +Mar 7 03:55:48.028: INFO: Deleting pod "simpletest-rc-to-be-deleted-5bwhj" in namespace "gc-8924" +Mar 7 03:55:48.051: INFO: Deleting pod "simpletest-rc-to-be-deleted-5f2f7" in namespace "gc-8924" +Mar 7 03:55:48.077: INFO: Deleting pod "simpletest-rc-to-be-deleted-5j7zz" in namespace "gc-8924" +Mar 7 03:55:48.104: INFO: Deleting pod "simpletest-rc-to-be-deleted-5mfpj" in namespace "gc-8924" +Mar 7 03:55:48.153: INFO: Deleting pod "simpletest-rc-to-be-deleted-5pgpj" in namespace "gc-8924" +Mar 7 03:55:48.177: INFO: Deleting pod "simpletest-rc-to-be-deleted-5zslc" in namespace "gc-8924" +Mar 7 03:55:48.221: INFO: Deleting pod "simpletest-rc-to-be-deleted-68hgd" in namespace "gc-8924" +Mar 7 03:55:48.249: INFO: Deleting pod "simpletest-rc-to-be-deleted-68pz8" in namespace "gc-8924" +Mar 7 03:55:48.295: INFO: Deleting pod "simpletest-rc-to-be-deleted-6jx9r" in namespace "gc-8924" +Mar 7 03:55:48.330: INFO: Deleting pod "simpletest-rc-to-be-deleted-6krw8" in namespace "gc-8924" +Mar 7 03:55:48.369: INFO: Deleting pod "simpletest-rc-to-be-deleted-6txfc" in namespace "gc-8924" +Mar 7 03:55:48.429: INFO: Deleting pod "simpletest-rc-to-be-deleted-6vslz" in namespace "gc-8924" +Mar 7 03:55:48.486: INFO: Deleting pod "simpletest-rc-to-be-deleted-6zjnk" in namespace "gc-8924" +Mar 7 03:55:48.521: INFO: Deleting pod "simpletest-rc-to-be-deleted-78tz2" in namespace "gc-8924" +Mar 7 03:55:48.565: INFO: Deleting pod "simpletest-rc-to-be-deleted-7jkgj" in namespace "gc-8924" +Mar 7 03:55:48.596: INFO: Deleting pod "simpletest-rc-to-be-deleted-96ttl" in namespace "gc-8924" +Mar 7 03:55:48.640: INFO: Deleting pod "simpletest-rc-to-be-deleted-98vkn" in namespace "gc-8924" +Mar 7 03:55:48.680: INFO: Deleting pod "simpletest-rc-to-be-deleted-99g54" in namespace "gc-8924" +Mar 7 03:55:48.722: INFO: Deleting pod "simpletest-rc-to-be-deleted-9d44n" in namespace "gc-8924" +Mar 7 03:55:48.740: INFO: Deleting pod "simpletest-rc-to-be-deleted-9g9rr" in namespace "gc-8924" +Mar 7 03:55:48.756: INFO: Deleting pod "simpletest-rc-to-be-deleted-9xd5z" in namespace "gc-8924" +Mar 7 03:55:48.808: INFO: Deleting pod "simpletest-rc-to-be-deleted-b7g4t" in namespace "gc-8924" +Mar 7 03:55:48.842: INFO: Deleting pod "simpletest-rc-to-be-deleted-bcskt" in namespace "gc-8924" +Mar 7 03:55:48.863: INFO: Deleting pod "simpletest-rc-to-be-deleted-bh8gh" in namespace "gc-8924" +Mar 7 03:55:48.878: INFO: Deleting pod "simpletest-rc-to-be-deleted-bjlsh" in namespace "gc-8924" +Mar 7 03:55:48.918: INFO: Deleting pod "simpletest-rc-to-be-deleted-bt6fp" in namespace "gc-8924" +Mar 7 03:55:48.964: INFO: Deleting pod "simpletest-rc-to-be-deleted-bthpb" in namespace 
"gc-8924" +Mar 7 03:55:48.989: INFO: Deleting pod "simpletest-rc-to-be-deleted-c7gck" in namespace "gc-8924" +Mar 7 03:55:49.030: INFO: Deleting pod "simpletest-rc-to-be-deleted-c97hd" in namespace "gc-8924" +Mar 7 03:55:49.063: INFO: Deleting pod "simpletest-rc-to-be-deleted-cb6xq" in namespace "gc-8924" +Mar 7 03:55:49.107: INFO: Deleting pod "simpletest-rc-to-be-deleted-cgk8w" in namespace "gc-8924" +Mar 7 03:55:49.133: INFO: Deleting pod "simpletest-rc-to-be-deleted-d98c5" in namespace "gc-8924" +Mar 7 03:55:49.155: INFO: Deleting pod "simpletest-rc-to-be-deleted-dhd2d" in namespace "gc-8924" +Mar 7 03:55:49.174: INFO: Deleting pod "simpletest-rc-to-be-deleted-dqtfh" in namespace "gc-8924" +Mar 7 03:55:49.227: INFO: Deleting pod "simpletest-rc-to-be-deleted-dw87l" in namespace "gc-8924" +Mar 7 03:55:49.258: INFO: Deleting pod "simpletest-rc-to-be-deleted-dxqgm" in namespace "gc-8924" +Mar 7 03:55:49.284: INFO: Deleting pod "simpletest-rc-to-be-deleted-fnddw" in namespace "gc-8924" +Mar 7 03:55:49.319: INFO: Deleting pod "simpletest-rc-to-be-deleted-g9bll" in namespace "gc-8924" +Mar 7 03:55:49.343: INFO: Deleting pod "simpletest-rc-to-be-deleted-gcrmp" in namespace "gc-8924" +Mar 7 03:55:49.376: INFO: Deleting pod "simpletest-rc-to-be-deleted-gwwl2" in namespace "gc-8924" +Mar 7 03:55:49.424: INFO: Deleting pod "simpletest-rc-to-be-deleted-hc8d6" in namespace "gc-8924" +[AfterEach] [sig-api-machinery] Garbage collector + test/e2e/framework/framework.go:187 +Mar 7 03:55:49.459: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "gc-8924" for this suite. 03/07/23 03:55:49.468 +{"msg":"PASSED [sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]","completed":329,"skipped":6099,"failed":0} +------------------------------ +• [SLOW TEST] [104.433 seconds] +[sig-api-machinery] Garbage collector +test/e2e/apimachinery/framework.go:23 + should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] + test/e2e/apimachinery/garbage_collector.go:735 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-api-machinery] Garbage collector + test/e2e/framework/framework.go:186 + STEP: Creating a kubernetes client 03/07/23 03:54:05.048 + Mar 7 03:54:05.048: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 + STEP: Building a namespace api object, basename gc 03/07/23 03:54:05.049 + STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 03:54:05.063 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 03:54:05.066 + [It] should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] + test/e2e/apimachinery/garbage_collector.go:735 + STEP: create the rc1 03/07/23 03:54:05.071 + STEP: create the rc2 03/07/23 03:54:05.075 + STEP: set half of pods created by rc simpletest-rc-to-be-deleted to have rc simpletest-rc-to-stay as owner as well 03/07/23 03:54:10.093 + STEP: delete the rc simpletest-rc-to-be-deleted 03/07/23 03:54:11.004 + STEP: wait for the rc to be deleted 03/07/23 03:54:11.029 + Mar 7 03:54:16.045: INFO: 73 pods remaining + Mar 7 03:54:16.045: INFO: 73 pods has nil DeletionTimestamp + Mar 7 03:54:16.045: INFO: + STEP: Gathering metrics 03/07/23 03:54:21.041 + Mar 7 03:54:21.056: INFO: Waiting up to 5m0s for pod "kube-controller-manager-node-2" in namespace "kube-system" to be 
"running and ready" + Mar 7 03:54:21.059: INFO: Pod "kube-controller-manager-node-2": Phase="Running", Reason="", readiness=true. Elapsed: 2.673619ms + Mar 7 03:54:21.059: INFO: The phase of Pod kube-controller-manager-node-2 is Running (Ready = true) + Mar 7 03:54:21.059: INFO: Pod "kube-controller-manager-node-2" satisfied condition "running and ready" + E0307 03:54:22.101695 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " + E0307 03:54:23.126975 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " + E0307 03:54:24.147616 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " + E0307 03:54:25.168293 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " + E0307 03:54:26.189393 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " + E0307 03:54:28.240783 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " + E0307 03:54:30.291885 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 
3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " + E0307 03:54:31.315203 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " + E0307 03:54:32.343307 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " + E0307 03:54:33.369350 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " + E0307 03:54:34.390060 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " + E0307 03:54:35.411693 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " + E0307 03:54:36.438538 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " + E0307 03:54:40.536392 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 
3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " + E0307 03:54:41.566433 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " + E0307 03:54:42.607075 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " + E0307 03:54:43.334718 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " + E0307 03:54:45.381953 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " + E0307 03:54:46.404881 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " + E0307 03:54:47.428488 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " + E0307 03:54:48.450949 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 
3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " + E0307 03:54:50.492893 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " + E0307 03:54:51.519322 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " + E0307 03:54:52.542829 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " + E0307 03:54:53.563357 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " + E0307 03:54:55.358585 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " + E0307 03:54:56.390085 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " + E0307 03:54:58.433254 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 
3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " + E0307 03:54:59.459141 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " + E0307 03:55:01.503071 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " + E0307 03:55:02.530235 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " + E0307 03:55:03.552977 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " + E0307 03:55:05.336335 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " + E0307 03:55:07.380469 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " + E0307 03:55:08.409450 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 
3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " + E0307 03:55:11.484842 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " + E0307 03:55:12.510131 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " + E0307 03:55:13.539342 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " + E0307 03:55:14.569919 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " + E0307 03:55:15.600162 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " + E0307 03:55:17.363048 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " + E0307 03:55:22.481592 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 
3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " + E0307 03:55:24.523538 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " + E0307 03:55:25.553894 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " + E0307 03:55:27.336731 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " + E0307 03:55:28.357766 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " + E0307 03:55:29.379599 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " + E0307 03:55:30.401782 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " + E0307 03:55:32.447744 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 
3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " + E0307 03:55:33.468887 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " + E0307 03:55:34.491516 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " + E0307 03:55:35.515103 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " + E0307 03:55:36.535622 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " + E0307 03:55:37.576590 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " + E0307 03:55:38.601185 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " + E0307 03:55:39.624880 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 
3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " + E0307 03:55:40.647784 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " + E0307 03:55:44.740370 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " + E0307 03:55:45.760554 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " + E0307 03:55:46.782091 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " + E0307 03:55:47.804387 22 dial.go:124] "an error occurred connecting to the remote port" err="error forwarding port 10257 to pod 3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244, uid : failed to execute portforward in network namespace \"host\": failed to connect to localhost:10257 inside namespace \"3822c00d6079131095c3de207b138eec5a9a1a85cfa005efc9bbae11851a8244\", IPv4: dial tcp4 127.0.0.1:10257: connect: connection refused IPv6 dial tcp6 [::1]:10257: connect: connection refused " + Mar 7 03:55:47.804: INFO: MetricsGrabber failed grab metrics. Skipping metrics gathering. 
+ Mar 7 03:55:47.804: INFO: Deleting pod "simpletest-rc-to-be-deleted-24ds7" in namespace "gc-8924" + Mar 7 03:55:47.815: INFO: Deleting pod "simpletest-rc-to-be-deleted-26mdg" in namespace "gc-8924" + Mar 7 03:55:47.857: INFO: Deleting pod "simpletest-rc-to-be-deleted-29ll5" in namespace "gc-8924" + Mar 7 03:55:47.881: INFO: Deleting pod "simpletest-rc-to-be-deleted-2f2lp" in namespace "gc-8924" + Mar 7 03:55:47.903: INFO: Deleting pod "simpletest-rc-to-be-deleted-2jtcq" in namespace "gc-8924" + Mar 7 03:55:47.917: INFO: Deleting pod "simpletest-rc-to-be-deleted-2zmn9" in namespace "gc-8924" + Mar 7 03:55:47.937: INFO: Deleting pod "simpletest-rc-to-be-deleted-4ckdw" in namespace "gc-8924" + Mar 7 03:55:47.960: INFO: Deleting pod "simpletest-rc-to-be-deleted-4j4lb" in namespace "gc-8924" + Mar 7 03:55:47.992: INFO: Deleting pod "simpletest-rc-to-be-deleted-52rfc" in namespace "gc-8924" + Mar 7 03:55:48.028: INFO: Deleting pod "simpletest-rc-to-be-deleted-5bwhj" in namespace "gc-8924" + Mar 7 03:55:48.051: INFO: Deleting pod "simpletest-rc-to-be-deleted-5f2f7" in namespace "gc-8924" + Mar 7 03:55:48.077: INFO: Deleting pod "simpletest-rc-to-be-deleted-5j7zz" in namespace "gc-8924" + Mar 7 03:55:48.104: INFO: Deleting pod "simpletest-rc-to-be-deleted-5mfpj" in namespace "gc-8924" + Mar 7 03:55:48.153: INFO: Deleting pod "simpletest-rc-to-be-deleted-5pgpj" in namespace "gc-8924" + Mar 7 03:55:48.177: INFO: Deleting pod "simpletest-rc-to-be-deleted-5zslc" in namespace "gc-8924" + Mar 7 03:55:48.221: INFO: Deleting pod "simpletest-rc-to-be-deleted-68hgd" in namespace "gc-8924" + Mar 7 03:55:48.249: INFO: Deleting pod "simpletest-rc-to-be-deleted-68pz8" in namespace "gc-8924" + Mar 7 03:55:48.295: INFO: Deleting pod "simpletest-rc-to-be-deleted-6jx9r" in namespace "gc-8924" + Mar 7 03:55:48.330: INFO: Deleting pod "simpletest-rc-to-be-deleted-6krw8" in namespace "gc-8924" + Mar 7 03:55:48.369: INFO: Deleting pod "simpletest-rc-to-be-deleted-6txfc" in namespace "gc-8924" + Mar 7 03:55:48.429: INFO: Deleting pod "simpletest-rc-to-be-deleted-6vslz" in namespace "gc-8924" + Mar 7 03:55:48.486: INFO: Deleting pod "simpletest-rc-to-be-deleted-6zjnk" in namespace "gc-8924" + Mar 7 03:55:48.521: INFO: Deleting pod "simpletest-rc-to-be-deleted-78tz2" in namespace "gc-8924" + Mar 7 03:55:48.565: INFO: Deleting pod "simpletest-rc-to-be-deleted-7jkgj" in namespace "gc-8924" + Mar 7 03:55:48.596: INFO: Deleting pod "simpletest-rc-to-be-deleted-96ttl" in namespace "gc-8924" + Mar 7 03:55:48.640: INFO: Deleting pod "simpletest-rc-to-be-deleted-98vkn" in namespace "gc-8924" + Mar 7 03:55:48.680: INFO: Deleting pod "simpletest-rc-to-be-deleted-99g54" in namespace "gc-8924" + Mar 7 03:55:48.722: INFO: Deleting pod "simpletest-rc-to-be-deleted-9d44n" in namespace "gc-8924" + Mar 7 03:55:48.740: INFO: Deleting pod "simpletest-rc-to-be-deleted-9g9rr" in namespace "gc-8924" + Mar 7 03:55:48.756: INFO: Deleting pod "simpletest-rc-to-be-deleted-9xd5z" in namespace "gc-8924" + Mar 7 03:55:48.808: INFO: Deleting pod "simpletest-rc-to-be-deleted-b7g4t" in namespace "gc-8924" + Mar 7 03:55:48.842: INFO: Deleting pod "simpletest-rc-to-be-deleted-bcskt" in namespace "gc-8924" + Mar 7 03:55:48.863: INFO: Deleting pod "simpletest-rc-to-be-deleted-bh8gh" in namespace "gc-8924" + Mar 7 03:55:48.878: INFO: Deleting pod "simpletest-rc-to-be-deleted-bjlsh" in namespace "gc-8924" + Mar 7 03:55:48.918: INFO: Deleting pod "simpletest-rc-to-be-deleted-bt6fp" in namespace "gc-8924" + Mar 7 03:55:48.964: INFO: Deleting pod 
"simpletest-rc-to-be-deleted-bthpb" in namespace "gc-8924" + Mar 7 03:55:48.989: INFO: Deleting pod "simpletest-rc-to-be-deleted-c7gck" in namespace "gc-8924" + Mar 7 03:55:49.030: INFO: Deleting pod "simpletest-rc-to-be-deleted-c97hd" in namespace "gc-8924" + Mar 7 03:55:49.063: INFO: Deleting pod "simpletest-rc-to-be-deleted-cb6xq" in namespace "gc-8924" + Mar 7 03:55:49.107: INFO: Deleting pod "simpletest-rc-to-be-deleted-cgk8w" in namespace "gc-8924" + Mar 7 03:55:49.133: INFO: Deleting pod "simpletest-rc-to-be-deleted-d98c5" in namespace "gc-8924" + Mar 7 03:55:49.155: INFO: Deleting pod "simpletest-rc-to-be-deleted-dhd2d" in namespace "gc-8924" + Mar 7 03:55:49.174: INFO: Deleting pod "simpletest-rc-to-be-deleted-dqtfh" in namespace "gc-8924" + Mar 7 03:55:49.227: INFO: Deleting pod "simpletest-rc-to-be-deleted-dw87l" in namespace "gc-8924" + Mar 7 03:55:49.258: INFO: Deleting pod "simpletest-rc-to-be-deleted-dxqgm" in namespace "gc-8924" + Mar 7 03:55:49.284: INFO: Deleting pod "simpletest-rc-to-be-deleted-fnddw" in namespace "gc-8924" + Mar 7 03:55:49.319: INFO: Deleting pod "simpletest-rc-to-be-deleted-g9bll" in namespace "gc-8924" + Mar 7 03:55:49.343: INFO: Deleting pod "simpletest-rc-to-be-deleted-gcrmp" in namespace "gc-8924" + Mar 7 03:55:49.376: INFO: Deleting pod "simpletest-rc-to-be-deleted-gwwl2" in namespace "gc-8924" + Mar 7 03:55:49.424: INFO: Deleting pod "simpletest-rc-to-be-deleted-hc8d6" in namespace "gc-8924" + [AfterEach] [sig-api-machinery] Garbage collector + test/e2e/framework/framework.go:187 + Mar 7 03:55:49.459: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + STEP: Destroying namespace "gc-8924" for this suite. 03/07/23 03:55:49.468 + << End Captured GinkgoWriter Output +------------------------------ +[sig-node] Ephemeral Containers [NodeConformance] + will start an ephemeral container in an existing pod [Conformance] + test/e2e/common/node/ephemeral_containers.go:45 +[BeforeEach] [sig-node] Ephemeral Containers [NodeConformance] + test/e2e/framework/framework.go:186 +STEP: Creating a kubernetes client 03/07/23 03:55:49.482 +Mar 7 03:55:49.482: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 +STEP: Building a namespace api object, basename ephemeral-containers-test 03/07/23 03:55:49.483 +STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 03:55:49.532 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 03:55:49.536 +[BeforeEach] [sig-node] Ephemeral Containers [NodeConformance] + test/e2e/common/node/ephemeral_containers.go:38 +[It] will start an ephemeral container in an existing pod [Conformance] + test/e2e/common/node/ephemeral_containers.go:45 +STEP: creating a target pod 03/07/23 03:55:49.543 +Mar 7 03:55:49.563: INFO: Waiting up to 5m0s for pod "ephemeral-containers-target-pod" in namespace "ephemeral-containers-test-4171" to be "running and ready" +Mar 7 03:55:49.574: INFO: Pod "ephemeral-containers-target-pod": Phase="Pending", Reason="", readiness=false. Elapsed: 11.094162ms +Mar 7 03:55:49.574: INFO: The phase of Pod ephemeral-containers-target-pod is Pending, waiting for it to be Running (with Ready = true) +Mar 7 03:55:51.580: INFO: Pod "ephemeral-containers-target-pod": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.017454199s +Mar 7 03:55:51.580: INFO: The phase of Pod ephemeral-containers-target-pod is Pending, waiting for it to be Running (with Ready = true) +Mar 7 03:55:53.577: INFO: Pod "ephemeral-containers-target-pod": Phase="Running", Reason="", readiness=true. Elapsed: 4.014387505s +Mar 7 03:55:53.577: INFO: The phase of Pod ephemeral-containers-target-pod is Running (Ready = true) +Mar 7 03:55:53.577: INFO: Pod "ephemeral-containers-target-pod" satisfied condition "running and ready" +STEP: adding an ephemeral container 03/07/23 03:55:53.579 +Mar 7 03:55:53.594: INFO: Waiting up to 1m0s for pod "ephemeral-containers-target-pod" in namespace "ephemeral-containers-test-4171" to be "container debugger running" +Mar 7 03:55:53.597: INFO: Pod "ephemeral-containers-target-pod": Phase="Running", Reason="", readiness=true. Elapsed: 2.96427ms +Mar 7 03:55:55.600: INFO: Pod "ephemeral-containers-target-pod": Phase="Running", Reason="", readiness=true. Elapsed: 2.006003213s +Mar 7 03:55:55.600: INFO: Pod "ephemeral-containers-target-pod" satisfied condition "container debugger running" +STEP: checking pod container endpoints 03/07/23 03:55:55.6 +Mar 7 03:55:55.600: INFO: ExecWithOptions {Command:[/bin/echo marco] Namespace:ephemeral-containers-test-4171 PodName:ephemeral-containers-target-pod ContainerName:debugger Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} +Mar 7 03:55:55.600: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 +Mar 7 03:55:55.600: INFO: ExecWithOptions: Clientset creation +Mar 7 03:55:55.601: INFO: ExecWithOptions: execute(POST https://10.96.0.1:443/api/v1/namespaces/ephemeral-containers-test-4171/pods/ephemeral-containers-target-pod/exec?command=%2Fbin%2Fecho&command=marco&container=debugger&container=debugger&stderr=true&stdout=true) +Mar 7 03:55:55.668: INFO: Exec stderr: "" +[AfterEach] [sig-node] Ephemeral Containers [NodeConformance] + test/e2e/framework/framework.go:187 +Mar 7 03:55:55.683: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "ephemeral-containers-test-4171" for this suite. 
03/07/23 03:55:55.691 +{"msg":"PASSED [sig-node] Ephemeral Containers [NodeConformance] will start an ephemeral container in an existing pod [Conformance]","completed":330,"skipped":6099,"failed":0} +------------------------------ +• [SLOW TEST] [6.214 seconds] +[sig-node] Ephemeral Containers [NodeConformance] +test/e2e/common/node/framework.go:23 + will start an ephemeral container in an existing pod [Conformance] + test/e2e/common/node/ephemeral_containers.go:45 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-node] Ephemeral Containers [NodeConformance] + test/e2e/framework/framework.go:186 + STEP: Creating a kubernetes client 03/07/23 03:55:49.482 + Mar 7 03:55:49.482: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 + STEP: Building a namespace api object, basename ephemeral-containers-test 03/07/23 03:55:49.483 + STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 03:55:49.532 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 03:55:49.536 + [BeforeEach] [sig-node] Ephemeral Containers [NodeConformance] + test/e2e/common/node/ephemeral_containers.go:38 + [It] will start an ephemeral container in an existing pod [Conformance] + test/e2e/common/node/ephemeral_containers.go:45 + STEP: creating a target pod 03/07/23 03:55:49.543 + Mar 7 03:55:49.563: INFO: Waiting up to 5m0s for pod "ephemeral-containers-target-pod" in namespace "ephemeral-containers-test-4171" to be "running and ready" + Mar 7 03:55:49.574: INFO: Pod "ephemeral-containers-target-pod": Phase="Pending", Reason="", readiness=false. Elapsed: 11.094162ms + Mar 7 03:55:49.574: INFO: The phase of Pod ephemeral-containers-target-pod is Pending, waiting for it to be Running (with Ready = true) + Mar 7 03:55:51.580: INFO: Pod "ephemeral-containers-target-pod": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017454199s + Mar 7 03:55:51.580: INFO: The phase of Pod ephemeral-containers-target-pod is Pending, waiting for it to be Running (with Ready = true) + Mar 7 03:55:53.577: INFO: Pod "ephemeral-containers-target-pod": Phase="Running", Reason="", readiness=true. Elapsed: 4.014387505s + Mar 7 03:55:53.577: INFO: The phase of Pod ephemeral-containers-target-pod is Running (Ready = true) + Mar 7 03:55:53.577: INFO: Pod "ephemeral-containers-target-pod" satisfied condition "running and ready" + STEP: adding an ephemeral container 03/07/23 03:55:53.579 + Mar 7 03:55:53.594: INFO: Waiting up to 1m0s for pod "ephemeral-containers-target-pod" in namespace "ephemeral-containers-test-4171" to be "container debugger running" + Mar 7 03:55:53.597: INFO: Pod "ephemeral-containers-target-pod": Phase="Running", Reason="", readiness=true. Elapsed: 2.96427ms + Mar 7 03:55:55.600: INFO: Pod "ephemeral-containers-target-pod": Phase="Running", Reason="", readiness=true. 
Elapsed: 2.006003213s + Mar 7 03:55:55.600: INFO: Pod "ephemeral-containers-target-pod" satisfied condition "container debugger running" + STEP: checking pod container endpoints 03/07/23 03:55:55.6 + Mar 7 03:55:55.600: INFO: ExecWithOptions {Command:[/bin/echo marco] Namespace:ephemeral-containers-test-4171 PodName:ephemeral-containers-target-pod ContainerName:debugger Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} + Mar 7 03:55:55.600: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 + Mar 7 03:55:55.600: INFO: ExecWithOptions: Clientset creation + Mar 7 03:55:55.601: INFO: ExecWithOptions: execute(POST https://10.96.0.1:443/api/v1/namespaces/ephemeral-containers-test-4171/pods/ephemeral-containers-target-pod/exec?command=%2Fbin%2Fecho&command=marco&container=debugger&container=debugger&stderr=true&stdout=true) + Mar 7 03:55:55.668: INFO: Exec stderr: "" + [AfterEach] [sig-node] Ephemeral Containers [NodeConformance] + test/e2e/framework/framework.go:187 + Mar 7 03:55:55.683: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + STEP: Destroying namespace "ephemeral-containers-test-4171" for this suite. 03/07/23 03:55:55.691 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSSS +------------------------------ +[sig-node] ConfigMap + should be consumable via environment variable [NodeConformance] [Conformance] + test/e2e/common/node/configmap.go:44 +[BeforeEach] [sig-node] ConfigMap + test/e2e/framework/framework.go:186 +STEP: Creating a kubernetes client 03/07/23 03:55:55.697 +Mar 7 03:55:55.698: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 +STEP: Building a namespace api object, basename configmap 03/07/23 03:55:55.698 +STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 03:55:55.716 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 03:55:55.719 +[It] should be consumable via environment variable [NodeConformance] [Conformance] + test/e2e/common/node/configmap.go:44 +STEP: Creating configMap configmap-2868/configmap-test-e1e451f4-1625-4188-a217-56695c0e7206 03/07/23 03:55:55.721 +STEP: Creating a pod to test consume configMaps 03/07/23 03:55:55.724 +Mar 7 03:55:55.730: INFO: Waiting up to 5m0s for pod "pod-configmaps-467ff422-47ec-4592-b256-d1b84b3db2db" in namespace "configmap-2868" to be "Succeeded or Failed" +Mar 7 03:55:55.733: INFO: Pod "pod-configmaps-467ff422-47ec-4592-b256-d1b84b3db2db": Phase="Pending", Reason="", readiness=false. Elapsed: 2.551059ms +Mar 7 03:55:57.736: INFO: Pod "pod-configmaps-467ff422-47ec-4592-b256-d1b84b3db2db": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005389544s +Mar 7 03:55:59.737: INFO: Pod "pod-configmaps-467ff422-47ec-4592-b256-d1b84b3db2db": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.006296714s +STEP: Saw pod success 03/07/23 03:55:59.737 +Mar 7 03:55:59.737: INFO: Pod "pod-configmaps-467ff422-47ec-4592-b256-d1b84b3db2db" satisfied condition "Succeeded or Failed" +Mar 7 03:55:59.739: INFO: Trying to get logs from node node-2 pod pod-configmaps-467ff422-47ec-4592-b256-d1b84b3db2db container env-test: +STEP: delete the pod 03/07/23 03:55:59.744 +Mar 7 03:55:59.754: INFO: Waiting for pod pod-configmaps-467ff422-47ec-4592-b256-d1b84b3db2db to disappear +Mar 7 03:55:59.756: INFO: Pod pod-configmaps-467ff422-47ec-4592-b256-d1b84b3db2db no longer exists +[AfterEach] [sig-node] ConfigMap + test/e2e/framework/framework.go:187 +Mar 7 03:55:59.756: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "configmap-2868" for this suite. 03/07/23 03:55:59.759 +{"msg":"PASSED [sig-node] ConfigMap should be consumable via environment variable [NodeConformance] [Conformance]","completed":331,"skipped":6111,"failed":0} +------------------------------ +• [4.067 seconds] +[sig-node] ConfigMap +test/e2e/common/node/framework.go:23 + should be consumable via environment variable [NodeConformance] [Conformance] + test/e2e/common/node/configmap.go:44 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-node] ConfigMap + test/e2e/framework/framework.go:186 + STEP: Creating a kubernetes client 03/07/23 03:55:55.697 + Mar 7 03:55:55.698: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 + STEP: Building a namespace api object, basename configmap 03/07/23 03:55:55.698 + STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 03:55:55.716 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 03:55:55.719 + [It] should be consumable via environment variable [NodeConformance] [Conformance] + test/e2e/common/node/configmap.go:44 + STEP: Creating configMap configmap-2868/configmap-test-e1e451f4-1625-4188-a217-56695c0e7206 03/07/23 03:55:55.721 + STEP: Creating a pod to test consume configMaps 03/07/23 03:55:55.724 + Mar 7 03:55:55.730: INFO: Waiting up to 5m0s for pod "pod-configmaps-467ff422-47ec-4592-b256-d1b84b3db2db" in namespace "configmap-2868" to be "Succeeded or Failed" + Mar 7 03:55:55.733: INFO: Pod "pod-configmaps-467ff422-47ec-4592-b256-d1b84b3db2db": Phase="Pending", Reason="", readiness=false. Elapsed: 2.551059ms + Mar 7 03:55:57.736: INFO: Pod "pod-configmaps-467ff422-47ec-4592-b256-d1b84b3db2db": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005389544s + Mar 7 03:55:59.737: INFO: Pod "pod-configmaps-467ff422-47ec-4592-b256-d1b84b3db2db": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.006296714s + STEP: Saw pod success 03/07/23 03:55:59.737 + Mar 7 03:55:59.737: INFO: Pod "pod-configmaps-467ff422-47ec-4592-b256-d1b84b3db2db" satisfied condition "Succeeded or Failed" + Mar 7 03:55:59.739: INFO: Trying to get logs from node node-2 pod pod-configmaps-467ff422-47ec-4592-b256-d1b84b3db2db container env-test: + STEP: delete the pod 03/07/23 03:55:59.744 + Mar 7 03:55:59.754: INFO: Waiting for pod pod-configmaps-467ff422-47ec-4592-b256-d1b84b3db2db to disappear + Mar 7 03:55:59.756: INFO: Pod pod-configmaps-467ff422-47ec-4592-b256-d1b84b3db2db no longer exists + [AfterEach] [sig-node] ConfigMap + test/e2e/framework/framework.go:187 + Mar 7 03:55:59.756: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + STEP: Destroying namespace "configmap-2868" for this suite. 
03/07/23 03:55:59.759 + << End Captured GinkgoWriter Output +------------------------------ +S +------------------------------ +[sig-storage] EmptyDir volumes + should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/common/storage/empty_dir.go:96 +[BeforeEach] [sig-storage] EmptyDir volumes + test/e2e/framework/framework.go:186 +STEP: Creating a kubernetes client 03/07/23 03:55:59.764 +Mar 7 03:55:59.764: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 +STEP: Building a namespace api object, basename emptydir 03/07/23 03:55:59.765 +STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 03:55:59.777 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 03:55:59.779 +[It] should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/common/storage/empty_dir.go:96 +STEP: Creating a pod to test emptydir 0644 on tmpfs 03/07/23 03:55:59.781 +Mar 7 03:55:59.788: INFO: Waiting up to 5m0s for pod "pod-26969c68-aee9-410d-b0db-d02dae82d642" in namespace "emptydir-5272" to be "Succeeded or Failed" +Mar 7 03:55:59.791: INFO: Pod "pod-26969c68-aee9-410d-b0db-d02dae82d642": Phase="Pending", Reason="", readiness=false. Elapsed: 2.379106ms +Mar 7 03:56:01.795: INFO: Pod "pod-26969c68-aee9-410d-b0db-d02dae82d642": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006423058s +Mar 7 03:56:03.794: INFO: Pod "pod-26969c68-aee9-410d-b0db-d02dae82d642": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.006185264s +STEP: Saw pod success 03/07/23 03:56:03.794 +Mar 7 03:56:03.794: INFO: Pod "pod-26969c68-aee9-410d-b0db-d02dae82d642" satisfied condition "Succeeded or Failed" +Mar 7 03:56:03.797: INFO: Trying to get logs from node node-2 pod pod-26969c68-aee9-410d-b0db-d02dae82d642 container test-container: +STEP: delete the pod 03/07/23 03:56:03.801 +Mar 7 03:56:03.809: INFO: Waiting for pod pod-26969c68-aee9-410d-b0db-d02dae82d642 to disappear +Mar 7 03:56:03.811: INFO: Pod pod-26969c68-aee9-410d-b0db-d02dae82d642 no longer exists +[AfterEach] [sig-storage] EmptyDir volumes + test/e2e/framework/framework.go:187 +Mar 7 03:56:03.811: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "emptydir-5272" for this suite. 
03/07/23 03:56:03.815 +{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","completed":332,"skipped":6112,"failed":0} +------------------------------ +• [4.054 seconds] +[sig-storage] EmptyDir volumes +test/e2e/common/storage/framework.go:23 + should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/common/storage/empty_dir.go:96 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-storage] EmptyDir volumes + test/e2e/framework/framework.go:186 + STEP: Creating a kubernetes client 03/07/23 03:55:59.764 + Mar 7 03:55:59.764: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 + STEP: Building a namespace api object, basename emptydir 03/07/23 03:55:59.765 + STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 03:55:59.777 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 03:55:59.779 + [It] should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/common/storage/empty_dir.go:96 + STEP: Creating a pod to test emptydir 0644 on tmpfs 03/07/23 03:55:59.781 + Mar 7 03:55:59.788: INFO: Waiting up to 5m0s for pod "pod-26969c68-aee9-410d-b0db-d02dae82d642" in namespace "emptydir-5272" to be "Succeeded or Failed" + Mar 7 03:55:59.791: INFO: Pod "pod-26969c68-aee9-410d-b0db-d02dae82d642": Phase="Pending", Reason="", readiness=false. Elapsed: 2.379106ms + Mar 7 03:56:01.795: INFO: Pod "pod-26969c68-aee9-410d-b0db-d02dae82d642": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006423058s + Mar 7 03:56:03.794: INFO: Pod "pod-26969c68-aee9-410d-b0db-d02dae82d642": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.006185264s + STEP: Saw pod success 03/07/23 03:56:03.794 + Mar 7 03:56:03.794: INFO: Pod "pod-26969c68-aee9-410d-b0db-d02dae82d642" satisfied condition "Succeeded or Failed" + Mar 7 03:56:03.797: INFO: Trying to get logs from node node-2 pod pod-26969c68-aee9-410d-b0db-d02dae82d642 container test-container: + STEP: delete the pod 03/07/23 03:56:03.801 + Mar 7 03:56:03.809: INFO: Waiting for pod pod-26969c68-aee9-410d-b0db-d02dae82d642 to disappear + Mar 7 03:56:03.811: INFO: Pod pod-26969c68-aee9-410d-b0db-d02dae82d642 no longer exists + [AfterEach] [sig-storage] EmptyDir volumes + test/e2e/framework/framework.go:187 + Mar 7 03:56:03.811: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + STEP: Destroying namespace "emptydir-5272" for this suite. 
03/07/23 03:56:03.815 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSS +------------------------------ +[sig-api-machinery] Watchers + should be able to restart watching from the last resource version observed by the previous watch [Conformance] + test/e2e/apimachinery/watch.go:191 +[BeforeEach] [sig-api-machinery] Watchers + test/e2e/framework/framework.go:186 +STEP: Creating a kubernetes client 03/07/23 03:56:03.819 +Mar 7 03:56:03.819: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 +STEP: Building a namespace api object, basename watch 03/07/23 03:56:03.82 +STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 03:56:03.836 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 03:56:03.841 +[It] should be able to restart watching from the last resource version observed by the previous watch [Conformance] + test/e2e/apimachinery/watch.go:191 +STEP: creating a watch on configmaps 03/07/23 03:56:03.843 +STEP: creating a new configmap 03/07/23 03:56:03.844 +STEP: modifying the configmap once 03/07/23 03:56:03.848 +STEP: closing the watch once it receives two notifications 03/07/23 03:56:03.855 +Mar 7 03:56:03.856: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-8173 e81b6699-31b6-4d51-a1e1-138467af4b8b 78687 0 2023-03-07 03:56:03 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] [{e2e.test Update v1 2023-03-07 03:56:03 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} +Mar 7 03:56:03.856: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-8173 e81b6699-31b6-4d51-a1e1-138467af4b8b 78688 0 2023-03-07 03:56:03 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] [{e2e.test Update v1 2023-03-07 03:56:03 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} +STEP: modifying the configmap a second time, while the watch is closed 03/07/23 03:56:03.856 +STEP: creating a new watch on configmaps from the last resource version observed by the first watch 03/07/23 03:56:03.862 +STEP: deleting the configmap 03/07/23 03:56:03.863 +STEP: Expecting to observe notifications for all changes to the configmap since the first watch closed 03/07/23 03:56:03.867 +Mar 7 03:56:03.867: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-8173 e81b6699-31b6-4d51-a1e1-138467af4b8b 78689 0 2023-03-07 03:56:03 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] [{e2e.test Update v1 2023-03-07 03:56:03 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} +Mar 7 03:56:03.867: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-8173 e81b6699-31b6-4d51-a1e1-138467af4b8b 78690 0 2023-03-07 03:56:03 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] [{e2e.test Update v1 2023-03-07 03:56:03 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} +[AfterEach] [sig-api-machinery] Watchers + 
test/e2e/framework/framework.go:187 +Mar 7 03:56:03.867: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "watch-8173" for this suite. 03/07/23 03:56:03.87 +{"msg":"PASSED [sig-api-machinery] Watchers should be able to restart watching from the last resource version observed by the previous watch [Conformance]","completed":333,"skipped":6119,"failed":0} +------------------------------ +• [0.056 seconds] +[sig-api-machinery] Watchers +test/e2e/apimachinery/framework.go:23 + should be able to restart watching from the last resource version observed by the previous watch [Conformance] + test/e2e/apimachinery/watch.go:191 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-api-machinery] Watchers + test/e2e/framework/framework.go:186 + STEP: Creating a kubernetes client 03/07/23 03:56:03.819 + Mar 7 03:56:03.819: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 + STEP: Building a namespace api object, basename watch 03/07/23 03:56:03.82 + STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 03:56:03.836 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 03:56:03.841 + [It] should be able to restart watching from the last resource version observed by the previous watch [Conformance] + test/e2e/apimachinery/watch.go:191 + STEP: creating a watch on configmaps 03/07/23 03:56:03.843 + STEP: creating a new configmap 03/07/23 03:56:03.844 + STEP: modifying the configmap once 03/07/23 03:56:03.848 + STEP: closing the watch once it receives two notifications 03/07/23 03:56:03.855 + Mar 7 03:56:03.856: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-8173 e81b6699-31b6-4d51-a1e1-138467af4b8b 78687 0 2023-03-07 03:56:03 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] [{e2e.test Update v1 2023-03-07 03:56:03 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} + Mar 7 03:56:03.856: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-8173 e81b6699-31b6-4d51-a1e1-138467af4b8b 78688 0 2023-03-07 03:56:03 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] [{e2e.test Update v1 2023-03-07 03:56:03 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} + STEP: modifying the configmap a second time, while the watch is closed 03/07/23 03:56:03.856 + STEP: creating a new watch on configmaps from the last resource version observed by the first watch 03/07/23 03:56:03.862 + STEP: deleting the configmap 03/07/23 03:56:03.863 + STEP: Expecting to observe notifications for all changes to the configmap since the first watch closed 03/07/23 03:56:03.867 + Mar 7 03:56:03.867: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-8173 e81b6699-31b6-4d51-a1e1-138467af4b8b 78689 0 2023-03-07 03:56:03 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] [{e2e.test Update v1 2023-03-07 03:56:03 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} + Mar 7 03:56:03.867: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-8173 
e81b6699-31b6-4d51-a1e1-138467af4b8b 78690 0 2023-03-07 03:56:03 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] [{e2e.test Update v1 2023-03-07 03:56:03 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} + [AfterEach] [sig-api-machinery] Watchers + test/e2e/framework/framework.go:187 + Mar 7 03:56:03.867: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + STEP: Destroying namespace "watch-8173" for this suite. 03/07/23 03:56:03.87 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-apps] ReplicaSet + should serve a basic image on each replica with a public image [Conformance] + test/e2e/apps/replica_set.go:111 +[BeforeEach] [sig-apps] ReplicaSet + test/e2e/framework/framework.go:186 +STEP: Creating a kubernetes client 03/07/23 03:56:03.876 +Mar 7 03:56:03.876: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 +STEP: Building a namespace api object, basename replicaset 03/07/23 03:56:03.877 +STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 03:56:03.89 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 03:56:03.893 +[It] should serve a basic image on each replica with a public image [Conformance] + test/e2e/apps/replica_set.go:111 +Mar 7 03:56:03.894: INFO: Creating ReplicaSet my-hostname-basic-eab71ceb-5c5f-4005-98b3-f20a7052a05e +Mar 7 03:56:03.901: INFO: Pod name my-hostname-basic-eab71ceb-5c5f-4005-98b3-f20a7052a05e: Found 0 pods out of 1 +Mar 7 03:56:08.906: INFO: Pod name my-hostname-basic-eab71ceb-5c5f-4005-98b3-f20a7052a05e: Found 1 pods out of 1 +Mar 7 03:56:08.906: INFO: Ensuring a pod for ReplicaSet "my-hostname-basic-eab71ceb-5c5f-4005-98b3-f20a7052a05e" is running +Mar 7 03:56:08.906: INFO: Waiting up to 5m0s for pod "my-hostname-basic-eab71ceb-5c5f-4005-98b3-f20a7052a05e-ghv68" in namespace "replicaset-4600" to be "running" +Mar 7 03:56:08.909: INFO: Pod "my-hostname-basic-eab71ceb-5c5f-4005-98b3-f20a7052a05e-ghv68": Phase="Running", Reason="", readiness=true. 
Elapsed: 3.335446ms +Mar 7 03:56:08.909: INFO: Pod "my-hostname-basic-eab71ceb-5c5f-4005-98b3-f20a7052a05e-ghv68" satisfied condition "running" +Mar 7 03:56:08.909: INFO: Pod "my-hostname-basic-eab71ceb-5c5f-4005-98b3-f20a7052a05e-ghv68" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-03-07 03:56:03 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-03-07 03:56:05 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-03-07 03:56:05 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-03-07 03:56:03 +0000 UTC Reason: Message:}]) +Mar 7 03:56:08.909: INFO: Trying to dial the pod +Mar 7 03:56:13.919: INFO: Controller my-hostname-basic-eab71ceb-5c5f-4005-98b3-f20a7052a05e: Got expected result from replica 1 [my-hostname-basic-eab71ceb-5c5f-4005-98b3-f20a7052a05e-ghv68]: "my-hostname-basic-eab71ceb-5c5f-4005-98b3-f20a7052a05e-ghv68", 1 of 1 required successes so far +[AfterEach] [sig-apps] ReplicaSet + test/e2e/framework/framework.go:187 +Mar 7 03:56:13.919: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "replicaset-4600" for this suite. 03/07/23 03:56:13.922 +{"msg":"PASSED [sig-apps] ReplicaSet should serve a basic image on each replica with a public image [Conformance]","completed":334,"skipped":6138,"failed":0} +------------------------------ +• [SLOW TEST] [10.051 seconds] +[sig-apps] ReplicaSet +test/e2e/apps/framework.go:23 + should serve a basic image on each replica with a public image [Conformance] + test/e2e/apps/replica_set.go:111 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-apps] ReplicaSet + test/e2e/framework/framework.go:186 + STEP: Creating a kubernetes client 03/07/23 03:56:03.876 + Mar 7 03:56:03.876: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 + STEP: Building a namespace api object, basename replicaset 03/07/23 03:56:03.877 + STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 03:56:03.89 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 03:56:03.893 + [It] should serve a basic image on each replica with a public image [Conformance] + test/e2e/apps/replica_set.go:111 + Mar 7 03:56:03.894: INFO: Creating ReplicaSet my-hostname-basic-eab71ceb-5c5f-4005-98b3-f20a7052a05e + Mar 7 03:56:03.901: INFO: Pod name my-hostname-basic-eab71ceb-5c5f-4005-98b3-f20a7052a05e: Found 0 pods out of 1 + Mar 7 03:56:08.906: INFO: Pod name my-hostname-basic-eab71ceb-5c5f-4005-98b3-f20a7052a05e: Found 1 pods out of 1 + Mar 7 03:56:08.906: INFO: Ensuring a pod for ReplicaSet "my-hostname-basic-eab71ceb-5c5f-4005-98b3-f20a7052a05e" is running + Mar 7 03:56:08.906: INFO: Waiting up to 5m0s for pod "my-hostname-basic-eab71ceb-5c5f-4005-98b3-f20a7052a05e-ghv68" in namespace "replicaset-4600" to be "running" + Mar 7 03:56:08.909: INFO: Pod "my-hostname-basic-eab71ceb-5c5f-4005-98b3-f20a7052a05e-ghv68": Phase="Running", Reason="", readiness=true. 
Elapsed: 3.335446ms + Mar 7 03:56:08.909: INFO: Pod "my-hostname-basic-eab71ceb-5c5f-4005-98b3-f20a7052a05e-ghv68" satisfied condition "running" + Mar 7 03:56:08.909: INFO: Pod "my-hostname-basic-eab71ceb-5c5f-4005-98b3-f20a7052a05e-ghv68" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-03-07 03:56:03 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-03-07 03:56:05 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-03-07 03:56:05 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-03-07 03:56:03 +0000 UTC Reason: Message:}]) + Mar 7 03:56:08.909: INFO: Trying to dial the pod + Mar 7 03:56:13.919: INFO: Controller my-hostname-basic-eab71ceb-5c5f-4005-98b3-f20a7052a05e: Got expected result from replica 1 [my-hostname-basic-eab71ceb-5c5f-4005-98b3-f20a7052a05e-ghv68]: "my-hostname-basic-eab71ceb-5c5f-4005-98b3-f20a7052a05e-ghv68", 1 of 1 required successes so far + [AfterEach] [sig-apps] ReplicaSet + test/e2e/framework/framework.go:187 + Mar 7 03:56:13.919: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + STEP: Destroying namespace "replicaset-4600" for this suite. 03/07/23 03:56:13.922 + << End Captured GinkgoWriter Output +------------------------------ +SSS +------------------------------ +[sig-storage] EmptyDir volumes + should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/common/storage/empty_dir.go:166 +[BeforeEach] [sig-storage] EmptyDir volumes + test/e2e/framework/framework.go:186 +STEP: Creating a kubernetes client 03/07/23 03:56:13.927 +Mar 7 03:56:13.927: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 +STEP: Building a namespace api object, basename emptydir 03/07/23 03:56:13.928 +STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 03:56:13.941 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 03:56:13.943 +[It] should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/common/storage/empty_dir.go:166 +STEP: Creating a pod to test emptydir 0644 on node default medium 03/07/23 03:56:13.945 +Mar 7 03:56:13.951: INFO: Waiting up to 5m0s for pod "pod-d6f3fdba-618c-4d3e-a517-eed8b1878add" in namespace "emptydir-2333" to be "Succeeded or Failed" +Mar 7 03:56:13.954: INFO: Pod "pod-d6f3fdba-618c-4d3e-a517-eed8b1878add": Phase="Pending", Reason="", readiness=false. Elapsed: 3.201033ms +Mar 7 03:56:15.958: INFO: Pod "pod-d6f3fdba-618c-4d3e-a517-eed8b1878add": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00645875s +Mar 7 03:56:17.958: INFO: Pod "pod-d6f3fdba-618c-4d3e-a517-eed8b1878add": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.006903016s +STEP: Saw pod success 03/07/23 03:56:17.958 +Mar 7 03:56:17.958: INFO: Pod "pod-d6f3fdba-618c-4d3e-a517-eed8b1878add" satisfied condition "Succeeded or Failed" +Mar 7 03:56:17.961: INFO: Trying to get logs from node node-2 pod pod-d6f3fdba-618c-4d3e-a517-eed8b1878add container test-container: +STEP: delete the pod 03/07/23 03:56:17.965 +Mar 7 03:56:17.973: INFO: Waiting for pod pod-d6f3fdba-618c-4d3e-a517-eed8b1878add to disappear +Mar 7 03:56:17.975: INFO: Pod pod-d6f3fdba-618c-4d3e-a517-eed8b1878add no longer exists +[AfterEach] [sig-storage] EmptyDir volumes + test/e2e/framework/framework.go:187 +Mar 7 03:56:17.975: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "emptydir-2333" for this suite. 03/07/23 03:56:17.978 +{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","completed":335,"skipped":6141,"failed":0} +------------------------------ +• [4.056 seconds] +[sig-storage] EmptyDir volumes +test/e2e/common/storage/framework.go:23 + should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/common/storage/empty_dir.go:166 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-storage] EmptyDir volumes + test/e2e/framework/framework.go:186 + STEP: Creating a kubernetes client 03/07/23 03:56:13.927 + Mar 7 03:56:13.927: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 + STEP: Building a namespace api object, basename emptydir 03/07/23 03:56:13.928 + STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 03:56:13.941 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 03:56:13.943 + [It] should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/common/storage/empty_dir.go:166 + STEP: Creating a pod to test emptydir 0644 on node default medium 03/07/23 03:56:13.945 + Mar 7 03:56:13.951: INFO: Waiting up to 5m0s for pod "pod-d6f3fdba-618c-4d3e-a517-eed8b1878add" in namespace "emptydir-2333" to be "Succeeded or Failed" + Mar 7 03:56:13.954: INFO: Pod "pod-d6f3fdba-618c-4d3e-a517-eed8b1878add": Phase="Pending", Reason="", readiness=false. Elapsed: 3.201033ms + Mar 7 03:56:15.958: INFO: Pod "pod-d6f3fdba-618c-4d3e-a517-eed8b1878add": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00645875s + Mar 7 03:56:17.958: INFO: Pod "pod-d6f3fdba-618c-4d3e-a517-eed8b1878add": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.006903016s + STEP: Saw pod success 03/07/23 03:56:17.958 + Mar 7 03:56:17.958: INFO: Pod "pod-d6f3fdba-618c-4d3e-a517-eed8b1878add" satisfied condition "Succeeded or Failed" + Mar 7 03:56:17.961: INFO: Trying to get logs from node node-2 pod pod-d6f3fdba-618c-4d3e-a517-eed8b1878add container test-container: + STEP: delete the pod 03/07/23 03:56:17.965 + Mar 7 03:56:17.973: INFO: Waiting for pod pod-d6f3fdba-618c-4d3e-a517-eed8b1878add to disappear + Mar 7 03:56:17.975: INFO: Pod pod-d6f3fdba-618c-4d3e-a517-eed8b1878add no longer exists + [AfterEach] [sig-storage] EmptyDir volumes + test/e2e/framework/framework.go:187 + Mar 7 03:56:17.975: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + STEP: Destroying namespace "emptydir-2333" for this suite. 
03/07/23 03:56:17.978 + << End Captured GinkgoWriter Output +------------------------------ +SS +------------------------------ +[sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] + Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance] + test/e2e/apps/statefulset.go:585 +[BeforeEach] [sig-apps] StatefulSet + test/e2e/framework/framework.go:186 +STEP: Creating a kubernetes client 03/07/23 03:56:17.984 +Mar 7 03:56:17.984: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 +STEP: Building a namespace api object, basename statefulset 03/07/23 03:56:17.985 +STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 03:56:17.996 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 03:56:17.998 +[BeforeEach] [sig-apps] StatefulSet + test/e2e/apps/statefulset.go:96 +[BeforeEach] Basic StatefulSet functionality [StatefulSetBasic] + test/e2e/apps/statefulset.go:111 +STEP: Creating service test in namespace statefulset-9111 03/07/23 03:56:18 +[It] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance] + test/e2e/apps/statefulset.go:585 +STEP: Initializing watcher for selector baz=blah,foo=bar 03/07/23 03:56:18.004 +STEP: Creating stateful set ss in namespace statefulset-9111 03/07/23 03:56:18.007 +STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-9111 03/07/23 03:56:18.012 +Mar 7 03:56:18.015: INFO: Found 0 stateful pods, waiting for 1 +Mar 7 03:56:28.019: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true +STEP: Confirming that stateful set scale up will halt with unhealthy stateful pod 03/07/23 03:56:28.019 +Mar 7 03:56:28.022: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1093879902 --namespace=statefulset-9111 exec ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' +Mar 7 03:56:28.227: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n" +Mar 7 03:56:28.227: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" +Mar 7 03:56:28.227: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' + +Mar 7 03:56:28.230: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true +Mar 7 03:56:38.238: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false +Mar 7 03:56:38.238: INFO: Waiting for statefulset status.replicas updated to 0 +Mar 7 03:56:38.250: INFO: Verifying statefulset ss doesn't scale past 1 for another 9.999999811s +Mar 7 03:56:39.254: INFO: Verifying statefulset ss doesn't scale past 1 for another 8.996225169s +Mar 7 03:56:40.257: INFO: Verifying statefulset ss doesn't scale past 1 for another 7.992718925s +Mar 7 03:56:41.261: INFO: Verifying statefulset ss doesn't scale past 1 for another 6.989333988s +Mar 7 03:56:42.264: INFO: Verifying statefulset ss doesn't scale past 1 for another 5.985796395s +Mar 7 03:56:43.267: INFO: Verifying statefulset ss doesn't scale past 1 for another 4.983094063s +Mar 7 03:56:44.272: INFO: Verifying statefulset ss doesn't scale past 1 for another 3.978477501s +Mar 7 03:56:45.291: INFO: Verifying statefulset ss doesn't scale past 1 for another 2.974027487s +Mar 7 03:56:46.295: INFO: Verifying statefulset ss doesn't scale past 1 for another 1.954685742s +Mar 7 03:56:47.298: INFO: 
Verifying statefulset ss doesn't scale past 1 for another 951.419313ms +STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-9111 03/07/23 03:56:48.298 +Mar 7 03:56:48.315: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1093879902 --namespace=statefulset-9111 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' +Mar 7 03:56:48.513: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\n" +Mar 7 03:56:48.513: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" +Mar 7 03:56:48.513: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' + +Mar 7 03:56:48.515: INFO: Found 1 stateful pods, waiting for 3 +Mar 7 03:56:58.520: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true +Mar 7 03:56:58.520: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true +Mar 7 03:56:58.520: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true +STEP: Verifying that stateful set ss was scaled up in order 03/07/23 03:56:58.52 +STEP: Scale down will halt with unhealthy stateful pod 03/07/23 03:56:58.52 +Mar 7 03:56:58.525: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1093879902 --namespace=statefulset-9111 exec ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' +Mar 7 03:56:58.707: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n" +Mar 7 03:56:58.707: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" +Mar 7 03:56:58.707: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' + +Mar 7 03:56:58.707: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1093879902 --namespace=statefulset-9111 exec ss-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' +Mar 7 03:56:58.930: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n" +Mar 7 03:56:58.930: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" +Mar 7 03:56:58.930: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' + +Mar 7 03:56:58.930: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1093879902 --namespace=statefulset-9111 exec ss-2 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' +Mar 7 03:56:59.104: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n" +Mar 7 03:56:59.104: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" +Mar 7 03:56:59.104: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-2: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' + +Mar 7 03:56:59.104: INFO: Waiting for statefulset status.replicas updated to 0 +Mar 7 03:56:59.107: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 3 +Mar 7 03:57:09.113: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false +Mar 7 03:57:09.113: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false +Mar 7 03:57:09.113: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false 
+Mar 7 03:57:09.122: INFO: Verifying statefulset ss doesn't scale past 3 for another 9.999999809s +Mar 7 03:57:10.126: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.997159829s +Mar 7 03:57:11.129: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.993546283s +Mar 7 03:57:12.133: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.990092912s +Mar 7 03:57:13.142: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.986388909s +Mar 7 03:57:14.146: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.977196377s +Mar 7 03:57:15.150: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.973644889s +Mar 7 03:57:16.154: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.96908004s +Mar 7 03:57:17.157: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.965574099s +Mar 7 03:57:18.161: INFO: Verifying statefulset ss doesn't scale past 3 for another 961.696402ms +STEP: Scaling down stateful set ss to 0 replicas and waiting until none of pods will run in namespacestatefulset-9111 03/07/23 03:57:19.162 +Mar 7 03:57:19.166: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1093879902 --namespace=statefulset-9111 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' +Mar 7 03:57:19.352: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\n" +Mar 7 03:57:19.352: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" +Mar 7 03:57:19.352: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' + +Mar 7 03:57:19.352: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1093879902 --namespace=statefulset-9111 exec ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' +Mar 7 03:57:19.557: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\n" +Mar 7 03:57:19.557: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" +Mar 7 03:57:19.557: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' + +Mar 7 03:57:19.557: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1093879902 --namespace=statefulset-9111 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' +Mar 7 03:57:19.775: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\n" +Mar 7 03:57:19.775: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" +Mar 7 03:57:19.775: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-2: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' + +Mar 7 03:57:19.775: INFO: Scaling statefulset ss to 0 +STEP: Verifying that stateful set ss was scaled down in reverse order 03/07/23 03:57:29.788 +[AfterEach] Basic StatefulSet functionality [StatefulSetBasic] + test/e2e/apps/statefulset.go:122 +Mar 7 03:57:29.788: INFO: Deleting all statefulset in ns statefulset-9111 +Mar 7 03:57:29.791: INFO: Scaling statefulset ss to 0 +Mar 7 03:57:29.799: INFO: Waiting for statefulset status.replicas updated to 0 +Mar 7 03:57:29.801: INFO: Deleting statefulset ss +[AfterEach] [sig-apps] StatefulSet + test/e2e/framework/framework.go:187 +Mar 7 03:57:29.823: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "statefulset-9111" for this suite. 
03/07/23 03:57:29.827 +{"msg":"PASSED [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance]","completed":336,"skipped":6143,"failed":0} +------------------------------ +• [SLOW TEST] [71.850 seconds] +[sig-apps] StatefulSet +test/e2e/apps/framework.go:23 + Basic StatefulSet functionality [StatefulSetBasic] + test/e2e/apps/statefulset.go:101 + Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance] + test/e2e/apps/statefulset.go:585 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-apps] StatefulSet + test/e2e/framework/framework.go:186 + STEP: Creating a kubernetes client 03/07/23 03:56:17.984 + Mar 7 03:56:17.984: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 + STEP: Building a namespace api object, basename statefulset 03/07/23 03:56:17.985 + STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 03:56:17.996 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 03:56:17.998 + [BeforeEach] [sig-apps] StatefulSet + test/e2e/apps/statefulset.go:96 + [BeforeEach] Basic StatefulSet functionality [StatefulSetBasic] + test/e2e/apps/statefulset.go:111 + STEP: Creating service test in namespace statefulset-9111 03/07/23 03:56:18 + [It] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance] + test/e2e/apps/statefulset.go:585 + STEP: Initializing watcher for selector baz=blah,foo=bar 03/07/23 03:56:18.004 + STEP: Creating stateful set ss in namespace statefulset-9111 03/07/23 03:56:18.007 + STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-9111 03/07/23 03:56:18.012 + Mar 7 03:56:18.015: INFO: Found 0 stateful pods, waiting for 1 + Mar 7 03:56:28.019: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true + STEP: Confirming that stateful set scale up will halt with unhealthy stateful pod 03/07/23 03:56:28.019 + Mar 7 03:56:28.022: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1093879902 --namespace=statefulset-9111 exec ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' + Mar 7 03:56:28.227: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n" + Mar 7 03:56:28.227: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" + Mar 7 03:56:28.227: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' + + Mar 7 03:56:28.230: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true + Mar 7 03:56:38.238: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false + Mar 7 03:56:38.238: INFO: Waiting for statefulset status.replicas updated to 0 + Mar 7 03:56:38.250: INFO: Verifying statefulset ss doesn't scale past 1 for another 9.999999811s + Mar 7 03:56:39.254: INFO: Verifying statefulset ss doesn't scale past 1 for another 8.996225169s + Mar 7 03:56:40.257: INFO: Verifying statefulset ss doesn't scale past 1 for another 7.992718925s + Mar 7 03:56:41.261: INFO: Verifying statefulset ss doesn't scale past 1 for another 6.989333988s + Mar 7 03:56:42.264: INFO: Verifying statefulset ss doesn't scale past 1 for another 5.985796395s + Mar 7 03:56:43.267: INFO: Verifying statefulset ss doesn't scale past 1 for 
another 4.983094063s + Mar 7 03:56:44.272: INFO: Verifying statefulset ss doesn't scale past 1 for another 3.978477501s + Mar 7 03:56:45.291: INFO: Verifying statefulset ss doesn't scale past 1 for another 2.974027487s + Mar 7 03:56:46.295: INFO: Verifying statefulset ss doesn't scale past 1 for another 1.954685742s + Mar 7 03:56:47.298: INFO: Verifying statefulset ss doesn't scale past 1 for another 951.419313ms + STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-9111 03/07/23 03:56:48.298 + Mar 7 03:56:48.315: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1093879902 --namespace=statefulset-9111 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' + Mar 7 03:56:48.513: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\n" + Mar 7 03:56:48.513: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" + Mar 7 03:56:48.513: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' + + Mar 7 03:56:48.515: INFO: Found 1 stateful pods, waiting for 3 + Mar 7 03:56:58.520: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true + Mar 7 03:56:58.520: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true + Mar 7 03:56:58.520: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true + STEP: Verifying that stateful set ss was scaled up in order 03/07/23 03:56:58.52 + STEP: Scale down will halt with unhealthy stateful pod 03/07/23 03:56:58.52 + Mar 7 03:56:58.525: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1093879902 --namespace=statefulset-9111 exec ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' + Mar 7 03:56:58.707: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n" + Mar 7 03:56:58.707: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" + Mar 7 03:56:58.707: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' + + Mar 7 03:56:58.707: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1093879902 --namespace=statefulset-9111 exec ss-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' + Mar 7 03:56:58.930: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n" + Mar 7 03:56:58.930: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" + Mar 7 03:56:58.930: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' + + Mar 7 03:56:58.930: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1093879902 --namespace=statefulset-9111 exec ss-2 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' + Mar 7 03:56:59.104: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n" + Mar 7 03:56:59.104: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" + Mar 7 03:56:59.104: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-2: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' + + Mar 7 03:56:59.104: INFO: Waiting for statefulset status.replicas updated to 0 + Mar 7 03:56:59.107: INFO: Waiting for stateful set 
status.readyReplicas to become 0, currently 3 + Mar 7 03:57:09.113: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false + Mar 7 03:57:09.113: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false + Mar 7 03:57:09.113: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false + Mar 7 03:57:09.122: INFO: Verifying statefulset ss doesn't scale past 3 for another 9.999999809s + Mar 7 03:57:10.126: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.997159829s + Mar 7 03:57:11.129: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.993546283s + Mar 7 03:57:12.133: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.990092912s + Mar 7 03:57:13.142: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.986388909s + Mar 7 03:57:14.146: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.977196377s + Mar 7 03:57:15.150: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.973644889s + Mar 7 03:57:16.154: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.96908004s + Mar 7 03:57:17.157: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.965574099s + Mar 7 03:57:18.161: INFO: Verifying statefulset ss doesn't scale past 3 for another 961.696402ms + STEP: Scaling down stateful set ss to 0 replicas and waiting until none of pods will run in namespacestatefulset-9111 03/07/23 03:57:19.162 + Mar 7 03:57:19.166: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1093879902 --namespace=statefulset-9111 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' + Mar 7 03:57:19.352: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\n" + Mar 7 03:57:19.352: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" + Mar 7 03:57:19.352: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' + + Mar 7 03:57:19.352: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1093879902 --namespace=statefulset-9111 exec ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' + Mar 7 03:57:19.557: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\n" + Mar 7 03:57:19.557: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" + Mar 7 03:57:19.557: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' + + Mar 7 03:57:19.557: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1093879902 --namespace=statefulset-9111 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' + Mar 7 03:57:19.775: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\n" + Mar 7 03:57:19.775: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" + Mar 7 03:57:19.775: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-2: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' + + Mar 7 03:57:19.775: INFO: Scaling statefulset ss to 0 + STEP: Verifying that stateful set ss was scaled down in reverse order 03/07/23 03:57:29.788 + [AfterEach] Basic StatefulSet functionality [StatefulSetBasic] + test/e2e/apps/statefulset.go:122 + Mar 7 03:57:29.788: INFO: Deleting all statefulset in ns statefulset-9111 
+ Mar 7 03:57:29.791: INFO: Scaling statefulset ss to 0 + Mar 7 03:57:29.799: INFO: Waiting for statefulset status.replicas updated to 0 + Mar 7 03:57:29.801: INFO: Deleting statefulset ss + [AfterEach] [sig-apps] StatefulSet + test/e2e/framework/framework.go:187 + Mar 7 03:57:29.823: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + STEP: Destroying namespace "statefulset-9111" for this suite. 03/07/23 03:57:29.827 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-network] EndpointSliceMirroring + should mirror a custom Endpoints resource through create update and delete [Conformance] + test/e2e/network/endpointslicemirroring.go:53 +[BeforeEach] [sig-network] EndpointSliceMirroring + test/e2e/framework/framework.go:186 +STEP: Creating a kubernetes client 03/07/23 03:57:29.835 +Mar 7 03:57:29.835: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 +STEP: Building a namespace api object, basename endpointslicemirroring 03/07/23 03:57:29.837 +STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 03:57:29.85 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 03:57:29.853 +[BeforeEach] [sig-network] EndpointSliceMirroring + test/e2e/network/endpointslicemirroring.go:41 +[It] should mirror a custom Endpoints resource through create update and delete [Conformance] + test/e2e/network/endpointslicemirroring.go:53 +STEP: mirroring a new custom Endpoint 03/07/23 03:57:29.873 +Mar 7 03:57:29.880: INFO: Waiting for at least 1 EndpointSlice to exist, got 0 +STEP: mirroring an update to a custom Endpoint 03/07/23 03:57:31.886 +Mar 7 03:57:31.897: INFO: Expected EndpointSlice to have 10.2.3.4 as address, got 10.1.2.3 +STEP: mirroring deletion of a custom Endpoint 03/07/23 03:57:33.9 +Mar 7 03:57:33.940: INFO: Waiting for 0 EndpointSlices to exist, got 1 +[AfterEach] [sig-network] EndpointSliceMirroring + test/e2e/framework/framework.go:187 +Mar 7 03:57:35.943: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "endpointslicemirroring-4027" for this suite. 
03/07/23 03:57:35.946 +{"msg":"PASSED [sig-network] EndpointSliceMirroring should mirror a custom Endpoints resource through create update and delete [Conformance]","completed":337,"skipped":6162,"failed":0} +------------------------------ +• [SLOW TEST] [6.144 seconds] +[sig-network] EndpointSliceMirroring +test/e2e/network/common/framework.go:23 + should mirror a custom Endpoints resource through create update and delete [Conformance] + test/e2e/network/endpointslicemirroring.go:53 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-network] EndpointSliceMirroring + test/e2e/framework/framework.go:186 + STEP: Creating a kubernetes client 03/07/23 03:57:29.835 + Mar 7 03:57:29.835: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 + STEP: Building a namespace api object, basename endpointslicemirroring 03/07/23 03:57:29.837 + STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 03:57:29.85 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 03:57:29.853 + [BeforeEach] [sig-network] EndpointSliceMirroring + test/e2e/network/endpointslicemirroring.go:41 + [It] should mirror a custom Endpoints resource through create update and delete [Conformance] + test/e2e/network/endpointslicemirroring.go:53 + STEP: mirroring a new custom Endpoint 03/07/23 03:57:29.873 + Mar 7 03:57:29.880: INFO: Waiting for at least 1 EndpointSlice to exist, got 0 + STEP: mirroring an update to a custom Endpoint 03/07/23 03:57:31.886 + Mar 7 03:57:31.897: INFO: Expected EndpointSlice to have 10.2.3.4 as address, got 10.1.2.3 + STEP: mirroring deletion of a custom Endpoint 03/07/23 03:57:33.9 + Mar 7 03:57:33.940: INFO: Waiting for 0 EndpointSlices to exist, got 1 + [AfterEach] [sig-network] EndpointSliceMirroring + test/e2e/framework/framework.go:187 + Mar 7 03:57:35.943: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + STEP: Destroying namespace "endpointslicemirroring-4027" for this suite. 
03/07/23 03:57:35.946 + << End Captured GinkgoWriter Output +------------------------------ +SSSS +------------------------------ +[sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] + Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance] + test/e2e/apps/statefulset.go:695 +[BeforeEach] [sig-apps] StatefulSet + test/e2e/framework/framework.go:186 +STEP: Creating a kubernetes client 03/07/23 03:57:35.979 +Mar 7 03:57:35.979: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 +STEP: Building a namespace api object, basename statefulset 03/07/23 03:57:35.98 +STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 03:57:35.998 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 03:57:36.001 +[BeforeEach] [sig-apps] StatefulSet + test/e2e/apps/statefulset.go:96 +[BeforeEach] Basic StatefulSet functionality [StatefulSetBasic] + test/e2e/apps/statefulset.go:111 +STEP: Creating service test in namespace statefulset-4136 03/07/23 03:57:36.003 +[It] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance] + test/e2e/apps/statefulset.go:695 +STEP: Creating stateful set ss in namespace statefulset-4136 03/07/23 03:57:36.007 +STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-4136 03/07/23 03:57:36.013 +Mar 7 03:57:36.018: INFO: Found 0 stateful pods, waiting for 1 +Mar 7 03:57:46.021: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true +STEP: Confirming that stateful set scale up will not halt with unhealthy stateful pod 03/07/23 03:57:46.022 +Mar 7 03:57:46.024: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1093879902 --namespace=statefulset-4136 exec ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' +Mar 7 03:57:46.231: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n" +Mar 7 03:57:46.231: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" +Mar 7 03:57:46.231: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' + +Mar 7 03:57:46.233: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true +Mar 7 03:57:56.238: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false +Mar 7 03:57:56.238: INFO: Waiting for statefulset status.replicas updated to 0 +Mar 7 03:57:56.251: INFO: POD NODE PHASE GRACE CONDITIONS +Mar 7 03:57:56.251: INFO: ss-0 node-2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-03-07 03:57:36 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-03-07 03:57:46 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-03-07 03:57:46 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-03-07 03:57:36 +0000 UTC }] +Mar 7 03:57:56.251: INFO: +Mar 7 03:57:56.251: INFO: StatefulSet ss has not reached scale 3, at 1 +Mar 7 03:57:57.254: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.995823881s +Mar 7 03:57:58.259: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.991873655s +Mar 7 03:57:59.262: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.988258041s +Mar 7 03:58:00.265: INFO: Verifying statefulset ss doesn't scale past 3 for 
another 5.985087319s +Mar 7 03:58:01.270: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.980905411s +Mar 7 03:58:02.273: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.976984728s +Mar 7 03:58:03.277: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.973535138s +Mar 7 03:58:04.281: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.969257157s +Mar 7 03:58:05.287: INFO: Verifying statefulset ss doesn't scale past 3 for another 962.958197ms +STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-4136 03/07/23 03:58:06.288 +Mar 7 03:58:06.292: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1093879902 --namespace=statefulset-4136 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' +Mar 7 03:58:06.488: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\n" +Mar 7 03:58:06.488: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" +Mar 7 03:58:06.488: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' + +Mar 7 03:58:06.488: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1093879902 --namespace=statefulset-4136 exec ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' +Mar 7 03:58:06.672: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\n" +Mar 7 03:58:06.672: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" +Mar 7 03:58:06.672: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' + +Mar 7 03:58:06.672: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1093879902 --namespace=statefulset-4136 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' +Mar 7 03:58:06.843: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\n" +Mar 7 03:58:06.843: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" +Mar 7 03:58:06.843: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-2: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' + +Mar 7 03:58:06.846: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=false +Mar 7 03:58:16.850: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true +Mar 7 03:58:16.850: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true +Mar 7 03:58:16.850: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true +STEP: Scale down will not halt with unhealthy stateful pod 03/07/23 03:58:16.85 +Mar 7 03:58:16.853: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1093879902 --namespace=statefulset-4136 exec ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' +Mar 7 03:58:17.048: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n" +Mar 7 03:58:17.048: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" +Mar 7 03:58:17.048: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: 
'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' + +Mar 7 03:58:17.048: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1093879902 --namespace=statefulset-4136 exec ss-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' +Mar 7 03:58:17.234: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n" +Mar 7 03:58:17.234: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" +Mar 7 03:58:17.234: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' + +Mar 7 03:58:17.234: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1093879902 --namespace=statefulset-4136 exec ss-2 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' +Mar 7 03:58:17.436: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n" +Mar 7 03:58:17.436: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" +Mar 7 03:58:17.436: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-2: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' + +Mar 7 03:58:17.436: INFO: Waiting for statefulset status.replicas updated to 0 +Mar 7 03:58:17.439: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 2 +Mar 7 03:58:27.445: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false +Mar 7 03:58:27.445: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false +Mar 7 03:58:27.445: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false +Mar 7 03:58:27.454: INFO: POD NODE PHASE GRACE CONDITIONS +Mar 7 03:58:27.454: INFO: ss-0 node-2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-03-07 03:57:36 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-03-07 03:58:17 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-03-07 03:58:17 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-03-07 03:57:36 +0000 UTC }] +Mar 7 03:58:27.454: INFO: ss-1 node-1 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-03-07 03:57:56 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-03-07 03:58:17 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-03-07 03:58:17 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-03-07 03:57:56 +0000 UTC }] +Mar 7 03:58:27.454: INFO: ss-2 bootstrap Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-03-07 03:57:56 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-03-07 03:58:17 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-03-07 03:58:17 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-03-07 03:57:56 +0000 UTC }] +Mar 7 03:58:27.454: INFO: +Mar 7 03:58:27.454: INFO: StatefulSet ss has not reached scale 0, at 3 +Mar 7 03:58:28.459: INFO: POD NODE PHASE GRACE CONDITIONS +Mar 7 03:58:28.459: INFO: ss-0 node-2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-03-07 03:57:36 +0000 UTC } {Ready 
False 0001-01-01 00:00:00 +0000 UTC 2023-03-07 03:58:17 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-03-07 03:58:17 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-03-07 03:57:36 +0000 UTC }] +Mar 7 03:58:28.459: INFO: ss-2 bootstrap Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-03-07 03:57:56 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-03-07 03:58:17 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-03-07 03:58:17 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-03-07 03:57:56 +0000 UTC }] +Mar 7 03:58:28.459: INFO: +Mar 7 03:58:28.459: INFO: StatefulSet ss has not reached scale 0, at 2 +Mar 7 03:58:29.462: INFO: Verifying statefulset ss doesn't scale past 0 for another 7.992563656s +Mar 7 03:58:30.465: INFO: Verifying statefulset ss doesn't scale past 0 for another 6.989137481s +Mar 7 03:58:31.469: INFO: Verifying statefulset ss doesn't scale past 0 for another 5.98593782s +Mar 7 03:58:32.472: INFO: Verifying statefulset ss doesn't scale past 0 for another 4.982496527s +Mar 7 03:58:33.476: INFO: Verifying statefulset ss doesn't scale past 0 for another 3.978374937s +Mar 7 03:58:34.480: INFO: Verifying statefulset ss doesn't scale past 0 for another 2.974468545s +Mar 7 03:58:35.483: INFO: Verifying statefulset ss doesn't scale past 0 for another 1.97143975s +Mar 7 03:58:36.486: INFO: Verifying statefulset ss doesn't scale past 0 for another 968.21034ms +STEP: Scaling down stateful set ss to 0 replicas and waiting until none of pods will run in namespacestatefulset-4136 03/07/23 03:58:37.486 +Mar 7 03:58:37.489: INFO: Scaling statefulset ss to 0 +Mar 7 03:58:37.497: INFO: Waiting for statefulset status.replicas updated to 0 +[AfterEach] Basic StatefulSet functionality [StatefulSetBasic] + test/e2e/apps/statefulset.go:122 +Mar 7 03:58:37.499: INFO: Deleting all statefulset in ns statefulset-4136 +Mar 7 03:58:37.501: INFO: Scaling statefulset ss to 0 +Mar 7 03:58:37.508: INFO: Waiting for statefulset status.replicas updated to 0 +Mar 7 03:58:37.510: INFO: Deleting statefulset ss +[AfterEach] [sig-apps] StatefulSet + test/e2e/framework/framework.go:187 +Mar 7 03:58:37.522: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "statefulset-4136" for this suite. 
03/07/23 03:58:37.529 +{"msg":"PASSED [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]","completed":338,"skipped":6166,"failed":0} +------------------------------ +• [SLOW TEST] [61.556 seconds] +[sig-apps] StatefulSet +test/e2e/apps/framework.go:23 + Basic StatefulSet functionality [StatefulSetBasic] + test/e2e/apps/statefulset.go:101 + Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance] + test/e2e/apps/statefulset.go:695 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-apps] StatefulSet + test/e2e/framework/framework.go:186 + STEP: Creating a kubernetes client 03/07/23 03:57:35.979 + Mar 7 03:57:35.979: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 + STEP: Building a namespace api object, basename statefulset 03/07/23 03:57:35.98 + STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 03:57:35.998 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 03:57:36.001 + [BeforeEach] [sig-apps] StatefulSet + test/e2e/apps/statefulset.go:96 + [BeforeEach] Basic StatefulSet functionality [StatefulSetBasic] + test/e2e/apps/statefulset.go:111 + STEP: Creating service test in namespace statefulset-4136 03/07/23 03:57:36.003 + [It] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance] + test/e2e/apps/statefulset.go:695 + STEP: Creating stateful set ss in namespace statefulset-4136 03/07/23 03:57:36.007 + STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-4136 03/07/23 03:57:36.013 + Mar 7 03:57:36.018: INFO: Found 0 stateful pods, waiting for 1 + Mar 7 03:57:46.021: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true + STEP: Confirming that stateful set scale up will not halt with unhealthy stateful pod 03/07/23 03:57:46.022 + Mar 7 03:57:46.024: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1093879902 --namespace=statefulset-4136 exec ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' + Mar 7 03:57:46.231: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n" + Mar 7 03:57:46.231: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" + Mar 7 03:57:46.231: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' + + Mar 7 03:57:46.233: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true + Mar 7 03:57:56.238: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false + Mar 7 03:57:56.238: INFO: Waiting for statefulset status.replicas updated to 0 + Mar 7 03:57:56.251: INFO: POD NODE PHASE GRACE CONDITIONS + Mar 7 03:57:56.251: INFO: ss-0 node-2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-03-07 03:57:36 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-03-07 03:57:46 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-03-07 03:57:46 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-03-07 03:57:36 +0000 UTC }] + Mar 7 03:57:56.251: INFO: + Mar 7 03:57:56.251: INFO: StatefulSet ss has not reached scale 3, at 1 + Mar 7 03:57:57.254: INFO: Verifying statefulset ss 
doesn't scale past 3 for another 8.995823881s + Mar 7 03:57:58.259: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.991873655s + Mar 7 03:57:59.262: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.988258041s + Mar 7 03:58:00.265: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.985087319s + Mar 7 03:58:01.270: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.980905411s + Mar 7 03:58:02.273: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.976984728s + Mar 7 03:58:03.277: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.973535138s + Mar 7 03:58:04.281: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.969257157s + Mar 7 03:58:05.287: INFO: Verifying statefulset ss doesn't scale past 3 for another 962.958197ms + STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-4136 03/07/23 03:58:06.288 + Mar 7 03:58:06.292: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1093879902 --namespace=statefulset-4136 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' + Mar 7 03:58:06.488: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\n" + Mar 7 03:58:06.488: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" + Mar 7 03:58:06.488: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' + + Mar 7 03:58:06.488: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1093879902 --namespace=statefulset-4136 exec ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' + Mar 7 03:58:06.672: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\n" + Mar 7 03:58:06.672: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" + Mar 7 03:58:06.672: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' + + Mar 7 03:58:06.672: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1093879902 --namespace=statefulset-4136 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' + Mar 7 03:58:06.843: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\n" + Mar 7 03:58:06.843: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" + Mar 7 03:58:06.843: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-2: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' + + Mar 7 03:58:06.846: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=false + Mar 7 03:58:16.850: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true + Mar 7 03:58:16.850: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true + Mar 7 03:58:16.850: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true + STEP: Scale down will not halt with unhealthy stateful pod 03/07/23 03:58:16.85 + Mar 7 03:58:16.853: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1093879902 --namespace=statefulset-4136 exec ss-0 -- /bin/sh -x -c mv -v 
/usr/local/apache2/htdocs/index.html /tmp/ || true' + Mar 7 03:58:17.048: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n" + Mar 7 03:58:17.048: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" + Mar 7 03:58:17.048: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' + + Mar 7 03:58:17.048: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1093879902 --namespace=statefulset-4136 exec ss-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' + Mar 7 03:58:17.234: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n" + Mar 7 03:58:17.234: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" + Mar 7 03:58:17.234: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' + + Mar 7 03:58:17.234: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1093879902 --namespace=statefulset-4136 exec ss-2 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' + Mar 7 03:58:17.436: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n" + Mar 7 03:58:17.436: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" + Mar 7 03:58:17.436: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-2: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' + + Mar 7 03:58:17.436: INFO: Waiting for statefulset status.replicas updated to 0 + Mar 7 03:58:17.439: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 2 + Mar 7 03:58:27.445: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false + Mar 7 03:58:27.445: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false + Mar 7 03:58:27.445: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false + Mar 7 03:58:27.454: INFO: POD NODE PHASE GRACE CONDITIONS + Mar 7 03:58:27.454: INFO: ss-0 node-2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-03-07 03:57:36 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-03-07 03:58:17 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-03-07 03:58:17 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-03-07 03:57:36 +0000 UTC }] + Mar 7 03:58:27.454: INFO: ss-1 node-1 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-03-07 03:57:56 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-03-07 03:58:17 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-03-07 03:58:17 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-03-07 03:57:56 +0000 UTC }] + Mar 7 03:58:27.454: INFO: ss-2 bootstrap Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-03-07 03:57:56 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-03-07 03:58:17 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-03-07 03:58:17 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 
0001-01-01 00:00:00 +0000 UTC 2023-03-07 03:57:56 +0000 UTC }] + Mar 7 03:58:27.454: INFO: + Mar 7 03:58:27.454: INFO: StatefulSet ss has not reached scale 0, at 3 + Mar 7 03:58:28.459: INFO: POD NODE PHASE GRACE CONDITIONS + Mar 7 03:58:28.459: INFO: ss-0 node-2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-03-07 03:57:36 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-03-07 03:58:17 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-03-07 03:58:17 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-03-07 03:57:36 +0000 UTC }] + Mar 7 03:58:28.459: INFO: ss-2 bootstrap Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-03-07 03:57:56 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-03-07 03:58:17 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-03-07 03:58:17 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-03-07 03:57:56 +0000 UTC }] + Mar 7 03:58:28.459: INFO: + Mar 7 03:58:28.459: INFO: StatefulSet ss has not reached scale 0, at 2 + Mar 7 03:58:29.462: INFO: Verifying statefulset ss doesn't scale past 0 for another 7.992563656s + Mar 7 03:58:30.465: INFO: Verifying statefulset ss doesn't scale past 0 for another 6.989137481s + Mar 7 03:58:31.469: INFO: Verifying statefulset ss doesn't scale past 0 for another 5.98593782s + Mar 7 03:58:32.472: INFO: Verifying statefulset ss doesn't scale past 0 for another 4.982496527s + Mar 7 03:58:33.476: INFO: Verifying statefulset ss doesn't scale past 0 for another 3.978374937s + Mar 7 03:58:34.480: INFO: Verifying statefulset ss doesn't scale past 0 for another 2.974468545s + Mar 7 03:58:35.483: INFO: Verifying statefulset ss doesn't scale past 0 for another 1.97143975s + Mar 7 03:58:36.486: INFO: Verifying statefulset ss doesn't scale past 0 for another 968.21034ms + STEP: Scaling down stateful set ss to 0 replicas and waiting until none of pods will run in namespacestatefulset-4136 03/07/23 03:58:37.486 + Mar 7 03:58:37.489: INFO: Scaling statefulset ss to 0 + Mar 7 03:58:37.497: INFO: Waiting for statefulset status.replicas updated to 0 + [AfterEach] Basic StatefulSet functionality [StatefulSetBasic] + test/e2e/apps/statefulset.go:122 + Mar 7 03:58:37.499: INFO: Deleting all statefulset in ns statefulset-4136 + Mar 7 03:58:37.501: INFO: Scaling statefulset ss to 0 + Mar 7 03:58:37.508: INFO: Waiting for statefulset status.replicas updated to 0 + Mar 7 03:58:37.510: INFO: Deleting statefulset ss + [AfterEach] [sig-apps] StatefulSet + test/e2e/framework/framework.go:187 + Mar 7 03:58:37.522: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + STEP: Destroying namespace "statefulset-4136" for this suite. 
03/07/23 03:58:37.529 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition + creating/deleting custom resource definition objects works [Conformance] + test/e2e/apimachinery/custom_resource_definition.go:58 +[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] + test/e2e/framework/framework.go:186 +STEP: Creating a kubernetes client 03/07/23 03:58:37.536 +Mar 7 03:58:37.536: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 +STEP: Building a namespace api object, basename custom-resource-definition 03/07/23 03:58:37.537 +STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 03:58:37.564 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 03:58:37.566 +[It] creating/deleting custom resource definition objects works [Conformance] + test/e2e/apimachinery/custom_resource_definition.go:58 +Mar 7 03:58:37.567: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 +[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] + test/e2e/framework/framework.go:187 +Mar 7 03:58:38.595: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "custom-resource-definition-929" for this suite. 03/07/23 03:58:38.599 +{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition creating/deleting custom resource definition objects works [Conformance]","completed":339,"skipped":6206,"failed":0} +------------------------------ +• [1.068 seconds] +[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] +test/e2e/apimachinery/framework.go:23 + Simple CustomResourceDefinition + test/e2e/apimachinery/custom_resource_definition.go:50 + creating/deleting custom resource definition objects works [Conformance] + test/e2e/apimachinery/custom_resource_definition.go:58 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] + test/e2e/framework/framework.go:186 + STEP: Creating a kubernetes client 03/07/23 03:58:37.536 + Mar 7 03:58:37.536: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 + STEP: Building a namespace api object, basename custom-resource-definition 03/07/23 03:58:37.537 + STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 03:58:37.564 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 03:58:37.566 + [It] creating/deleting custom resource definition objects works [Conformance] + test/e2e/apimachinery/custom_resource_definition.go:58 + Mar 7 03:58:37.567: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 + [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] + test/e2e/framework/framework.go:187 + Mar 7 03:58:38.595: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + STEP: Destroying namespace "custom-resource-definition-929" for this suite. 
03/07/23 03:58:38.599 + << End Captured GinkgoWriter Output +------------------------------ +[sig-network] Service endpoints latency + should not be very high [Conformance] + test/e2e/network/service_latency.go:59 +[BeforeEach] [sig-network] Service endpoints latency + test/e2e/framework/framework.go:186 +STEP: Creating a kubernetes client 03/07/23 03:58:38.604 +Mar 7 03:58:38.605: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 +STEP: Building a namespace api object, basename svc-latency 03/07/23 03:58:38.606 +STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 03:58:38.622 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 03:58:38.624 +[It] should not be very high [Conformance] + test/e2e/network/service_latency.go:59 +Mar 7 03:58:38.625: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 +STEP: creating replication controller svc-latency-rc in namespace svc-latency-9373 03/07/23 03:58:38.626 +I0307 03:58:38.630638 22 runners.go:193] Created replication controller with name: svc-latency-rc, namespace: svc-latency-9373, replica count: 1 +I0307 03:58:39.681532 22 runners.go:193] svc-latency-rc Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady +Mar 7 03:58:39.793: INFO: Created: latency-svc-mb2wz +Mar 7 03:58:39.802: INFO: Got endpoints: latency-svc-mb2wz [20.099598ms] +Mar 7 03:58:39.820: INFO: Created: latency-svc-czs7b +Mar 7 03:58:39.834: INFO: Got endpoints: latency-svc-czs7b [31.158168ms] +Mar 7 03:58:39.837: INFO: Created: latency-svc-jtkqq +Mar 7 03:58:39.842: INFO: Got endpoints: latency-svc-jtkqq [39.011589ms] +Mar 7 03:58:39.852: INFO: Created: latency-svc-n2jlr +Mar 7 03:58:39.861: INFO: Got endpoints: latency-svc-n2jlr [58.413669ms] +Mar 7 03:58:39.878: INFO: Created: latency-svc-cmmdr +Mar 7 03:58:39.886: INFO: Got endpoints: latency-svc-cmmdr [83.075363ms] +Mar 7 03:58:39.941: INFO: Created: latency-svc-kr7bz +Mar 7 03:58:39.978: INFO: Created: latency-svc-m2flz +Mar 7 03:58:39.978: INFO: Got endpoints: latency-svc-m2flz [176.218257ms] +Mar 7 03:58:39.996: INFO: Got endpoints: latency-svc-kr7bz [192.636759ms] +Mar 7 03:58:40.001: INFO: Created: latency-svc-dpsrx +Mar 7 03:58:40.012: INFO: Got endpoints: latency-svc-dpsrx [208.829358ms] +Mar 7 03:58:40.017: INFO: Created: latency-svc-llmdf +Mar 7 03:58:40.024: INFO: Got endpoints: latency-svc-llmdf [221.316519ms] +Mar 7 03:58:40.027: INFO: Created: latency-svc-rflhg +Mar 7 03:58:40.035: INFO: Got endpoints: latency-svc-rflhg [231.453921ms] +Mar 7 03:58:40.039: INFO: Created: latency-svc-876sx +Mar 7 03:58:40.049: INFO: Got endpoints: latency-svc-876sx [245.178097ms] +Mar 7 03:58:40.053: INFO: Created: latency-svc-dmggm +Mar 7 03:58:40.059: INFO: Got endpoints: latency-svc-dmggm [255.250404ms] +Mar 7 03:58:40.064: INFO: Created: latency-svc-zmlff +Mar 7 03:58:40.070: INFO: Got endpoints: latency-svc-zmlff [266.269168ms] +Mar 7 03:58:40.076: INFO: Created: latency-svc-5pfss +Mar 7 03:58:40.086: INFO: Got endpoints: latency-svc-5pfss [281.414232ms] +Mar 7 03:58:40.088: INFO: Created: latency-svc-4drx8 +Mar 7 03:58:40.093: INFO: Got endpoints: latency-svc-4drx8 [288.931582ms] +Mar 7 03:58:40.100: INFO: Created: latency-svc-7nxlv +Mar 7 03:58:40.107: INFO: Got endpoints: latency-svc-7nxlv [302.848391ms] +Mar 7 03:58:40.112: INFO: Created: latency-svc-4qb2r +Mar 7 03:58:40.121: INFO: Created: latency-svc-x8tr8 +Mar 7 03:58:40.122: INFO: Got endpoints: latency-svc-4qb2r [288.34143ms] +Mar 7 03:58:40.128: INFO: 
Got endpoints: latency-svc-x8tr8 [286.129838ms] +Mar 7 03:58:40.135: INFO: Created: latency-svc-q58r2 +Mar 7 03:58:40.140: INFO: Got endpoints: latency-svc-q58r2 [278.557971ms] +Mar 7 03:58:40.143: INFO: Created: latency-svc-8ftc2 +Mar 7 03:58:40.152: INFO: Got endpoints: latency-svc-8ftc2 [265.987552ms] +Mar 7 03:58:40.157: INFO: Created: latency-svc-t8zp8 +Mar 7 03:58:40.161: INFO: Got endpoints: latency-svc-t8zp8 [182.611034ms] +Mar 7 03:58:40.165: INFO: Created: latency-svc-vkt5s +Mar 7 03:58:40.171: INFO: Got endpoints: latency-svc-vkt5s [175.687866ms] +Mar 7 03:58:40.178: INFO: Created: latency-svc-bctdl +Mar 7 03:58:40.186: INFO: Got endpoints: latency-svc-bctdl [174.424909ms] +Mar 7 03:58:40.191: INFO: Created: latency-svc-pd6mp +Mar 7 03:58:40.195: INFO: Got endpoints: latency-svc-pd6mp [171.094617ms] +Mar 7 03:58:40.203: INFO: Created: latency-svc-7qc8x +Mar 7 03:58:40.212: INFO: Got endpoints: latency-svc-7qc8x [176.759899ms] +Mar 7 03:58:40.216: INFO: Created: latency-svc-4dx7w +Mar 7 03:58:40.220: INFO: Got endpoints: latency-svc-4dx7w [170.931565ms] +Mar 7 03:58:40.227: INFO: Created: latency-svc-6d6h6 +Mar 7 03:58:40.231: INFO: Got endpoints: latency-svc-6d6h6 [172.491067ms] +Mar 7 03:58:40.242: INFO: Created: latency-svc-mvhwk +Mar 7 03:58:40.249: INFO: Got endpoints: latency-svc-mvhwk [179.021303ms] +Mar 7 03:58:40.256: INFO: Created: latency-svc-gc879 +Mar 7 03:58:40.263: INFO: Got endpoints: latency-svc-gc879 [176.973339ms] +Mar 7 03:58:40.265: INFO: Created: latency-svc-m5zvh +Mar 7 03:58:40.272: INFO: Got endpoints: latency-svc-m5zvh [179.619966ms] +Mar 7 03:58:40.276: INFO: Created: latency-svc-qd7w2 +Mar 7 03:58:40.282: INFO: Got endpoints: latency-svc-qd7w2 [174.294979ms] +Mar 7 03:58:40.290: INFO: Created: latency-svc-jg5tf +Mar 7 03:58:40.299: INFO: Got endpoints: latency-svc-jg5tf [177.061987ms] +Mar 7 03:58:40.299: INFO: Created: latency-svc-25x9x +Mar 7 03:58:40.303: INFO: Got endpoints: latency-svc-25x9x [175.026081ms] +Mar 7 03:58:40.315: INFO: Created: latency-svc-6gcsl +Mar 7 03:58:40.319: INFO: Got endpoints: latency-svc-6gcsl [179.210732ms] +Mar 7 03:58:40.321: INFO: Created: latency-svc-nwstb +Mar 7 03:58:40.333: INFO: Got endpoints: latency-svc-nwstb [180.840444ms] +Mar 7 03:58:40.333: INFO: Created: latency-svc-v559c +Mar 7 03:58:40.334: INFO: Got endpoints: latency-svc-v559c [173.12267ms] +Mar 7 03:58:40.341: INFO: Created: latency-svc-qz5zd +Mar 7 03:58:40.349: INFO: Got endpoints: latency-svc-qz5zd [177.66831ms] +Mar 7 03:58:40.353: INFO: Created: latency-svc-p5hlh +Mar 7 03:58:40.359: INFO: Got endpoints: latency-svc-p5hlh [172.015773ms] +Mar 7 03:58:40.360: INFO: Created: latency-svc-tl8zd +Mar 7 03:58:40.366: INFO: Got endpoints: latency-svc-tl8zd [170.759481ms] +Mar 7 03:58:40.371: INFO: Created: latency-svc-jb8hs +Mar 7 03:58:40.379: INFO: Got endpoints: latency-svc-jb8hs [167.184941ms] +Mar 7 03:58:40.384: INFO: Created: latency-svc-68rh5 +Mar 7 03:58:40.389: INFO: Got endpoints: latency-svc-68rh5 [169.438992ms] +Mar 7 03:58:40.392: INFO: Created: latency-svc-ltx56 +Mar 7 03:58:40.402: INFO: Got endpoints: latency-svc-ltx56 [170.514993ms] +Mar 7 03:58:40.404: INFO: Created: latency-svc-wf4nw +Mar 7 03:58:40.416: INFO: Created: latency-svc-2s59g +Mar 7 03:58:40.426: INFO: Created: latency-svc-x587s +Mar 7 03:58:40.434: INFO: Created: latency-svc-5snc6 +Mar 7 03:58:40.445: INFO: Created: latency-svc-qmgbc +Mar 7 03:58:40.452: INFO: Got endpoints: latency-svc-wf4nw [202.635564ms] +Mar 7 03:58:40.461: INFO: Created: latency-svc-8q8vw +Mar 7 
03:58:40.472: INFO: Created: latency-svc-5gd4w +Mar 7 03:58:40.481: INFO: Created: latency-svc-7z5nc +Mar 7 03:58:40.490: INFO: Created: latency-svc-8rx5l +Mar 7 03:58:40.500: INFO: Got endpoints: latency-svc-2s59g [237.089393ms] +Mar 7 03:58:40.503: INFO: Created: latency-svc-dnz7s +Mar 7 03:58:40.513: INFO: Created: latency-svc-7sncr +Mar 7 03:58:40.525: INFO: Created: latency-svc-qvzfk +Mar 7 03:58:40.534: INFO: Created: latency-svc-cgjxn +Mar 7 03:58:40.542: INFO: Created: latency-svc-22fjf +Mar 7 03:58:40.549: INFO: Got endpoints: latency-svc-x587s [276.62936ms] +Mar 7 03:58:40.553: INFO: Created: latency-svc-xk24r +Mar 7 03:58:40.561: INFO: Created: latency-svc-qp2kg +Mar 7 03:58:40.571: INFO: Created: latency-svc-45782 +Mar 7 03:58:40.583: INFO: Created: latency-svc-pb6zt +Mar 7 03:58:40.602: INFO: Got endpoints: latency-svc-5snc6 [320.496661ms] +Mar 7 03:58:40.614: INFO: Created: latency-svc-xm5v8 +Mar 7 03:58:40.648: INFO: Got endpoints: latency-svc-qmgbc [349.229337ms] +Mar 7 03:58:40.662: INFO: Created: latency-svc-txrhx +Mar 7 03:58:40.700: INFO: Got endpoints: latency-svc-8q8vw [396.87649ms] +Mar 7 03:58:40.714: INFO: Created: latency-svc-npzk4 +Mar 7 03:58:40.749: INFO: Got endpoints: latency-svc-5gd4w [430.170879ms] +Mar 7 03:58:40.760: INFO: Created: latency-svc-pn8dw +Mar 7 03:58:40.798: INFO: Got endpoints: latency-svc-7z5nc [465.549472ms] +Mar 7 03:58:40.814: INFO: Created: latency-svc-nv4qb +Mar 7 03:58:40.858: INFO: Got endpoints: latency-svc-8rx5l [524.081541ms] +Mar 7 03:58:40.879: INFO: Created: latency-svc-dk5sd +Mar 7 03:58:40.899: INFO: Got endpoints: latency-svc-dnz7s [549.783175ms] +Mar 7 03:58:40.912: INFO: Created: latency-svc-n7csz +Mar 7 03:58:40.950: INFO: Got endpoints: latency-svc-7sncr [591.071492ms] +Mar 7 03:58:40.962: INFO: Created: latency-svc-mffjp +Mar 7 03:58:40.999: INFO: Got endpoints: latency-svc-qvzfk [632.473439ms] +Mar 7 03:58:41.012: INFO: Created: latency-svc-9chxd +Mar 7 03:58:41.050: INFO: Got endpoints: latency-svc-cgjxn [670.724159ms] +Mar 7 03:58:41.061: INFO: Created: latency-svc-2vl4w +Mar 7 03:58:41.101: INFO: Got endpoints: latency-svc-22fjf [711.530654ms] +Mar 7 03:58:41.121: INFO: Created: latency-svc-qd9vs +Mar 7 03:58:41.149: INFO: Got endpoints: latency-svc-xk24r [747.123108ms] +Mar 7 03:58:41.161: INFO: Created: latency-svc-27dn8 +Mar 7 03:58:41.199: INFO: Got endpoints: latency-svc-qp2kg [747.266547ms] +Mar 7 03:58:41.210: INFO: Created: latency-svc-24bkf +Mar 7 03:58:41.250: INFO: Got endpoints: latency-svc-45782 [749.853201ms] +Mar 7 03:58:41.267: INFO: Created: latency-svc-9hddd +Mar 7 03:58:41.300: INFO: Got endpoints: latency-svc-pb6zt [750.437102ms] +Mar 7 03:58:41.313: INFO: Created: latency-svc-qtnmx +Mar 7 03:58:41.348: INFO: Got endpoints: latency-svc-xm5v8 [746.333806ms] +Mar 7 03:58:41.361: INFO: Created: latency-svc-xvzpk +Mar 7 03:58:41.398: INFO: Got endpoints: latency-svc-txrhx [749.197646ms] +Mar 7 03:58:41.410: INFO: Created: latency-svc-9kjwv +Mar 7 03:58:41.448: INFO: Got endpoints: latency-svc-npzk4 [748.01517ms] +Mar 7 03:58:41.468: INFO: Created: latency-svc-8h465 +Mar 7 03:58:41.498: INFO: Got endpoints: latency-svc-pn8dw [749.249433ms] +Mar 7 03:58:41.512: INFO: Created: latency-svc-x8p6z +Mar 7 03:58:41.549: INFO: Got endpoints: latency-svc-nv4qb [751.012673ms] +Mar 7 03:58:41.563: INFO: Created: latency-svc-2tphj +Mar 7 03:58:41.599: INFO: Got endpoints: latency-svc-dk5sd [740.845916ms] +Mar 7 03:58:41.613: INFO: Created: latency-svc-fwmmz +Mar 7 03:58:41.650: INFO: Got endpoints: 
latency-svc-n7csz [751.183664ms] +Mar 7 03:58:41.662: INFO: Created: latency-svc-h2fsc +Mar 7 03:58:41.701: INFO: Got endpoints: latency-svc-mffjp [751.443507ms] +Mar 7 03:58:41.714: INFO: Created: latency-svc-88rsr +Mar 7 03:58:41.748: INFO: Got endpoints: latency-svc-9chxd [749.591719ms] +Mar 7 03:58:41.761: INFO: Created: latency-svc-smf8t +Mar 7 03:58:41.799: INFO: Got endpoints: latency-svc-2vl4w [749.58895ms] +Mar 7 03:58:41.813: INFO: Created: latency-svc-zmr7q +Mar 7 03:58:41.849: INFO: Got endpoints: latency-svc-qd9vs [748.584214ms] +Mar 7 03:58:41.867: INFO: Created: latency-svc-bx2j4 +Mar 7 03:58:41.900: INFO: Got endpoints: latency-svc-27dn8 [750.559324ms] +Mar 7 03:58:41.913: INFO: Created: latency-svc-vs9rl +Mar 7 03:58:41.949: INFO: Got endpoints: latency-svc-24bkf [750.136532ms] +Mar 7 03:58:41.961: INFO: Created: latency-svc-4g776 +Mar 7 03:58:41.999: INFO: Got endpoints: latency-svc-9hddd [749.18525ms] +Mar 7 03:58:42.013: INFO: Created: latency-svc-4tgdh +Mar 7 03:58:42.050: INFO: Got endpoints: latency-svc-qtnmx [750.128858ms] +Mar 7 03:58:42.062: INFO: Created: latency-svc-kp7g8 +Mar 7 03:58:42.098: INFO: Got endpoints: latency-svc-xvzpk [749.856735ms] +Mar 7 03:58:42.111: INFO: Created: latency-svc-q2f9n +Mar 7 03:58:42.161: INFO: Got endpoints: latency-svc-9kjwv [763.067313ms] +Mar 7 03:58:42.173: INFO: Created: latency-svc-99q56 +Mar 7 03:58:42.199: INFO: Got endpoints: latency-svc-8h465 [750.857541ms] +Mar 7 03:58:42.211: INFO: Created: latency-svc-7cd6k +Mar 7 03:58:42.251: INFO: Got endpoints: latency-svc-x8p6z [752.787601ms] +Mar 7 03:58:42.263: INFO: Created: latency-svc-chbtr +Mar 7 03:58:42.300: INFO: Got endpoints: latency-svc-2tphj [750.292926ms] +Mar 7 03:58:42.313: INFO: Created: latency-svc-b7845 +Mar 7 03:58:42.348: INFO: Got endpoints: latency-svc-fwmmz [748.444824ms] +Mar 7 03:58:42.361: INFO: Created: latency-svc-dkxvh +Mar 7 03:58:42.399: INFO: Got endpoints: latency-svc-h2fsc [748.642331ms] +Mar 7 03:58:42.413: INFO: Created: latency-svc-xjkcc +Mar 7 03:58:42.450: INFO: Got endpoints: latency-svc-88rsr [749.327222ms] +Mar 7 03:58:42.463: INFO: Created: latency-svc-4j9gt +Mar 7 03:58:42.498: INFO: Got endpoints: latency-svc-smf8t [750.077683ms] +Mar 7 03:58:42.511: INFO: Created: latency-svc-mbflk +Mar 7 03:58:42.554: INFO: Got endpoints: latency-svc-zmr7q [754.703853ms] +Mar 7 03:58:42.569: INFO: Created: latency-svc-mz66v +Mar 7 03:58:42.603: INFO: Got endpoints: latency-svc-bx2j4 [753.302944ms] +Mar 7 03:58:42.620: INFO: Created: latency-svc-wzq4l +Mar 7 03:58:42.648: INFO: Got endpoints: latency-svc-vs9rl [748.597223ms] +Mar 7 03:58:42.662: INFO: Created: latency-svc-wjfbj +Mar 7 03:58:42.699: INFO: Got endpoints: latency-svc-4g776 [750.017517ms] +Mar 7 03:58:42.710: INFO: Created: latency-svc-6rqjx +Mar 7 03:58:42.750: INFO: Got endpoints: latency-svc-4tgdh [750.655596ms] +Mar 7 03:58:42.763: INFO: Created: latency-svc-dcmdh +Mar 7 03:58:42.800: INFO: Got endpoints: latency-svc-kp7g8 [750.052264ms] +Mar 7 03:58:42.816: INFO: Created: latency-svc-rxf5g +Mar 7 03:58:42.849: INFO: Got endpoints: latency-svc-q2f9n [750.986577ms] +Mar 7 03:58:42.863: INFO: Created: latency-svc-vdjzf +Mar 7 03:58:42.899: INFO: Got endpoints: latency-svc-99q56 [738.621167ms] +Mar 7 03:58:42.956: INFO: Got endpoints: latency-svc-7cd6k [757.276755ms] +Mar 7 03:58:42.959: INFO: Created: latency-svc-rrgmq +Mar 7 03:58:42.977: INFO: Created: latency-svc-zgpbl +Mar 7 03:58:43.025: INFO: Got endpoints: latency-svc-chbtr [773.358114ms] +Mar 7 03:58:43.043: INFO: Created: 
latency-svc-sq669 +Mar 7 03:58:43.056: INFO: Got endpoints: latency-svc-b7845 [756.511533ms] +Mar 7 03:58:43.071: INFO: Created: latency-svc-spbjw +Mar 7 03:58:43.100: INFO: Got endpoints: latency-svc-dkxvh [752.456389ms] +Mar 7 03:58:43.118: INFO: Created: latency-svc-bpcm4 +Mar 7 03:58:43.150: INFO: Got endpoints: latency-svc-xjkcc [750.907232ms] +Mar 7 03:58:43.164: INFO: Created: latency-svc-6xg8m +Mar 7 03:58:43.201: INFO: Got endpoints: latency-svc-4j9gt [750.836978ms] +Mar 7 03:58:43.214: INFO: Created: latency-svc-j5h7j +Mar 7 03:58:43.250: INFO: Got endpoints: latency-svc-mbflk [750.994974ms] +Mar 7 03:58:43.263: INFO: Created: latency-svc-pbsdq +Mar 7 03:58:43.300: INFO: Got endpoints: latency-svc-mz66v [746.095325ms] +Mar 7 03:58:43.311: INFO: Created: latency-svc-4sdd9 +Mar 7 03:58:43.350: INFO: Got endpoints: latency-svc-wzq4l [747.729685ms] +Mar 7 03:58:43.364: INFO: Created: latency-svc-wc7ss +Mar 7 03:58:43.399: INFO: Got endpoints: latency-svc-wjfbj [750.91249ms] +Mar 7 03:58:43.411: INFO: Created: latency-svc-p4jfb +Mar 7 03:58:43.448: INFO: Got endpoints: latency-svc-6rqjx [748.821685ms] +Mar 7 03:58:43.460: INFO: Created: latency-svc-ff4jl +Mar 7 03:58:43.549: INFO: Got endpoints: latency-svc-dcmdh [799.418017ms] +Mar 7 03:58:43.562: INFO: Created: latency-svc-dqg5c +Mar 7 03:58:43.600: INFO: Got endpoints: latency-svc-rxf5g [800.395888ms] +Mar 7 03:58:43.616: INFO: Created: latency-svc-wnt7h +Mar 7 03:58:43.656: INFO: Got endpoints: latency-svc-vdjzf [806.197117ms] +Mar 7 03:58:43.689: INFO: Created: latency-svc-mkqhh +Mar 7 03:58:43.700: INFO: Got endpoints: latency-svc-rrgmq [801.01353ms] +Mar 7 03:58:43.714: INFO: Created: latency-svc-7qtwv +Mar 7 03:58:43.751: INFO: Got endpoints: latency-svc-zgpbl [795.225178ms] +Mar 7 03:58:43.765: INFO: Created: latency-svc-6kbg7 +Mar 7 03:58:43.803: INFO: Got endpoints: latency-svc-sq669 [778.755106ms] +Mar 7 03:58:43.818: INFO: Created: latency-svc-gsjqp +Mar 7 03:58:43.855: INFO: Got endpoints: latency-svc-spbjw [799.126442ms] +Mar 7 03:58:43.871: INFO: Created: latency-svc-cxzlp +Mar 7 03:58:43.904: INFO: Got endpoints: latency-svc-bpcm4 [804.101809ms] +Mar 7 03:58:43.918: INFO: Created: latency-svc-d86r8 +Mar 7 03:58:43.952: INFO: Got endpoints: latency-svc-6xg8m [802.592608ms] +Mar 7 03:58:43.965: INFO: Created: latency-svc-dfjjt +Mar 7 03:58:43.999: INFO: Got endpoints: latency-svc-j5h7j [797.856054ms] +Mar 7 03:58:44.013: INFO: Created: latency-svc-4k65j +Mar 7 03:58:44.049: INFO: Got endpoints: latency-svc-pbsdq [799.264863ms] +Mar 7 03:58:44.062: INFO: Created: latency-svc-7vs4d +Mar 7 03:58:44.100: INFO: Got endpoints: latency-svc-4sdd9 [800.094388ms] +Mar 7 03:58:44.112: INFO: Created: latency-svc-pqfr7 +Mar 7 03:58:44.149: INFO: Got endpoints: latency-svc-wc7ss [798.941622ms] +Mar 7 03:58:44.164: INFO: Created: latency-svc-rhzp5 +Mar 7 03:58:44.200: INFO: Got endpoints: latency-svc-p4jfb [800.468896ms] +Mar 7 03:58:44.213: INFO: Created: latency-svc-jvm7r +Mar 7 03:58:44.251: INFO: Got endpoints: latency-svc-ff4jl [802.947142ms] +Mar 7 03:58:44.262: INFO: Created: latency-svc-7mhdd +Mar 7 03:58:44.300: INFO: Got endpoints: latency-svc-dqg5c [751.2592ms] +Mar 7 03:58:44.322: INFO: Created: latency-svc-qt4l9 +Mar 7 03:58:44.351: INFO: Got endpoints: latency-svc-wnt7h [750.288801ms] +Mar 7 03:58:44.363: INFO: Created: latency-svc-s959j +Mar 7 03:58:44.401: INFO: Got endpoints: latency-svc-mkqhh [745.029812ms] +Mar 7 03:58:44.414: INFO: Created: latency-svc-nnqr2 +Mar 7 03:58:44.457: INFO: Got endpoints: 
latency-svc-7qtwv [756.819851ms] +Mar 7 03:58:44.470: INFO: Created: latency-svc-q2rfq +Mar 7 03:58:44.499: INFO: Got endpoints: latency-svc-6kbg7 [747.085196ms] +Mar 7 03:58:44.511: INFO: Created: latency-svc-snwlf +Mar 7 03:58:44.550: INFO: Got endpoints: latency-svc-gsjqp [746.805264ms] +Mar 7 03:58:44.562: INFO: Created: latency-svc-7vvg8 +Mar 7 03:58:44.599: INFO: Got endpoints: latency-svc-cxzlp [744.053362ms] +Mar 7 03:58:44.611: INFO: Created: latency-svc-5n8tv +Mar 7 03:58:44.649: INFO: Got endpoints: latency-svc-d86r8 [744.974491ms] +Mar 7 03:58:44.662: INFO: Created: latency-svc-xh98g +Mar 7 03:58:44.702: INFO: Got endpoints: latency-svc-dfjjt [749.375116ms] +Mar 7 03:58:44.714: INFO: Created: latency-svc-pcwbw +Mar 7 03:58:44.750: INFO: Got endpoints: latency-svc-4k65j [750.333324ms] +Mar 7 03:58:44.762: INFO: Created: latency-svc-ggwk5 +Mar 7 03:58:44.800: INFO: Got endpoints: latency-svc-7vs4d [750.667797ms] +Mar 7 03:58:44.814: INFO: Created: latency-svc-tssfk +Mar 7 03:58:44.854: INFO: Got endpoints: latency-svc-pqfr7 [754.13358ms] +Mar 7 03:58:44.870: INFO: Created: latency-svc-hwpld +Mar 7 03:58:44.900: INFO: Got endpoints: latency-svc-rhzp5 [750.387618ms] +Mar 7 03:58:44.913: INFO: Created: latency-svc-f7rt6 +Mar 7 03:58:44.949: INFO: Got endpoints: latency-svc-jvm7r [749.046278ms] +Mar 7 03:58:44.961: INFO: Created: latency-svc-qgtnf +Mar 7 03:58:44.998: INFO: Got endpoints: latency-svc-7mhdd [746.758785ms] +Mar 7 03:58:45.009: INFO: Created: latency-svc-nwh72 +Mar 7 03:58:45.050: INFO: Got endpoints: latency-svc-qt4l9 [749.338884ms] +Mar 7 03:58:45.063: INFO: Created: latency-svc-gbm4r +Mar 7 03:58:45.100: INFO: Got endpoints: latency-svc-s959j [749.793763ms] +Mar 7 03:58:45.112: INFO: Created: latency-svc-w6555 +Mar 7 03:58:45.149: INFO: Got endpoints: latency-svc-nnqr2 [748.576838ms] +Mar 7 03:58:45.161: INFO: Created: latency-svc-bvp48 +Mar 7 03:58:45.199: INFO: Got endpoints: latency-svc-q2rfq [742.154741ms] +Mar 7 03:58:45.213: INFO: Created: latency-svc-ggwsp +Mar 7 03:58:45.250: INFO: Got endpoints: latency-svc-snwlf [751.341148ms] +Mar 7 03:58:45.262: INFO: Created: latency-svc-kt7q9 +Mar 7 03:58:45.298: INFO: Got endpoints: latency-svc-7vvg8 [748.188704ms] +Mar 7 03:58:45.310: INFO: Created: latency-svc-5s4zb +Mar 7 03:58:45.350: INFO: Got endpoints: latency-svc-5n8tv [750.752216ms] +Mar 7 03:58:45.363: INFO: Created: latency-svc-6jgjd +Mar 7 03:58:45.399: INFO: Got endpoints: latency-svc-xh98g [749.988279ms] +Mar 7 03:58:45.412: INFO: Created: latency-svc-m49j8 +Mar 7 03:58:45.449: INFO: Got endpoints: latency-svc-pcwbw [746.905327ms] +Mar 7 03:58:45.460: INFO: Created: latency-svc-9gcnt +Mar 7 03:58:45.499: INFO: Got endpoints: latency-svc-ggwk5 [749.496547ms] +Mar 7 03:58:45.512: INFO: Created: latency-svc-f7l4l +Mar 7 03:58:45.549: INFO: Got endpoints: latency-svc-tssfk [748.788289ms] +Mar 7 03:58:45.562: INFO: Created: latency-svc-6tqzv +Mar 7 03:58:45.599: INFO: Got endpoints: latency-svc-hwpld [744.370073ms] +Mar 7 03:58:45.611: INFO: Created: latency-svc-xqng6 +Mar 7 03:58:45.649: INFO: Got endpoints: latency-svc-f7rt6 [749.448263ms] +Mar 7 03:58:45.671: INFO: Created: latency-svc-wd62t +Mar 7 03:58:45.700: INFO: Got endpoints: latency-svc-qgtnf [751.352551ms] +Mar 7 03:58:45.714: INFO: Created: latency-svc-8b9gf +Mar 7 03:58:45.749: INFO: Got endpoints: latency-svc-nwh72 [751.253023ms] +Mar 7 03:58:45.762: INFO: Created: latency-svc-tz689 +Mar 7 03:58:45.800: INFO: Got endpoints: latency-svc-gbm4r [750.245362ms] +Mar 7 03:58:45.817: INFO: Created: 
latency-svc-zk877 +Mar 7 03:58:45.857: INFO: Got endpoints: latency-svc-w6555 [756.618587ms] +Mar 7 03:58:45.871: INFO: Created: latency-svc-4zg48 +Mar 7 03:58:45.899: INFO: Got endpoints: latency-svc-bvp48 [750.008953ms] +Mar 7 03:58:45.912: INFO: Created: latency-svc-jc2tz +Mar 7 03:58:45.950: INFO: Got endpoints: latency-svc-ggwsp [750.310832ms] +Mar 7 03:58:45.962: INFO: Created: latency-svc-9d52h +Mar 7 03:58:46.000: INFO: Got endpoints: latency-svc-kt7q9 [749.837389ms] +Mar 7 03:58:46.013: INFO: Created: latency-svc-jztgc +Mar 7 03:58:46.049: INFO: Got endpoints: latency-svc-5s4zb [750.506766ms] +Mar 7 03:58:46.068: INFO: Created: latency-svc-btmbl +Mar 7 03:58:46.100: INFO: Got endpoints: latency-svc-6jgjd [749.489219ms] +Mar 7 03:58:46.112: INFO: Created: latency-svc-lhjnk +Mar 7 03:58:46.151: INFO: Got endpoints: latency-svc-m49j8 [751.573479ms] +Mar 7 03:58:46.164: INFO: Created: latency-svc-spg6v +Mar 7 03:58:46.199: INFO: Got endpoints: latency-svc-9gcnt [750.036021ms] +Mar 7 03:58:46.211: INFO: Created: latency-svc-bq4sc +Mar 7 03:58:46.250: INFO: Got endpoints: latency-svc-f7l4l [751.264869ms] +Mar 7 03:58:46.266: INFO: Created: latency-svc-b5pgh +Mar 7 03:58:46.300: INFO: Got endpoints: latency-svc-6tqzv [751.287758ms] +Mar 7 03:58:46.312: INFO: Created: latency-svc-nwgcq +Mar 7 03:58:46.348: INFO: Got endpoints: latency-svc-xqng6 [749.493914ms] +Mar 7 03:58:46.360: INFO: Created: latency-svc-52cg7 +Mar 7 03:58:46.399: INFO: Got endpoints: latency-svc-wd62t [750.166489ms] +Mar 7 03:58:46.413: INFO: Created: latency-svc-rpf2n +Mar 7 03:58:46.450: INFO: Got endpoints: latency-svc-8b9gf [749.608226ms] +Mar 7 03:58:46.462: INFO: Created: latency-svc-jtjv2 +Mar 7 03:58:46.498: INFO: Got endpoints: latency-svc-tz689 [749.172994ms] +Mar 7 03:58:46.510: INFO: Created: latency-svc-5wzv7 +Mar 7 03:58:46.550: INFO: Got endpoints: latency-svc-zk877 [750.36914ms] +Mar 7 03:58:46.564: INFO: Created: latency-svc-2wgs8 +Mar 7 03:58:46.599: INFO: Got endpoints: latency-svc-4zg48 [742.06519ms] +Mar 7 03:58:46.611: INFO: Created: latency-svc-qhptc +Mar 7 03:58:46.648: INFO: Got endpoints: latency-svc-jc2tz [749.084628ms] +Mar 7 03:58:46.660: INFO: Created: latency-svc-vx2zz +Mar 7 03:58:46.699: INFO: Got endpoints: latency-svc-9d52h [749.537957ms] +Mar 7 03:58:46.712: INFO: Created: latency-svc-nsl9m +Mar 7 03:58:46.750: INFO: Got endpoints: latency-svc-jztgc [750.122186ms] +Mar 7 03:58:46.769: INFO: Created: latency-svc-4t4gc +Mar 7 03:58:46.805: INFO: Got endpoints: latency-svc-btmbl [755.892849ms] +Mar 7 03:58:46.822: INFO: Created: latency-svc-b2c5j +Mar 7 03:58:46.861: INFO: Got endpoints: latency-svc-lhjnk [761.482419ms] +Mar 7 03:58:46.879: INFO: Created: latency-svc-c49xj +Mar 7 03:58:46.903: INFO: Got endpoints: latency-svc-spg6v [751.815222ms] +Mar 7 03:58:46.918: INFO: Created: latency-svc-pnftv +Mar 7 03:58:46.950: INFO: Got endpoints: latency-svc-bq4sc [751.404336ms] +Mar 7 03:58:46.966: INFO: Created: latency-svc-67cpn +Mar 7 03:58:47.000: INFO: Got endpoints: latency-svc-b5pgh [749.610151ms] +Mar 7 03:58:47.015: INFO: Created: latency-svc-cjszz +Mar 7 03:58:47.051: INFO: Got endpoints: latency-svc-nwgcq [751.46157ms] +Mar 7 03:58:47.067: INFO: Created: latency-svc-7zv7l +Mar 7 03:58:47.100: INFO: Got endpoints: latency-svc-52cg7 [751.430502ms] +Mar 7 03:58:47.114: INFO: Created: latency-svc-mbc48 +Mar 7 03:58:47.150: INFO: Got endpoints: latency-svc-rpf2n [750.632931ms] +Mar 7 03:58:47.164: INFO: Created: latency-svc-shxdb +Mar 7 03:58:47.258: INFO: Got endpoints: 
latency-svc-jtjv2 [807.84475ms] +Mar 7 03:58:47.286: INFO: Created: latency-svc-wlpcl +Mar 7 03:58:47.286: INFO: Got endpoints: latency-svc-5wzv7 [787.645254ms] +Mar 7 03:58:47.377: INFO: Got endpoints: latency-svc-2wgs8 [826.161522ms] +Mar 7 03:58:47.390: INFO: Got endpoints: latency-svc-qhptc [791.032021ms] +Mar 7 03:58:47.401: INFO: Created: latency-svc-2tvnp +Mar 7 03:58:47.444: INFO: Got endpoints: latency-svc-vx2zz [795.318478ms] +Mar 7 03:58:47.528: INFO: Got endpoints: latency-svc-4t4gc [778.228841ms] +Mar 7 03:58:47.529: INFO: Got endpoints: latency-svc-nsl9m [829.992826ms] +Mar 7 03:58:47.537: INFO: Created: latency-svc-4btk4 +Mar 7 03:58:47.544: INFO: Created: latency-svc-dmjzg +Mar 7 03:58:47.553: INFO: Got endpoints: latency-svc-b2c5j [747.898218ms] +Mar 7 03:58:47.561: INFO: Created: latency-svc-qp2sl +Mar 7 03:58:47.570: INFO: Created: latency-svc-wm7tl +Mar 7 03:58:47.580: INFO: Created: latency-svc-qwl5w +Mar 7 03:58:47.589: INFO: Created: latency-svc-vbcpv +Mar 7 03:58:47.600: INFO: Got endpoints: latency-svc-c49xj [738.418917ms] +Mar 7 03:58:47.614: INFO: Created: latency-svc-zc5vn +Mar 7 03:58:47.650: INFO: Got endpoints: latency-svc-pnftv [747.560506ms] +Mar 7 03:58:47.664: INFO: Created: latency-svc-7m727 +Mar 7 03:58:47.700: INFO: Got endpoints: latency-svc-67cpn [749.442612ms] +Mar 7 03:58:47.749: INFO: Got endpoints: latency-svc-cjszz [749.109978ms] +Mar 7 03:58:47.800: INFO: Got endpoints: latency-svc-7zv7l [748.160261ms] +Mar 7 03:58:47.851: INFO: Got endpoints: latency-svc-mbc48 [751.406308ms] +Mar 7 03:58:47.900: INFO: Got endpoints: latency-svc-shxdb [749.45259ms] +Mar 7 03:58:47.949: INFO: Got endpoints: latency-svc-wlpcl [691.475424ms] +Mar 7 03:58:47.999: INFO: Got endpoints: latency-svc-2tvnp [712.747088ms] +Mar 7 03:58:48.049: INFO: Got endpoints: latency-svc-4btk4 [672.28271ms] +Mar 7 03:58:48.099: INFO: Got endpoints: latency-svc-dmjzg [708.774112ms] +Mar 7 03:58:48.151: INFO: Got endpoints: latency-svc-qp2sl [706.831428ms] +Mar 7 03:58:48.200: INFO: Got endpoints: latency-svc-wm7tl [671.441942ms] +Mar 7 03:58:48.256: INFO: Got endpoints: latency-svc-qwl5w [726.205947ms] +Mar 7 03:58:48.299: INFO: Got endpoints: latency-svc-vbcpv [745.504356ms] +Mar 7 03:58:48.349: INFO: Got endpoints: latency-svc-zc5vn [749.031396ms] +Mar 7 03:58:48.405: INFO: Got endpoints: latency-svc-7m727 [754.896707ms] +Mar 7 03:58:48.405: INFO: Latencies: [31.158168ms 39.011589ms 58.413669ms 83.075363ms 167.184941ms 169.438992ms 170.514993ms 170.759481ms 170.931565ms 171.094617ms 172.015773ms 172.491067ms 173.12267ms 174.294979ms 174.424909ms 175.026081ms 175.687866ms 176.218257ms 176.759899ms 176.973339ms 177.061987ms 177.66831ms 179.021303ms 179.210732ms 179.619966ms 180.840444ms 182.611034ms 192.636759ms 202.635564ms 208.829358ms 221.316519ms 231.453921ms 237.089393ms 245.178097ms 255.250404ms 265.987552ms 266.269168ms 276.62936ms 278.557971ms 281.414232ms 286.129838ms 288.34143ms 288.931582ms 302.848391ms 320.496661ms 349.229337ms 396.87649ms 430.170879ms 465.549472ms 524.081541ms 549.783175ms 591.071492ms 632.473439ms 670.724159ms 671.441942ms 672.28271ms 691.475424ms 706.831428ms 708.774112ms 711.530654ms 712.747088ms 726.205947ms 738.418917ms 738.621167ms 740.845916ms 742.06519ms 742.154741ms 744.053362ms 744.370073ms 744.974491ms 745.029812ms 745.504356ms 746.095325ms 746.333806ms 746.758785ms 746.805264ms 746.905327ms 747.085196ms 747.123108ms 747.266547ms 747.560506ms 747.729685ms 747.898218ms 748.01517ms 748.160261ms 748.188704ms 748.444824ms 748.576838ms 748.584214ms 
748.597223ms 748.642331ms 748.788289ms 748.821685ms 749.031396ms 749.046278ms 749.084628ms 749.109978ms 749.172994ms 749.18525ms 749.197646ms 749.249433ms 749.327222ms 749.338884ms 749.375116ms 749.442612ms 749.448263ms 749.45259ms 749.489219ms 749.493914ms 749.496547ms 749.537957ms 749.58895ms 749.591719ms 749.608226ms 749.610151ms 749.793763ms 749.837389ms 749.853201ms 749.856735ms 749.988279ms 750.008953ms 750.017517ms 750.036021ms 750.052264ms 750.077683ms 750.122186ms 750.128858ms 750.136532ms 750.166489ms 750.245362ms 750.288801ms 750.292926ms 750.310832ms 750.333324ms 750.36914ms 750.387618ms 750.437102ms 750.506766ms 750.559324ms 750.632931ms 750.655596ms 750.667797ms 750.752216ms 750.836978ms 750.857541ms 750.907232ms 750.91249ms 750.986577ms 750.994974ms 751.012673ms 751.183664ms 751.253023ms 751.2592ms 751.264869ms 751.287758ms 751.341148ms 751.352551ms 751.404336ms 751.406308ms 751.430502ms 751.443507ms 751.46157ms 751.573479ms 751.815222ms 752.456389ms 752.787601ms 753.302944ms 754.13358ms 754.703853ms 754.896707ms 755.892849ms 756.511533ms 756.618587ms 756.819851ms 757.276755ms 761.482419ms 763.067313ms 773.358114ms 778.228841ms 778.755106ms 787.645254ms 791.032021ms 795.225178ms 795.318478ms 797.856054ms 798.941622ms 799.126442ms 799.264863ms 799.418017ms 800.094388ms 800.395888ms 800.468896ms 801.01353ms 802.592608ms 802.947142ms 804.101809ms 806.197117ms 807.84475ms 826.161522ms 829.992826ms] +Mar 7 03:58:48.405: INFO: 50 %ile: 749.249433ms +Mar 7 03:58:48.405: INFO: 90 %ile: 787.645254ms +Mar 7 03:58:48.405: INFO: 99 %ile: 826.161522ms +Mar 7 03:58:48.405: INFO: Total sample count: 200 +[AfterEach] [sig-network] Service endpoints latency + test/e2e/framework/framework.go:187 +Mar 7 03:58:48.406: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "svc-latency-9373" for this suite. 
03/07/23 03:58:48.412 +{"msg":"PASSED [sig-network] Service endpoints latency should not be very high [Conformance]","completed":340,"skipped":6206,"failed":0} +------------------------------ +• [SLOW TEST] [9.813 seconds] +[sig-network] Service endpoints latency +test/e2e/network/common/framework.go:23 + should not be very high [Conformance] + test/e2e/network/service_latency.go:59 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-network] Service endpoints latency + test/e2e/framework/framework.go:186 + STEP: Creating a kubernetes client 03/07/23 03:58:38.604 + Mar 7 03:58:38.605: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 + STEP: Building a namespace api object, basename svc-latency 03/07/23 03:58:38.606 + STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 03:58:38.622 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 03:58:38.624 + [It] should not be very high [Conformance] + test/e2e/network/service_latency.go:59 + Mar 7 03:58:38.625: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 + STEP: creating replication controller svc-latency-rc in namespace svc-latency-9373 03/07/23 03:58:38.626 + I0307 03:58:38.630638 22 runners.go:193] Created replication controller with name: svc-latency-rc, namespace: svc-latency-9373, replica count: 1 + I0307 03:58:39.681532 22 runners.go:193] svc-latency-rc Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady + Mar 7 03:58:39.793: INFO: Created: latency-svc-mb2wz + Mar 7 03:58:39.802: INFO: Got endpoints: latency-svc-mb2wz [20.099598ms] + Mar 7 03:58:39.820: INFO: Created: latency-svc-czs7b + Mar 7 03:58:39.834: INFO: Got endpoints: latency-svc-czs7b [31.158168ms] + Mar 7 03:58:39.837: INFO: Created: latency-svc-jtkqq + Mar 7 03:58:39.842: INFO: Got endpoints: latency-svc-jtkqq [39.011589ms] + Mar 7 03:58:39.852: INFO: Created: latency-svc-n2jlr + Mar 7 03:58:39.861: INFO: Got endpoints: latency-svc-n2jlr [58.413669ms] + Mar 7 03:58:39.878: INFO: Created: latency-svc-cmmdr + Mar 7 03:58:39.886: INFO: Got endpoints: latency-svc-cmmdr [83.075363ms] + Mar 7 03:58:39.941: INFO: Created: latency-svc-kr7bz + Mar 7 03:58:39.978: INFO: Created: latency-svc-m2flz + Mar 7 03:58:39.978: INFO: Got endpoints: latency-svc-m2flz [176.218257ms] + Mar 7 03:58:39.996: INFO: Got endpoints: latency-svc-kr7bz [192.636759ms] + Mar 7 03:58:40.001: INFO: Created: latency-svc-dpsrx + Mar 7 03:58:40.012: INFO: Got endpoints: latency-svc-dpsrx [208.829358ms] + Mar 7 03:58:40.017: INFO: Created: latency-svc-llmdf + Mar 7 03:58:40.024: INFO: Got endpoints: latency-svc-llmdf [221.316519ms] + Mar 7 03:58:40.027: INFO: Created: latency-svc-rflhg + Mar 7 03:58:40.035: INFO: Got endpoints: latency-svc-rflhg [231.453921ms] + Mar 7 03:58:40.039: INFO: Created: latency-svc-876sx + Mar 7 03:58:40.049: INFO: Got endpoints: latency-svc-876sx [245.178097ms] + Mar 7 03:58:40.053: INFO: Created: latency-svc-dmggm + Mar 7 03:58:40.059: INFO: Got endpoints: latency-svc-dmggm [255.250404ms] + Mar 7 03:58:40.064: INFO: Created: latency-svc-zmlff + Mar 7 03:58:40.070: INFO: Got endpoints: latency-svc-zmlff [266.269168ms] + Mar 7 03:58:40.076: INFO: Created: latency-svc-5pfss + Mar 7 03:58:40.086: INFO: Got endpoints: latency-svc-5pfss [281.414232ms] + Mar 7 03:58:40.088: INFO: Created: latency-svc-4drx8 + Mar 7 03:58:40.093: INFO: Got endpoints: latency-svc-4drx8 [288.931582ms] + Mar 7 03:58:40.100: INFO: Created: latency-svc-7nxlv + Mar 7 03:58:40.107: INFO: 
Got endpoints: latency-svc-7nxlv [302.848391ms] + Mar 7 03:58:40.112: INFO: Created: latency-svc-4qb2r + Mar 7 03:58:40.121: INFO: Created: latency-svc-x8tr8 + Mar 7 03:58:40.122: INFO: Got endpoints: latency-svc-4qb2r [288.34143ms] + Mar 7 03:58:40.128: INFO: Got endpoints: latency-svc-x8tr8 [286.129838ms] + Mar 7 03:58:40.135: INFO: Created: latency-svc-q58r2 + Mar 7 03:58:40.140: INFO: Got endpoints: latency-svc-q58r2 [278.557971ms] + Mar 7 03:58:40.143: INFO: Created: latency-svc-8ftc2 + Mar 7 03:58:40.152: INFO: Got endpoints: latency-svc-8ftc2 [265.987552ms] + Mar 7 03:58:40.157: INFO: Created: latency-svc-t8zp8 + Mar 7 03:58:40.161: INFO: Got endpoints: latency-svc-t8zp8 [182.611034ms] + Mar 7 03:58:40.165: INFO: Created: latency-svc-vkt5s + Mar 7 03:58:40.171: INFO: Got endpoints: latency-svc-vkt5s [175.687866ms] + Mar 7 03:58:40.178: INFO: Created: latency-svc-bctdl + Mar 7 03:58:40.186: INFO: Got endpoints: latency-svc-bctdl [174.424909ms] + Mar 7 03:58:40.191: INFO: Created: latency-svc-pd6mp + Mar 7 03:58:40.195: INFO: Got endpoints: latency-svc-pd6mp [171.094617ms] + Mar 7 03:58:40.203: INFO: Created: latency-svc-7qc8x + Mar 7 03:58:40.212: INFO: Got endpoints: latency-svc-7qc8x [176.759899ms] + Mar 7 03:58:40.216: INFO: Created: latency-svc-4dx7w + Mar 7 03:58:40.220: INFO: Got endpoints: latency-svc-4dx7w [170.931565ms] + Mar 7 03:58:40.227: INFO: Created: latency-svc-6d6h6 + Mar 7 03:58:40.231: INFO: Got endpoints: latency-svc-6d6h6 [172.491067ms] + Mar 7 03:58:40.242: INFO: Created: latency-svc-mvhwk + Mar 7 03:58:40.249: INFO: Got endpoints: latency-svc-mvhwk [179.021303ms] + Mar 7 03:58:40.256: INFO: Created: latency-svc-gc879 + Mar 7 03:58:40.263: INFO: Got endpoints: latency-svc-gc879 [176.973339ms] + Mar 7 03:58:40.265: INFO: Created: latency-svc-m5zvh + Mar 7 03:58:40.272: INFO: Got endpoints: latency-svc-m5zvh [179.619966ms] + Mar 7 03:58:40.276: INFO: Created: latency-svc-qd7w2 + Mar 7 03:58:40.282: INFO: Got endpoints: latency-svc-qd7w2 [174.294979ms] + Mar 7 03:58:40.290: INFO: Created: latency-svc-jg5tf + Mar 7 03:58:40.299: INFO: Got endpoints: latency-svc-jg5tf [177.061987ms] + Mar 7 03:58:40.299: INFO: Created: latency-svc-25x9x + Mar 7 03:58:40.303: INFO: Got endpoints: latency-svc-25x9x [175.026081ms] + Mar 7 03:58:40.315: INFO: Created: latency-svc-6gcsl + Mar 7 03:58:40.319: INFO: Got endpoints: latency-svc-6gcsl [179.210732ms] + Mar 7 03:58:40.321: INFO: Created: latency-svc-nwstb + Mar 7 03:58:40.333: INFO: Got endpoints: latency-svc-nwstb [180.840444ms] + Mar 7 03:58:40.333: INFO: Created: latency-svc-v559c + Mar 7 03:58:40.334: INFO: Got endpoints: latency-svc-v559c [173.12267ms] + Mar 7 03:58:40.341: INFO: Created: latency-svc-qz5zd + Mar 7 03:58:40.349: INFO: Got endpoints: latency-svc-qz5zd [177.66831ms] + Mar 7 03:58:40.353: INFO: Created: latency-svc-p5hlh + Mar 7 03:58:40.359: INFO: Got endpoints: latency-svc-p5hlh [172.015773ms] + Mar 7 03:58:40.360: INFO: Created: latency-svc-tl8zd + Mar 7 03:58:40.366: INFO: Got endpoints: latency-svc-tl8zd [170.759481ms] + Mar 7 03:58:40.371: INFO: Created: latency-svc-jb8hs + Mar 7 03:58:40.379: INFO: Got endpoints: latency-svc-jb8hs [167.184941ms] + Mar 7 03:58:40.384: INFO: Created: latency-svc-68rh5 + Mar 7 03:58:40.389: INFO: Got endpoints: latency-svc-68rh5 [169.438992ms] + Mar 7 03:58:40.392: INFO: Created: latency-svc-ltx56 + Mar 7 03:58:40.402: INFO: Got endpoints: latency-svc-ltx56 [170.514993ms] + Mar 7 03:58:40.404: INFO: Created: latency-svc-wf4nw + Mar 7 03:58:40.416: INFO: Created: 
latency-svc-2s59g + Mar 7 03:58:40.426: INFO: Created: latency-svc-x587s + Mar 7 03:58:40.434: INFO: Created: latency-svc-5snc6 + Mar 7 03:58:40.445: INFO: Created: latency-svc-qmgbc + Mar 7 03:58:40.452: INFO: Got endpoints: latency-svc-wf4nw [202.635564ms] + Mar 7 03:58:40.461: INFO: Created: latency-svc-8q8vw + Mar 7 03:58:40.472: INFO: Created: latency-svc-5gd4w + Mar 7 03:58:40.481: INFO: Created: latency-svc-7z5nc + Mar 7 03:58:40.490: INFO: Created: latency-svc-8rx5l + Mar 7 03:58:40.500: INFO: Got endpoints: latency-svc-2s59g [237.089393ms] + Mar 7 03:58:40.503: INFO: Created: latency-svc-dnz7s + Mar 7 03:58:40.513: INFO: Created: latency-svc-7sncr + Mar 7 03:58:40.525: INFO: Created: latency-svc-qvzfk + Mar 7 03:58:40.534: INFO: Created: latency-svc-cgjxn + Mar 7 03:58:40.542: INFO: Created: latency-svc-22fjf + Mar 7 03:58:40.549: INFO: Got endpoints: latency-svc-x587s [276.62936ms] + Mar 7 03:58:40.553: INFO: Created: latency-svc-xk24r + Mar 7 03:58:40.561: INFO: Created: latency-svc-qp2kg + Mar 7 03:58:40.571: INFO: Created: latency-svc-45782 + Mar 7 03:58:40.583: INFO: Created: latency-svc-pb6zt + Mar 7 03:58:40.602: INFO: Got endpoints: latency-svc-5snc6 [320.496661ms] + Mar 7 03:58:40.614: INFO: Created: latency-svc-xm5v8 + Mar 7 03:58:40.648: INFO: Got endpoints: latency-svc-qmgbc [349.229337ms] + Mar 7 03:58:40.662: INFO: Created: latency-svc-txrhx + Mar 7 03:58:40.700: INFO: Got endpoints: latency-svc-8q8vw [396.87649ms] + Mar 7 03:58:40.714: INFO: Created: latency-svc-npzk4 + Mar 7 03:58:40.749: INFO: Got endpoints: latency-svc-5gd4w [430.170879ms] + Mar 7 03:58:40.760: INFO: Created: latency-svc-pn8dw + Mar 7 03:58:40.798: INFO: Got endpoints: latency-svc-7z5nc [465.549472ms] + Mar 7 03:58:40.814: INFO: Created: latency-svc-nv4qb + Mar 7 03:58:40.858: INFO: Got endpoints: latency-svc-8rx5l [524.081541ms] + Mar 7 03:58:40.879: INFO: Created: latency-svc-dk5sd + Mar 7 03:58:40.899: INFO: Got endpoints: latency-svc-dnz7s [549.783175ms] + Mar 7 03:58:40.912: INFO: Created: latency-svc-n7csz + Mar 7 03:58:40.950: INFO: Got endpoints: latency-svc-7sncr [591.071492ms] + Mar 7 03:58:40.962: INFO: Created: latency-svc-mffjp + Mar 7 03:58:40.999: INFO: Got endpoints: latency-svc-qvzfk [632.473439ms] + Mar 7 03:58:41.012: INFO: Created: latency-svc-9chxd + Mar 7 03:58:41.050: INFO: Got endpoints: latency-svc-cgjxn [670.724159ms] + Mar 7 03:58:41.061: INFO: Created: latency-svc-2vl4w + Mar 7 03:58:41.101: INFO: Got endpoints: latency-svc-22fjf [711.530654ms] + Mar 7 03:58:41.121: INFO: Created: latency-svc-qd9vs + Mar 7 03:58:41.149: INFO: Got endpoints: latency-svc-xk24r [747.123108ms] + Mar 7 03:58:41.161: INFO: Created: latency-svc-27dn8 + Mar 7 03:58:41.199: INFO: Got endpoints: latency-svc-qp2kg [747.266547ms] + Mar 7 03:58:41.210: INFO: Created: latency-svc-24bkf + Mar 7 03:58:41.250: INFO: Got endpoints: latency-svc-45782 [749.853201ms] + Mar 7 03:58:41.267: INFO: Created: latency-svc-9hddd + Mar 7 03:58:41.300: INFO: Got endpoints: latency-svc-pb6zt [750.437102ms] + Mar 7 03:58:41.313: INFO: Created: latency-svc-qtnmx + Mar 7 03:58:41.348: INFO: Got endpoints: latency-svc-xm5v8 [746.333806ms] + Mar 7 03:58:41.361: INFO: Created: latency-svc-xvzpk + Mar 7 03:58:41.398: INFO: Got endpoints: latency-svc-txrhx [749.197646ms] + Mar 7 03:58:41.410: INFO: Created: latency-svc-9kjwv + Mar 7 03:58:41.448: INFO: Got endpoints: latency-svc-npzk4 [748.01517ms] + Mar 7 03:58:41.468: INFO: Created: latency-svc-8h465 + Mar 7 03:58:41.498: INFO: Got endpoints: latency-svc-pn8dw 
[749.249433ms] + Mar 7 03:58:41.512: INFO: Created: latency-svc-x8p6z + Mar 7 03:58:41.549: INFO: Got endpoints: latency-svc-nv4qb [751.012673ms] + Mar 7 03:58:41.563: INFO: Created: latency-svc-2tphj + Mar 7 03:58:41.599: INFO: Got endpoints: latency-svc-dk5sd [740.845916ms] + Mar 7 03:58:41.613: INFO: Created: latency-svc-fwmmz + Mar 7 03:58:41.650: INFO: Got endpoints: latency-svc-n7csz [751.183664ms] + Mar 7 03:58:41.662: INFO: Created: latency-svc-h2fsc + Mar 7 03:58:41.701: INFO: Got endpoints: latency-svc-mffjp [751.443507ms] + Mar 7 03:58:41.714: INFO: Created: latency-svc-88rsr + Mar 7 03:58:41.748: INFO: Got endpoints: latency-svc-9chxd [749.591719ms] + Mar 7 03:58:41.761: INFO: Created: latency-svc-smf8t + Mar 7 03:58:41.799: INFO: Got endpoints: latency-svc-2vl4w [749.58895ms] + Mar 7 03:58:41.813: INFO: Created: latency-svc-zmr7q + Mar 7 03:58:41.849: INFO: Got endpoints: latency-svc-qd9vs [748.584214ms] + Mar 7 03:58:41.867: INFO: Created: latency-svc-bx2j4 + Mar 7 03:58:41.900: INFO: Got endpoints: latency-svc-27dn8 [750.559324ms] + Mar 7 03:58:41.913: INFO: Created: latency-svc-vs9rl + Mar 7 03:58:41.949: INFO: Got endpoints: latency-svc-24bkf [750.136532ms] + Mar 7 03:58:41.961: INFO: Created: latency-svc-4g776 + Mar 7 03:58:41.999: INFO: Got endpoints: latency-svc-9hddd [749.18525ms] + Mar 7 03:58:42.013: INFO: Created: latency-svc-4tgdh + Mar 7 03:58:42.050: INFO: Got endpoints: latency-svc-qtnmx [750.128858ms] + Mar 7 03:58:42.062: INFO: Created: latency-svc-kp7g8 + Mar 7 03:58:42.098: INFO: Got endpoints: latency-svc-xvzpk [749.856735ms] + Mar 7 03:58:42.111: INFO: Created: latency-svc-q2f9n + Mar 7 03:58:42.161: INFO: Got endpoints: latency-svc-9kjwv [763.067313ms] + Mar 7 03:58:42.173: INFO: Created: latency-svc-99q56 + Mar 7 03:58:42.199: INFO: Got endpoints: latency-svc-8h465 [750.857541ms] + Mar 7 03:58:42.211: INFO: Created: latency-svc-7cd6k + Mar 7 03:58:42.251: INFO: Got endpoints: latency-svc-x8p6z [752.787601ms] + Mar 7 03:58:42.263: INFO: Created: latency-svc-chbtr + Mar 7 03:58:42.300: INFO: Got endpoints: latency-svc-2tphj [750.292926ms] + Mar 7 03:58:42.313: INFO: Created: latency-svc-b7845 + Mar 7 03:58:42.348: INFO: Got endpoints: latency-svc-fwmmz [748.444824ms] + Mar 7 03:58:42.361: INFO: Created: latency-svc-dkxvh + Mar 7 03:58:42.399: INFO: Got endpoints: latency-svc-h2fsc [748.642331ms] + Mar 7 03:58:42.413: INFO: Created: latency-svc-xjkcc + Mar 7 03:58:42.450: INFO: Got endpoints: latency-svc-88rsr [749.327222ms] + Mar 7 03:58:42.463: INFO: Created: latency-svc-4j9gt + Mar 7 03:58:42.498: INFO: Got endpoints: latency-svc-smf8t [750.077683ms] + Mar 7 03:58:42.511: INFO: Created: latency-svc-mbflk + Mar 7 03:58:42.554: INFO: Got endpoints: latency-svc-zmr7q [754.703853ms] + Mar 7 03:58:42.569: INFO: Created: latency-svc-mz66v + Mar 7 03:58:42.603: INFO: Got endpoints: latency-svc-bx2j4 [753.302944ms] + Mar 7 03:58:42.620: INFO: Created: latency-svc-wzq4l + Mar 7 03:58:42.648: INFO: Got endpoints: latency-svc-vs9rl [748.597223ms] + Mar 7 03:58:42.662: INFO: Created: latency-svc-wjfbj + Mar 7 03:58:42.699: INFO: Got endpoints: latency-svc-4g776 [750.017517ms] + Mar 7 03:58:42.710: INFO: Created: latency-svc-6rqjx + Mar 7 03:58:42.750: INFO: Got endpoints: latency-svc-4tgdh [750.655596ms] + Mar 7 03:58:42.763: INFO: Created: latency-svc-dcmdh + Mar 7 03:58:42.800: INFO: Got endpoints: latency-svc-kp7g8 [750.052264ms] + Mar 7 03:58:42.816: INFO: Created: latency-svc-rxf5g + Mar 7 03:58:42.849: INFO: Got endpoints: latency-svc-q2f9n [750.986577ms] + 
Mar 7 03:58:42.863: INFO: Created: latency-svc-vdjzf + Mar 7 03:58:42.899: INFO: Got endpoints: latency-svc-99q56 [738.621167ms] + Mar 7 03:58:42.956: INFO: Got endpoints: latency-svc-7cd6k [757.276755ms] + Mar 7 03:58:42.959: INFO: Created: latency-svc-rrgmq + Mar 7 03:58:42.977: INFO: Created: latency-svc-zgpbl + Mar 7 03:58:43.025: INFO: Got endpoints: latency-svc-chbtr [773.358114ms] + Mar 7 03:58:43.043: INFO: Created: latency-svc-sq669 + Mar 7 03:58:43.056: INFO: Got endpoints: latency-svc-b7845 [756.511533ms] + Mar 7 03:58:43.071: INFO: Created: latency-svc-spbjw + Mar 7 03:58:43.100: INFO: Got endpoints: latency-svc-dkxvh [752.456389ms] + Mar 7 03:58:43.118: INFO: Created: latency-svc-bpcm4 + Mar 7 03:58:43.150: INFO: Got endpoints: latency-svc-xjkcc [750.907232ms] + Mar 7 03:58:43.164: INFO: Created: latency-svc-6xg8m + Mar 7 03:58:43.201: INFO: Got endpoints: latency-svc-4j9gt [750.836978ms] + Mar 7 03:58:43.214: INFO: Created: latency-svc-j5h7j + Mar 7 03:58:43.250: INFO: Got endpoints: latency-svc-mbflk [750.994974ms] + Mar 7 03:58:43.263: INFO: Created: latency-svc-pbsdq + Mar 7 03:58:43.300: INFO: Got endpoints: latency-svc-mz66v [746.095325ms] + Mar 7 03:58:43.311: INFO: Created: latency-svc-4sdd9 + Mar 7 03:58:43.350: INFO: Got endpoints: latency-svc-wzq4l [747.729685ms] + Mar 7 03:58:43.364: INFO: Created: latency-svc-wc7ss + Mar 7 03:58:43.399: INFO: Got endpoints: latency-svc-wjfbj [750.91249ms] + Mar 7 03:58:43.411: INFO: Created: latency-svc-p4jfb + Mar 7 03:58:43.448: INFO: Got endpoints: latency-svc-6rqjx [748.821685ms] + Mar 7 03:58:43.460: INFO: Created: latency-svc-ff4jl + Mar 7 03:58:43.549: INFO: Got endpoints: latency-svc-dcmdh [799.418017ms] + Mar 7 03:58:43.562: INFO: Created: latency-svc-dqg5c + Mar 7 03:58:43.600: INFO: Got endpoints: latency-svc-rxf5g [800.395888ms] + Mar 7 03:58:43.616: INFO: Created: latency-svc-wnt7h + Mar 7 03:58:43.656: INFO: Got endpoints: latency-svc-vdjzf [806.197117ms] + Mar 7 03:58:43.689: INFO: Created: latency-svc-mkqhh + Mar 7 03:58:43.700: INFO: Got endpoints: latency-svc-rrgmq [801.01353ms] + Mar 7 03:58:43.714: INFO: Created: latency-svc-7qtwv + Mar 7 03:58:43.751: INFO: Got endpoints: latency-svc-zgpbl [795.225178ms] + Mar 7 03:58:43.765: INFO: Created: latency-svc-6kbg7 + Mar 7 03:58:43.803: INFO: Got endpoints: latency-svc-sq669 [778.755106ms] + Mar 7 03:58:43.818: INFO: Created: latency-svc-gsjqp + Mar 7 03:58:43.855: INFO: Got endpoints: latency-svc-spbjw [799.126442ms] + Mar 7 03:58:43.871: INFO: Created: latency-svc-cxzlp + Mar 7 03:58:43.904: INFO: Got endpoints: latency-svc-bpcm4 [804.101809ms] + Mar 7 03:58:43.918: INFO: Created: latency-svc-d86r8 + Mar 7 03:58:43.952: INFO: Got endpoints: latency-svc-6xg8m [802.592608ms] + Mar 7 03:58:43.965: INFO: Created: latency-svc-dfjjt + Mar 7 03:58:43.999: INFO: Got endpoints: latency-svc-j5h7j [797.856054ms] + Mar 7 03:58:44.013: INFO: Created: latency-svc-4k65j + Mar 7 03:58:44.049: INFO: Got endpoints: latency-svc-pbsdq [799.264863ms] + Mar 7 03:58:44.062: INFO: Created: latency-svc-7vs4d + Mar 7 03:58:44.100: INFO: Got endpoints: latency-svc-4sdd9 [800.094388ms] + Mar 7 03:58:44.112: INFO: Created: latency-svc-pqfr7 + Mar 7 03:58:44.149: INFO: Got endpoints: latency-svc-wc7ss [798.941622ms] + Mar 7 03:58:44.164: INFO: Created: latency-svc-rhzp5 + Mar 7 03:58:44.200: INFO: Got endpoints: latency-svc-p4jfb [800.468896ms] + Mar 7 03:58:44.213: INFO: Created: latency-svc-jvm7r + Mar 7 03:58:44.251: INFO: Got endpoints: latency-svc-ff4jl [802.947142ms] + Mar 7 03:58:44.262: 
INFO: Created: latency-svc-7mhdd + Mar 7 03:58:44.300: INFO: Got endpoints: latency-svc-dqg5c [751.2592ms] + Mar 7 03:58:44.322: INFO: Created: latency-svc-qt4l9 + Mar 7 03:58:44.351: INFO: Got endpoints: latency-svc-wnt7h [750.288801ms] + Mar 7 03:58:44.363: INFO: Created: latency-svc-s959j + Mar 7 03:58:44.401: INFO: Got endpoints: latency-svc-mkqhh [745.029812ms] + Mar 7 03:58:44.414: INFO: Created: latency-svc-nnqr2 + Mar 7 03:58:44.457: INFO: Got endpoints: latency-svc-7qtwv [756.819851ms] + Mar 7 03:58:44.470: INFO: Created: latency-svc-q2rfq + Mar 7 03:58:44.499: INFO: Got endpoints: latency-svc-6kbg7 [747.085196ms] + Mar 7 03:58:44.511: INFO: Created: latency-svc-snwlf + Mar 7 03:58:44.550: INFO: Got endpoints: latency-svc-gsjqp [746.805264ms] + Mar 7 03:58:44.562: INFO: Created: latency-svc-7vvg8 + Mar 7 03:58:44.599: INFO: Got endpoints: latency-svc-cxzlp [744.053362ms] + Mar 7 03:58:44.611: INFO: Created: latency-svc-5n8tv + Mar 7 03:58:44.649: INFO: Got endpoints: latency-svc-d86r8 [744.974491ms] + Mar 7 03:58:44.662: INFO: Created: latency-svc-xh98g + Mar 7 03:58:44.702: INFO: Got endpoints: latency-svc-dfjjt [749.375116ms] + Mar 7 03:58:44.714: INFO: Created: latency-svc-pcwbw + Mar 7 03:58:44.750: INFO: Got endpoints: latency-svc-4k65j [750.333324ms] + Mar 7 03:58:44.762: INFO: Created: latency-svc-ggwk5 + Mar 7 03:58:44.800: INFO: Got endpoints: latency-svc-7vs4d [750.667797ms] + Mar 7 03:58:44.814: INFO: Created: latency-svc-tssfk + Mar 7 03:58:44.854: INFO: Got endpoints: latency-svc-pqfr7 [754.13358ms] + Mar 7 03:58:44.870: INFO: Created: latency-svc-hwpld + Mar 7 03:58:44.900: INFO: Got endpoints: latency-svc-rhzp5 [750.387618ms] + Mar 7 03:58:44.913: INFO: Created: latency-svc-f7rt6 + Mar 7 03:58:44.949: INFO: Got endpoints: latency-svc-jvm7r [749.046278ms] + Mar 7 03:58:44.961: INFO: Created: latency-svc-qgtnf + Mar 7 03:58:44.998: INFO: Got endpoints: latency-svc-7mhdd [746.758785ms] + Mar 7 03:58:45.009: INFO: Created: latency-svc-nwh72 + Mar 7 03:58:45.050: INFO: Got endpoints: latency-svc-qt4l9 [749.338884ms] + Mar 7 03:58:45.063: INFO: Created: latency-svc-gbm4r + Mar 7 03:58:45.100: INFO: Got endpoints: latency-svc-s959j [749.793763ms] + Mar 7 03:58:45.112: INFO: Created: latency-svc-w6555 + Mar 7 03:58:45.149: INFO: Got endpoints: latency-svc-nnqr2 [748.576838ms] + Mar 7 03:58:45.161: INFO: Created: latency-svc-bvp48 + Mar 7 03:58:45.199: INFO: Got endpoints: latency-svc-q2rfq [742.154741ms] + Mar 7 03:58:45.213: INFO: Created: latency-svc-ggwsp + Mar 7 03:58:45.250: INFO: Got endpoints: latency-svc-snwlf [751.341148ms] + Mar 7 03:58:45.262: INFO: Created: latency-svc-kt7q9 + Mar 7 03:58:45.298: INFO: Got endpoints: latency-svc-7vvg8 [748.188704ms] + Mar 7 03:58:45.310: INFO: Created: latency-svc-5s4zb + Mar 7 03:58:45.350: INFO: Got endpoints: latency-svc-5n8tv [750.752216ms] + Mar 7 03:58:45.363: INFO: Created: latency-svc-6jgjd + Mar 7 03:58:45.399: INFO: Got endpoints: latency-svc-xh98g [749.988279ms] + Mar 7 03:58:45.412: INFO: Created: latency-svc-m49j8 + Mar 7 03:58:45.449: INFO: Got endpoints: latency-svc-pcwbw [746.905327ms] + Mar 7 03:58:45.460: INFO: Created: latency-svc-9gcnt + Mar 7 03:58:45.499: INFO: Got endpoints: latency-svc-ggwk5 [749.496547ms] + Mar 7 03:58:45.512: INFO: Created: latency-svc-f7l4l + Mar 7 03:58:45.549: INFO: Got endpoints: latency-svc-tssfk [748.788289ms] + Mar 7 03:58:45.562: INFO: Created: latency-svc-6tqzv + Mar 7 03:58:45.599: INFO: Got endpoints: latency-svc-hwpld [744.370073ms] + Mar 7 03:58:45.611: INFO: Created: 
latency-svc-xqng6 + Mar 7 03:58:45.649: INFO: Got endpoints: latency-svc-f7rt6 [749.448263ms] + Mar 7 03:58:45.671: INFO: Created: latency-svc-wd62t + Mar 7 03:58:45.700: INFO: Got endpoints: latency-svc-qgtnf [751.352551ms] + Mar 7 03:58:45.714: INFO: Created: latency-svc-8b9gf + Mar 7 03:58:45.749: INFO: Got endpoints: latency-svc-nwh72 [751.253023ms] + Mar 7 03:58:45.762: INFO: Created: latency-svc-tz689 + Mar 7 03:58:45.800: INFO: Got endpoints: latency-svc-gbm4r [750.245362ms] + Mar 7 03:58:45.817: INFO: Created: latency-svc-zk877 + Mar 7 03:58:45.857: INFO: Got endpoints: latency-svc-w6555 [756.618587ms] + Mar 7 03:58:45.871: INFO: Created: latency-svc-4zg48 + Mar 7 03:58:45.899: INFO: Got endpoints: latency-svc-bvp48 [750.008953ms] + Mar 7 03:58:45.912: INFO: Created: latency-svc-jc2tz + Mar 7 03:58:45.950: INFO: Got endpoints: latency-svc-ggwsp [750.310832ms] + Mar 7 03:58:45.962: INFO: Created: latency-svc-9d52h + Mar 7 03:58:46.000: INFO: Got endpoints: latency-svc-kt7q9 [749.837389ms] + Mar 7 03:58:46.013: INFO: Created: latency-svc-jztgc + Mar 7 03:58:46.049: INFO: Got endpoints: latency-svc-5s4zb [750.506766ms] + Mar 7 03:58:46.068: INFO: Created: latency-svc-btmbl + Mar 7 03:58:46.100: INFO: Got endpoints: latency-svc-6jgjd [749.489219ms] + Mar 7 03:58:46.112: INFO: Created: latency-svc-lhjnk + Mar 7 03:58:46.151: INFO: Got endpoints: latency-svc-m49j8 [751.573479ms] + Mar 7 03:58:46.164: INFO: Created: latency-svc-spg6v + Mar 7 03:58:46.199: INFO: Got endpoints: latency-svc-9gcnt [750.036021ms] + Mar 7 03:58:46.211: INFO: Created: latency-svc-bq4sc + Mar 7 03:58:46.250: INFO: Got endpoints: latency-svc-f7l4l [751.264869ms] + Mar 7 03:58:46.266: INFO: Created: latency-svc-b5pgh + Mar 7 03:58:46.300: INFO: Got endpoints: latency-svc-6tqzv [751.287758ms] + Mar 7 03:58:46.312: INFO: Created: latency-svc-nwgcq + Mar 7 03:58:46.348: INFO: Got endpoints: latency-svc-xqng6 [749.493914ms] + Mar 7 03:58:46.360: INFO: Created: latency-svc-52cg7 + Mar 7 03:58:46.399: INFO: Got endpoints: latency-svc-wd62t [750.166489ms] + Mar 7 03:58:46.413: INFO: Created: latency-svc-rpf2n + Mar 7 03:58:46.450: INFO: Got endpoints: latency-svc-8b9gf [749.608226ms] + Mar 7 03:58:46.462: INFO: Created: latency-svc-jtjv2 + Mar 7 03:58:46.498: INFO: Got endpoints: latency-svc-tz689 [749.172994ms] + Mar 7 03:58:46.510: INFO: Created: latency-svc-5wzv7 + Mar 7 03:58:46.550: INFO: Got endpoints: latency-svc-zk877 [750.36914ms] + Mar 7 03:58:46.564: INFO: Created: latency-svc-2wgs8 + Mar 7 03:58:46.599: INFO: Got endpoints: latency-svc-4zg48 [742.06519ms] + Mar 7 03:58:46.611: INFO: Created: latency-svc-qhptc + Mar 7 03:58:46.648: INFO: Got endpoints: latency-svc-jc2tz [749.084628ms] + Mar 7 03:58:46.660: INFO: Created: latency-svc-vx2zz + Mar 7 03:58:46.699: INFO: Got endpoints: latency-svc-9d52h [749.537957ms] + Mar 7 03:58:46.712: INFO: Created: latency-svc-nsl9m + Mar 7 03:58:46.750: INFO: Got endpoints: latency-svc-jztgc [750.122186ms] + Mar 7 03:58:46.769: INFO: Created: latency-svc-4t4gc + Mar 7 03:58:46.805: INFO: Got endpoints: latency-svc-btmbl [755.892849ms] + Mar 7 03:58:46.822: INFO: Created: latency-svc-b2c5j + Mar 7 03:58:46.861: INFO: Got endpoints: latency-svc-lhjnk [761.482419ms] + Mar 7 03:58:46.879: INFO: Created: latency-svc-c49xj + Mar 7 03:58:46.903: INFO: Got endpoints: latency-svc-spg6v [751.815222ms] + Mar 7 03:58:46.918: INFO: Created: latency-svc-pnftv + Mar 7 03:58:46.950: INFO: Got endpoints: latency-svc-bq4sc [751.404336ms] + Mar 7 03:58:46.966: INFO: Created: latency-svc-67cpn + 
Mar 7 03:58:47.000: INFO: Got endpoints: latency-svc-b5pgh [749.610151ms] + Mar 7 03:58:47.015: INFO: Created: latency-svc-cjszz + Mar 7 03:58:47.051: INFO: Got endpoints: latency-svc-nwgcq [751.46157ms] + Mar 7 03:58:47.067: INFO: Created: latency-svc-7zv7l + Mar 7 03:58:47.100: INFO: Got endpoints: latency-svc-52cg7 [751.430502ms] + Mar 7 03:58:47.114: INFO: Created: latency-svc-mbc48 + Mar 7 03:58:47.150: INFO: Got endpoints: latency-svc-rpf2n [750.632931ms] + Mar 7 03:58:47.164: INFO: Created: latency-svc-shxdb + Mar 7 03:58:47.258: INFO: Got endpoints: latency-svc-jtjv2 [807.84475ms] + Mar 7 03:58:47.286: INFO: Created: latency-svc-wlpcl + Mar 7 03:58:47.286: INFO: Got endpoints: latency-svc-5wzv7 [787.645254ms] + Mar 7 03:58:47.377: INFO: Got endpoints: latency-svc-2wgs8 [826.161522ms] + Mar 7 03:58:47.390: INFO: Got endpoints: latency-svc-qhptc [791.032021ms] + Mar 7 03:58:47.401: INFO: Created: latency-svc-2tvnp + Mar 7 03:58:47.444: INFO: Got endpoints: latency-svc-vx2zz [795.318478ms] + Mar 7 03:58:47.528: INFO: Got endpoints: latency-svc-4t4gc [778.228841ms] + Mar 7 03:58:47.529: INFO: Got endpoints: latency-svc-nsl9m [829.992826ms] + Mar 7 03:58:47.537: INFO: Created: latency-svc-4btk4 + Mar 7 03:58:47.544: INFO: Created: latency-svc-dmjzg + Mar 7 03:58:47.553: INFO: Got endpoints: latency-svc-b2c5j [747.898218ms] + Mar 7 03:58:47.561: INFO: Created: latency-svc-qp2sl + Mar 7 03:58:47.570: INFO: Created: latency-svc-wm7tl + Mar 7 03:58:47.580: INFO: Created: latency-svc-qwl5w + Mar 7 03:58:47.589: INFO: Created: latency-svc-vbcpv + Mar 7 03:58:47.600: INFO: Got endpoints: latency-svc-c49xj [738.418917ms] + Mar 7 03:58:47.614: INFO: Created: latency-svc-zc5vn + Mar 7 03:58:47.650: INFO: Got endpoints: latency-svc-pnftv [747.560506ms] + Mar 7 03:58:47.664: INFO: Created: latency-svc-7m727 + Mar 7 03:58:47.700: INFO: Got endpoints: latency-svc-67cpn [749.442612ms] + Mar 7 03:58:47.749: INFO: Got endpoints: latency-svc-cjszz [749.109978ms] + Mar 7 03:58:47.800: INFO: Got endpoints: latency-svc-7zv7l [748.160261ms] + Mar 7 03:58:47.851: INFO: Got endpoints: latency-svc-mbc48 [751.406308ms] + Mar 7 03:58:47.900: INFO: Got endpoints: latency-svc-shxdb [749.45259ms] + Mar 7 03:58:47.949: INFO: Got endpoints: latency-svc-wlpcl [691.475424ms] + Mar 7 03:58:47.999: INFO: Got endpoints: latency-svc-2tvnp [712.747088ms] + Mar 7 03:58:48.049: INFO: Got endpoints: latency-svc-4btk4 [672.28271ms] + Mar 7 03:58:48.099: INFO: Got endpoints: latency-svc-dmjzg [708.774112ms] + Mar 7 03:58:48.151: INFO: Got endpoints: latency-svc-qp2sl [706.831428ms] + Mar 7 03:58:48.200: INFO: Got endpoints: latency-svc-wm7tl [671.441942ms] + Mar 7 03:58:48.256: INFO: Got endpoints: latency-svc-qwl5w [726.205947ms] + Mar 7 03:58:48.299: INFO: Got endpoints: latency-svc-vbcpv [745.504356ms] + Mar 7 03:58:48.349: INFO: Got endpoints: latency-svc-zc5vn [749.031396ms] + Mar 7 03:58:48.405: INFO: Got endpoints: latency-svc-7m727 [754.896707ms] + Mar 7 03:58:48.405: INFO: Latencies: [31.158168ms 39.011589ms 58.413669ms 83.075363ms 167.184941ms 169.438992ms 170.514993ms 170.759481ms 170.931565ms 171.094617ms 172.015773ms 172.491067ms 173.12267ms 174.294979ms 174.424909ms 175.026081ms 175.687866ms 176.218257ms 176.759899ms 176.973339ms 177.061987ms 177.66831ms 179.021303ms 179.210732ms 179.619966ms 180.840444ms 182.611034ms 192.636759ms 202.635564ms 208.829358ms 221.316519ms 231.453921ms 237.089393ms 245.178097ms 255.250404ms 265.987552ms 266.269168ms 276.62936ms 278.557971ms 281.414232ms 286.129838ms 288.34143ms 
288.931582ms 302.848391ms 320.496661ms 349.229337ms 396.87649ms 430.170879ms 465.549472ms 524.081541ms 549.783175ms 591.071492ms 632.473439ms 670.724159ms 671.441942ms 672.28271ms 691.475424ms 706.831428ms 708.774112ms 711.530654ms 712.747088ms 726.205947ms 738.418917ms 738.621167ms 740.845916ms 742.06519ms 742.154741ms 744.053362ms 744.370073ms 744.974491ms 745.029812ms 745.504356ms 746.095325ms 746.333806ms 746.758785ms 746.805264ms 746.905327ms 747.085196ms 747.123108ms 747.266547ms 747.560506ms 747.729685ms 747.898218ms 748.01517ms 748.160261ms 748.188704ms 748.444824ms 748.576838ms 748.584214ms 748.597223ms 748.642331ms 748.788289ms 748.821685ms 749.031396ms 749.046278ms 749.084628ms 749.109978ms 749.172994ms 749.18525ms 749.197646ms 749.249433ms 749.327222ms 749.338884ms 749.375116ms 749.442612ms 749.448263ms 749.45259ms 749.489219ms 749.493914ms 749.496547ms 749.537957ms 749.58895ms 749.591719ms 749.608226ms 749.610151ms 749.793763ms 749.837389ms 749.853201ms 749.856735ms 749.988279ms 750.008953ms 750.017517ms 750.036021ms 750.052264ms 750.077683ms 750.122186ms 750.128858ms 750.136532ms 750.166489ms 750.245362ms 750.288801ms 750.292926ms 750.310832ms 750.333324ms 750.36914ms 750.387618ms 750.437102ms 750.506766ms 750.559324ms 750.632931ms 750.655596ms 750.667797ms 750.752216ms 750.836978ms 750.857541ms 750.907232ms 750.91249ms 750.986577ms 750.994974ms 751.012673ms 751.183664ms 751.253023ms 751.2592ms 751.264869ms 751.287758ms 751.341148ms 751.352551ms 751.404336ms 751.406308ms 751.430502ms 751.443507ms 751.46157ms 751.573479ms 751.815222ms 752.456389ms 752.787601ms 753.302944ms 754.13358ms 754.703853ms 754.896707ms 755.892849ms 756.511533ms 756.618587ms 756.819851ms 757.276755ms 761.482419ms 763.067313ms 773.358114ms 778.228841ms 778.755106ms 787.645254ms 791.032021ms 795.225178ms 795.318478ms 797.856054ms 798.941622ms 799.126442ms 799.264863ms 799.418017ms 800.094388ms 800.395888ms 800.468896ms 801.01353ms 802.592608ms 802.947142ms 804.101809ms 806.197117ms 807.84475ms 826.161522ms 829.992826ms] + Mar 7 03:58:48.405: INFO: 50 %ile: 749.249433ms + Mar 7 03:58:48.405: INFO: 90 %ile: 787.645254ms + Mar 7 03:58:48.405: INFO: 99 %ile: 826.161522ms + Mar 7 03:58:48.405: INFO: Total sample count: 200 + [AfterEach] [sig-network] Service endpoints latency + test/e2e/framework/framework.go:187 + Mar 7 03:58:48.406: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + STEP: Destroying namespace "svc-latency-9373" for this suite. 
03/07/23 03:58:48.412 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + works for multiple CRDs of different groups [Conformance] + test/e2e/apimachinery/crd_publish_openapi.go:275 +[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + test/e2e/framework/framework.go:186 +STEP: Creating a kubernetes client 03/07/23 03:58:48.419 +Mar 7 03:58:48.419: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 +STEP: Building a namespace api object, basename crd-publish-openapi 03/07/23 03:58:48.42 +STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 03:58:48.434 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 03:58:48.436 +[It] works for multiple CRDs of different groups [Conformance] + test/e2e/apimachinery/crd_publish_openapi.go:275 +STEP: CRs in different groups (two CRDs) show up in OpenAPI documentation 03/07/23 03:58:48.438 +Mar 7 03:58:48.438: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 +Mar 7 03:58:53.996: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 +[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + test/e2e/framework/framework.go:187 +Mar 7 03:59:11.583: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "crd-publish-openapi-40" for this suite. 03/07/23 03:59:11.608 +{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance]","completed":341,"skipped":6252,"failed":0} +------------------------------ +• [SLOW TEST] [23.195 seconds] +[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] +test/e2e/apimachinery/framework.go:23 + works for multiple CRDs of different groups [Conformance] + test/e2e/apimachinery/crd_publish_openapi.go:275 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + test/e2e/framework/framework.go:186 + STEP: Creating a kubernetes client 03/07/23 03:58:48.419 + Mar 7 03:58:48.419: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 + STEP: Building a namespace api object, basename crd-publish-openapi 03/07/23 03:58:48.42 + STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 03:58:48.434 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 03:58:48.436 + [It] works for multiple CRDs of different groups [Conformance] + test/e2e/apimachinery/crd_publish_openapi.go:275 + STEP: CRs in different groups (two CRDs) show up in OpenAPI documentation 03/07/23 03:58:48.438 + Mar 7 03:58:48.438: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 + Mar 7 03:58:53.996: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 + [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + test/e2e/framework/framework.go:187 + Mar 7 03:59:11.583: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + STEP: Destroying namespace "crd-publish-openapi-40" for this suite. 
03/07/23 03:59:11.608 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-storage] ConfigMap + optional updates should be reflected in volume [NodeConformance] [Conformance] + test/e2e/common/storage/configmap_volume.go:239 +[BeforeEach] [sig-storage] ConfigMap + test/e2e/framework/framework.go:186 +STEP: Creating a kubernetes client 03/07/23 03:59:11.615 +Mar 7 03:59:11.616: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 +STEP: Building a namespace api object, basename configmap 03/07/23 03:59:11.616 +STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 03:59:11.626 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 03:59:11.628 +[It] optional updates should be reflected in volume [NodeConformance] [Conformance] + test/e2e/common/storage/configmap_volume.go:239 +STEP: Creating configMap with name cm-test-opt-del-67d1d58e-e7c7-4209-b95e-75102217d6b6 03/07/23 03:59:11.632 +STEP: Creating configMap with name cm-test-opt-upd-0886f517-ee41-4803-8aae-4a0b2b5b7877 03/07/23 03:59:11.635 +STEP: Creating the pod 03/07/23 03:59:11.638 +Mar 7 03:59:11.644: INFO: Waiting up to 5m0s for pod "pod-configmaps-062209cf-dff9-4330-a2e1-ebdb5bd1c906" in namespace "configmap-5313" to be "running and ready" +Mar 7 03:59:11.647: INFO: Pod "pod-configmaps-062209cf-dff9-4330-a2e1-ebdb5bd1c906": Phase="Pending", Reason="", readiness=false. Elapsed: 2.637942ms +Mar 7 03:59:11.647: INFO: The phase of Pod pod-configmaps-062209cf-dff9-4330-a2e1-ebdb5bd1c906 is Pending, waiting for it to be Running (with Ready = true) +Mar 7 03:59:13.649: INFO: Pod "pod-configmaps-062209cf-dff9-4330-a2e1-ebdb5bd1c906": Phase="Running", Reason="", readiness=true. Elapsed: 2.005357355s +Mar 7 03:59:13.649: INFO: The phase of Pod pod-configmaps-062209cf-dff9-4330-a2e1-ebdb5bd1c906 is Running (Ready = true) +Mar 7 03:59:13.650: INFO: Pod "pod-configmaps-062209cf-dff9-4330-a2e1-ebdb5bd1c906" satisfied condition "running and ready" +STEP: Deleting configmap cm-test-opt-del-67d1d58e-e7c7-4209-b95e-75102217d6b6 03/07/23 03:59:13.684 +STEP: Updating configmap cm-test-opt-upd-0886f517-ee41-4803-8aae-4a0b2b5b7877 03/07/23 03:59:13.69 +STEP: Creating configMap with name cm-test-opt-create-8c3797c9-d673-4ead-b792-d4e2a031a3f8 03/07/23 03:59:13.694 +STEP: waiting to observe update in volume 03/07/23 03:59:13.697 +[AfterEach] [sig-storage] ConfigMap + test/e2e/framework/framework.go:187 +Mar 7 03:59:15.719: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "configmap-5313" for this suite. 
03/07/23 03:59:15.722 +{"msg":"PASSED [sig-storage] ConfigMap optional updates should be reflected in volume [NodeConformance] [Conformance]","completed":342,"skipped":6289,"failed":0} +------------------------------ +• [4.111 seconds] +[sig-storage] ConfigMap +test/e2e/common/storage/framework.go:23 + optional updates should be reflected in volume [NodeConformance] [Conformance] + test/e2e/common/storage/configmap_volume.go:239 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-storage] ConfigMap + test/e2e/framework/framework.go:186 + STEP: Creating a kubernetes client 03/07/23 03:59:11.615 + Mar 7 03:59:11.616: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 + STEP: Building a namespace api object, basename configmap 03/07/23 03:59:11.616 + STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 03:59:11.626 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 03:59:11.628 + [It] optional updates should be reflected in volume [NodeConformance] [Conformance] + test/e2e/common/storage/configmap_volume.go:239 + STEP: Creating configMap with name cm-test-opt-del-67d1d58e-e7c7-4209-b95e-75102217d6b6 03/07/23 03:59:11.632 + STEP: Creating configMap with name cm-test-opt-upd-0886f517-ee41-4803-8aae-4a0b2b5b7877 03/07/23 03:59:11.635 + STEP: Creating the pod 03/07/23 03:59:11.638 + Mar 7 03:59:11.644: INFO: Waiting up to 5m0s for pod "pod-configmaps-062209cf-dff9-4330-a2e1-ebdb5bd1c906" in namespace "configmap-5313" to be "running and ready" + Mar 7 03:59:11.647: INFO: Pod "pod-configmaps-062209cf-dff9-4330-a2e1-ebdb5bd1c906": Phase="Pending", Reason="", readiness=false. Elapsed: 2.637942ms + Mar 7 03:59:11.647: INFO: The phase of Pod pod-configmaps-062209cf-dff9-4330-a2e1-ebdb5bd1c906 is Pending, waiting for it to be Running (with Ready = true) + Mar 7 03:59:13.649: INFO: Pod "pod-configmaps-062209cf-dff9-4330-a2e1-ebdb5bd1c906": Phase="Running", Reason="", readiness=true. Elapsed: 2.005357355s + Mar 7 03:59:13.649: INFO: The phase of Pod pod-configmaps-062209cf-dff9-4330-a2e1-ebdb5bd1c906 is Running (Ready = true) + Mar 7 03:59:13.650: INFO: Pod "pod-configmaps-062209cf-dff9-4330-a2e1-ebdb5bd1c906" satisfied condition "running and ready" + STEP: Deleting configmap cm-test-opt-del-67d1d58e-e7c7-4209-b95e-75102217d6b6 03/07/23 03:59:13.684 + STEP: Updating configmap cm-test-opt-upd-0886f517-ee41-4803-8aae-4a0b2b5b7877 03/07/23 03:59:13.69 + STEP: Creating configMap with name cm-test-opt-create-8c3797c9-d673-4ead-b792-d4e2a031a3f8 03/07/23 03:59:13.694 + STEP: waiting to observe update in volume 03/07/23 03:59:13.697 + [AfterEach] [sig-storage] ConfigMap + test/e2e/framework/framework.go:187 + Mar 7 03:59:15.719: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + STEP: Destroying namespace "configmap-5313" for this suite. 
03/07/23 03:59:15.722 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-apps] CronJob + should replace jobs when ReplaceConcurrent [Conformance] + test/e2e/apps/cronjob.go:160 +[BeforeEach] [sig-apps] CronJob + test/e2e/framework/framework.go:186 +STEP: Creating a kubernetes client 03/07/23 03:59:15.728 +Mar 7 03:59:15.728: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 +STEP: Building a namespace api object, basename cronjob 03/07/23 03:59:15.729 +STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 03:59:15.738 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 03:59:15.742 +[It] should replace jobs when ReplaceConcurrent [Conformance] + test/e2e/apps/cronjob.go:160 +STEP: Creating a ReplaceConcurrent cronjob 03/07/23 03:59:15.744 +STEP: Ensuring a job is scheduled 03/07/23 03:59:15.748 +STEP: Ensuring exactly one is scheduled 03/07/23 04:00:01.751 +STEP: Ensuring exactly one running job exists by listing jobs explicitly 03/07/23 04:00:01.754 +STEP: Ensuring the job is replaced with a new one 03/07/23 04:00:01.757 +STEP: Removing cronjob 03/07/23 04:01:01.762 +[AfterEach] [sig-apps] CronJob + test/e2e/framework/framework.go:187 +Mar 7 04:01:01.767: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "cronjob-4056" for this suite. 03/07/23 04:01:01.77 +{"msg":"PASSED [sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","completed":343,"skipped":6314,"failed":0} +------------------------------ +• [SLOW TEST] [106.049 seconds] +[sig-apps] CronJob +test/e2e/apps/framework.go:23 + should replace jobs when ReplaceConcurrent [Conformance] + test/e2e/apps/cronjob.go:160 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-apps] CronJob + test/e2e/framework/framework.go:186 + STEP: Creating a kubernetes client 03/07/23 03:59:15.728 + Mar 7 03:59:15.728: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 + STEP: Building a namespace api object, basename cronjob 03/07/23 03:59:15.729 + STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 03:59:15.738 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 03:59:15.742 + [It] should replace jobs when ReplaceConcurrent [Conformance] + test/e2e/apps/cronjob.go:160 + STEP: Creating a ReplaceConcurrent cronjob 03/07/23 03:59:15.744 + STEP: Ensuring a job is scheduled 03/07/23 03:59:15.748 + STEP: Ensuring exactly one is scheduled 03/07/23 04:00:01.751 + STEP: Ensuring exactly one running job exists by listing jobs explicitly 03/07/23 04:00:01.754 + STEP: Ensuring the job is replaced with a new one 03/07/23 04:00:01.757 + STEP: Removing cronjob 03/07/23 04:01:01.762 + [AfterEach] [sig-apps] CronJob + test/e2e/framework/framework.go:187 + Mar 7 04:01:01.767: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + STEP: Destroying namespace "cronjob-4056" for this suite. 
03/07/23 04:01:01.77 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-network] EndpointSlice + should support creating EndpointSlice API operations [Conformance] + test/e2e/network/endpointslice.go:352 +[BeforeEach] [sig-network] EndpointSlice + test/e2e/framework/framework.go:186 +STEP: Creating a kubernetes client 03/07/23 04:01:01.78 +Mar 7 04:01:01.780: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 +STEP: Building a namespace api object, basename endpointslice 03/07/23 04:01:01.78 +STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 04:01:01.8 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 04:01:01.803 +[BeforeEach] [sig-network] EndpointSlice + test/e2e/network/endpointslice.go:51 +[It] should support creating EndpointSlice API operations [Conformance] + test/e2e/network/endpointslice.go:352 +STEP: getting /apis 03/07/23 04:01:01.808 +STEP: getting /apis/discovery.k8s.io 03/07/23 04:01:01.809 +STEP: getting /apis/discovery.k8s.iov1 03/07/23 04:01:01.812 +STEP: creating 03/07/23 04:01:01.813 +STEP: getting 03/07/23 04:01:01.83 +STEP: listing 03/07/23 04:01:01.832 +STEP: watching 03/07/23 04:01:01.834 +Mar 7 04:01:01.834: INFO: starting watch +STEP: cluster-wide listing 03/07/23 04:01:01.835 +STEP: cluster-wide watching 03/07/23 04:01:01.84 +Mar 7 04:01:01.840: INFO: starting watch +STEP: patching 03/07/23 04:01:01.841 +STEP: updating 03/07/23 04:01:01.847 +Mar 7 04:01:01.855: INFO: waiting for watch events with expected annotations +Mar 7 04:01:01.855: INFO: saw patched and updated annotations +STEP: deleting 03/07/23 04:01:01.855 +STEP: deleting a collection 03/07/23 04:01:01.865 +[AfterEach] [sig-network] EndpointSlice + test/e2e/framework/framework.go:187 +Mar 7 04:01:01.877: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "endpointslice-1646" for this suite. 
03/07/23 04:01:01.881 +{"msg":"PASSED [sig-network] EndpointSlice should support creating EndpointSlice API operations [Conformance]","completed":344,"skipped":6354,"failed":0} +------------------------------ +• [0.107 seconds] +[sig-network] EndpointSlice +test/e2e/network/common/framework.go:23 + should support creating EndpointSlice API operations [Conformance] + test/e2e/network/endpointslice.go:352 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-network] EndpointSlice + test/e2e/framework/framework.go:186 + STEP: Creating a kubernetes client 03/07/23 04:01:01.78 + Mar 7 04:01:01.780: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 + STEP: Building a namespace api object, basename endpointslice 03/07/23 04:01:01.78 + STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 04:01:01.8 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 04:01:01.803 + [BeforeEach] [sig-network] EndpointSlice + test/e2e/network/endpointslice.go:51 + [It] should support creating EndpointSlice API operations [Conformance] + test/e2e/network/endpointslice.go:352 + STEP: getting /apis 03/07/23 04:01:01.808 + STEP: getting /apis/discovery.k8s.io 03/07/23 04:01:01.809 + STEP: getting /apis/discovery.k8s.iov1 03/07/23 04:01:01.812 + STEP: creating 03/07/23 04:01:01.813 + STEP: getting 03/07/23 04:01:01.83 + STEP: listing 03/07/23 04:01:01.832 + STEP: watching 03/07/23 04:01:01.834 + Mar 7 04:01:01.834: INFO: starting watch + STEP: cluster-wide listing 03/07/23 04:01:01.835 + STEP: cluster-wide watching 03/07/23 04:01:01.84 + Mar 7 04:01:01.840: INFO: starting watch + STEP: patching 03/07/23 04:01:01.841 + STEP: updating 03/07/23 04:01:01.847 + Mar 7 04:01:01.855: INFO: waiting for watch events with expected annotations + Mar 7 04:01:01.855: INFO: saw patched and updated annotations + STEP: deleting 03/07/23 04:01:01.855 + STEP: deleting a collection 03/07/23 04:01:01.865 + [AfterEach] [sig-network] EndpointSlice + test/e2e/framework/framework.go:187 + Mar 7 04:01:01.877: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + STEP: Destroying namespace "endpointslice-1646" for this suite. 
03/07/23 04:01:01.881 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + should mutate custom resource with different stored version [Conformance] + test/e2e/apimachinery/webhook.go:322 +[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/framework/framework.go:186 +STEP: Creating a kubernetes client 03/07/23 04:01:01.887 +Mar 7 04:01:01.887: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 +STEP: Building a namespace api object, basename webhook 03/07/23 04:01:01.888 +STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 04:01:01.898 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 04:01:01.9 +[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/apimachinery/webhook.go:89 +STEP: Setting up server cert 03/07/23 04:01:01.912 +STEP: Create role binding to let webhook read extension-apiserver-authentication 03/07/23 04:01:02.449 +STEP: Deploying the webhook pod 03/07/23 04:01:02.455 +STEP: Wait for the deployment to be ready 03/07/23 04:01:02.475 +Mar 7 04:01:02.482: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set +STEP: Deploying the webhook service 03/07/23 04:01:04.491 +STEP: Verifying the service has paired with the endpoint 03/07/23 04:01:04.503 +Mar 7 04:01:05.503: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 +[It] should mutate custom resource with different stored version [Conformance] + test/e2e/apimachinery/webhook.go:322 +Mar 7 04:01:05.507: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 +STEP: Registering the mutating webhook for custom resource e2e-test-webhook-9305-crds.webhook.example.com via the AdmissionRegistration API 03/07/23 04:01:06.018 +Mar 7 04:01:06.035: INFO: Waiting for webhook configuration to be ready... +STEP: Creating a custom resource while v1 is storage version 03/07/23 04:01:06.143 +STEP: Patching Custom Resource Definition to set v2 as storage 03/07/23 04:01:08.204 +STEP: Patching the custom resource while v2 is storage version 03/07/23 04:01:08.217 +[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/framework/framework.go:187 +Mar 7 04:01:08.785: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "webhook-1987" for this suite. 03/07/23 04:01:08.788 +STEP: Destroying namespace "webhook-1987-markers" for this suite. 
03/07/23 04:01:08.794 +[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/apimachinery/webhook.go:104 +{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","completed":345,"skipped":6392,"failed":0} +------------------------------ +• [SLOW TEST] [6.993 seconds] +[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] +test/e2e/apimachinery/framework.go:23 + should mutate custom resource with different stored version [Conformance] + test/e2e/apimachinery/webhook.go:322 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/framework/framework.go:186 + STEP: Creating a kubernetes client 03/07/23 04:01:01.887 + Mar 7 04:01:01.887: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 + STEP: Building a namespace api object, basename webhook 03/07/23 04:01:01.888 + STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 04:01:01.898 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 04:01:01.9 + [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/apimachinery/webhook.go:89 + STEP: Setting up server cert 03/07/23 04:01:01.912 + STEP: Create role binding to let webhook read extension-apiserver-authentication 03/07/23 04:01:02.449 + STEP: Deploying the webhook pod 03/07/23 04:01:02.455 + STEP: Wait for the deployment to be ready 03/07/23 04:01:02.475 + Mar 7 04:01:02.482: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set + STEP: Deploying the webhook service 03/07/23 04:01:04.491 + STEP: Verifying the service has paired with the endpoint 03/07/23 04:01:04.503 + Mar 7 04:01:05.503: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 + [It] should mutate custom resource with different stored version [Conformance] + test/e2e/apimachinery/webhook.go:322 + Mar 7 04:01:05.507: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 + STEP: Registering the mutating webhook for custom resource e2e-test-webhook-9305-crds.webhook.example.com via the AdmissionRegistration API 03/07/23 04:01:06.018 + Mar 7 04:01:06.035: INFO: Waiting for webhook configuration to be ready... + STEP: Creating a custom resource while v1 is storage version 03/07/23 04:01:06.143 + STEP: Patching Custom Resource Definition to set v2 as storage 03/07/23 04:01:08.204 + STEP: Patching the custom resource while v2 is storage version 03/07/23 04:01:08.217 + [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/framework/framework.go:187 + Mar 7 04:01:08.785: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + STEP: Destroying namespace "webhook-1987" for this suite. 03/07/23 04:01:08.788 + STEP: Destroying namespace "webhook-1987-markers" for this suite. 
03/07/23 04:01:08.794 + [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/apimachinery/webhook.go:104 + << End Captured GinkgoWriter Output +------------------------------ +SSSSS +------------------------------ +[sig-network] DNS + should provide DNS for pods for Subdomain [Conformance] + test/e2e/network/dns.go:290 +[BeforeEach] [sig-network] DNS + test/e2e/framework/framework.go:186 +STEP: Creating a kubernetes client 03/07/23 04:01:08.88 +Mar 7 04:01:08.881: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 +STEP: Building a namespace api object, basename dns 03/07/23 04:01:08.881 +STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 04:01:08.903 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 04:01:08.912 +[It] should provide DNS for pods for Subdomain [Conformance] + test/e2e/network/dns.go:290 +STEP: Creating a test headless service 03/07/23 04:01:08.922 +STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-4800.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-querier-2.dns-test-service-2.dns-4800.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-4800.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-querier-2.dns-test-service-2.dns-4800.svc.cluster.local;check="$$(dig +notcp +noall +answer +search dns-test-service-2.dns-4800.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service-2.dns-4800.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service-2.dns-4800.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service-2.dns-4800.svc.cluster.local;sleep 1; done + 03/07/23 04:01:08.937 +STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-4800.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-querier-2.dns-test-service-2.dns-4800.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-4800.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-querier-2.dns-test-service-2.dns-4800.svc.cluster.local;check="$$(dig +notcp +noall +answer +search dns-test-service-2.dns-4800.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service-2.dns-4800.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service-2.dns-4800.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service-2.dns-4800.svc.cluster.local;sleep 1; done + 03/07/23 04:01:08.937 +STEP: creating a pod to probe DNS 03/07/23 04:01:08.937 +STEP: submitting the pod to kubernetes 03/07/23 04:01:08.937 +Mar 7 04:01:08.960: INFO: Waiting up to 15m0s for pod "dns-test-8f00872e-67ed-4141-a842-d1c8ca9aa627" in namespace "dns-4800" to be "running" +Mar 7 04:01:08.965: INFO: Pod "dns-test-8f00872e-67ed-4141-a842-d1c8ca9aa627": Phase="Pending", Reason="", readiness=false. Elapsed: 4.402355ms +Mar 7 04:01:10.969: INFO: Pod "dns-test-8f00872e-67ed-4141-a842-d1c8ca9aa627": Phase="Running", Reason="", readiness=true. 
Elapsed: 2.008541833s +Mar 7 04:01:10.969: INFO: Pod "dns-test-8f00872e-67ed-4141-a842-d1c8ca9aa627" satisfied condition "running" +STEP: retrieving the pod 03/07/23 04:01:10.969 +STEP: looking for the results for each expected name from probers 03/07/23 04:01:10.972 +Mar 7 04:01:10.975: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-4800.svc.cluster.local from pod dns-4800/dns-test-8f00872e-67ed-4141-a842-d1c8ca9aa627: the server could not find the requested resource (get pods dns-test-8f00872e-67ed-4141-a842-d1c8ca9aa627) +Mar 7 04:01:10.980: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-4800.svc.cluster.local from pod dns-4800/dns-test-8f00872e-67ed-4141-a842-d1c8ca9aa627: the server could not find the requested resource (get pods dns-test-8f00872e-67ed-4141-a842-d1c8ca9aa627) +Mar 7 04:01:10.989: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-4800.svc.cluster.local from pod dns-4800/dns-test-8f00872e-67ed-4141-a842-d1c8ca9aa627: the server could not find the requested resource (get pods dns-test-8f00872e-67ed-4141-a842-d1c8ca9aa627) +Mar 7 04:01:10.991: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-4800.svc.cluster.local from pod dns-4800/dns-test-8f00872e-67ed-4141-a842-d1c8ca9aa627: the server could not find the requested resource (get pods dns-test-8f00872e-67ed-4141-a842-d1c8ca9aa627) +Mar 7 04:01:10.996: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-4800.svc.cluster.local from pod dns-4800/dns-test-8f00872e-67ed-4141-a842-d1c8ca9aa627: the server could not find the requested resource (get pods dns-test-8f00872e-67ed-4141-a842-d1c8ca9aa627) +Mar 7 04:01:10.999: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-4800.svc.cluster.local from pod dns-4800/dns-test-8f00872e-67ed-4141-a842-d1c8ca9aa627: the server could not find the requested resource (get pods dns-test-8f00872e-67ed-4141-a842-d1c8ca9aa627) +Mar 7 04:01:11.002: INFO: Unable to read jessie_udp@dns-test-service-2.dns-4800.svc.cluster.local from pod dns-4800/dns-test-8f00872e-67ed-4141-a842-d1c8ca9aa627: the server could not find the requested resource (get pods dns-test-8f00872e-67ed-4141-a842-d1c8ca9aa627) +Mar 7 04:01:11.007: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-4800.svc.cluster.local from pod dns-4800/dns-test-8f00872e-67ed-4141-a842-d1c8ca9aa627: the server could not find the requested resource (get pods dns-test-8f00872e-67ed-4141-a842-d1c8ca9aa627) +Mar 7 04:01:11.007: INFO: Lookups using dns-4800/dns-test-8f00872e-67ed-4141-a842-d1c8ca9aa627 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-4800.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-4800.svc.cluster.local wheezy_udp@dns-test-service-2.dns-4800.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-4800.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-4800.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-4800.svc.cluster.local jessie_udp@dns-test-service-2.dns-4800.svc.cluster.local jessie_tcp@dns-test-service-2.dns-4800.svc.cluster.local] + +Mar 7 04:01:16.011: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-4800.svc.cluster.local from pod dns-4800/dns-test-8f00872e-67ed-4141-a842-d1c8ca9aa627: the server could not find the requested resource (get pods dns-test-8f00872e-67ed-4141-a842-d1c8ca9aa627) +Mar 7 04:01:16.014: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-4800.svc.cluster.local from pod 
dns-4800/dns-test-8f00872e-67ed-4141-a842-d1c8ca9aa627: the server could not find the requested resource (get pods dns-test-8f00872e-67ed-4141-a842-d1c8ca9aa627) +Mar 7 04:01:16.016: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-4800.svc.cluster.local from pod dns-4800/dns-test-8f00872e-67ed-4141-a842-d1c8ca9aa627: the server could not find the requested resource (get pods dns-test-8f00872e-67ed-4141-a842-d1c8ca9aa627) +Mar 7 04:01:16.019: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-4800.svc.cluster.local from pod dns-4800/dns-test-8f00872e-67ed-4141-a842-d1c8ca9aa627: the server could not find the requested resource (get pods dns-test-8f00872e-67ed-4141-a842-d1c8ca9aa627) +Mar 7 04:01:16.021: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-4800.svc.cluster.local from pod dns-4800/dns-test-8f00872e-67ed-4141-a842-d1c8ca9aa627: the server could not find the requested resource (get pods dns-test-8f00872e-67ed-4141-a842-d1c8ca9aa627) +Mar 7 04:01:16.023: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-4800.svc.cluster.local from pod dns-4800/dns-test-8f00872e-67ed-4141-a842-d1c8ca9aa627: the server could not find the requested resource (get pods dns-test-8f00872e-67ed-4141-a842-d1c8ca9aa627) +Mar 7 04:01:16.026: INFO: Unable to read jessie_udp@dns-test-service-2.dns-4800.svc.cluster.local from pod dns-4800/dns-test-8f00872e-67ed-4141-a842-d1c8ca9aa627: the server could not find the requested resource (get pods dns-test-8f00872e-67ed-4141-a842-d1c8ca9aa627) +Mar 7 04:01:16.029: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-4800.svc.cluster.local from pod dns-4800/dns-test-8f00872e-67ed-4141-a842-d1c8ca9aa627: the server could not find the requested resource (get pods dns-test-8f00872e-67ed-4141-a842-d1c8ca9aa627) +Mar 7 04:01:16.029: INFO: Lookups using dns-4800/dns-test-8f00872e-67ed-4141-a842-d1c8ca9aa627 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-4800.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-4800.svc.cluster.local wheezy_udp@dns-test-service-2.dns-4800.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-4800.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-4800.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-4800.svc.cluster.local jessie_udp@dns-test-service-2.dns-4800.svc.cluster.local jessie_tcp@dns-test-service-2.dns-4800.svc.cluster.local] + +Mar 7 04:01:21.012: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-4800.svc.cluster.local from pod dns-4800/dns-test-8f00872e-67ed-4141-a842-d1c8ca9aa627: the server could not find the requested resource (get pods dns-test-8f00872e-67ed-4141-a842-d1c8ca9aa627) +Mar 7 04:01:21.015: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-4800.svc.cluster.local from pod dns-4800/dns-test-8f00872e-67ed-4141-a842-d1c8ca9aa627: the server could not find the requested resource (get pods dns-test-8f00872e-67ed-4141-a842-d1c8ca9aa627) +Mar 7 04:01:21.017: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-4800.svc.cluster.local from pod dns-4800/dns-test-8f00872e-67ed-4141-a842-d1c8ca9aa627: the server could not find the requested resource (get pods dns-test-8f00872e-67ed-4141-a842-d1c8ca9aa627) +Mar 7 04:01:21.019: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-4800.svc.cluster.local from pod dns-4800/dns-test-8f00872e-67ed-4141-a842-d1c8ca9aa627: the server could not find the requested resource (get pods dns-test-8f00872e-67ed-4141-a842-d1c8ca9aa627) +Mar 7 
04:01:21.022: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-4800.svc.cluster.local from pod dns-4800/dns-test-8f00872e-67ed-4141-a842-d1c8ca9aa627: the server could not find the requested resource (get pods dns-test-8f00872e-67ed-4141-a842-d1c8ca9aa627) +Mar 7 04:01:21.024: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-4800.svc.cluster.local from pod dns-4800/dns-test-8f00872e-67ed-4141-a842-d1c8ca9aa627: the server could not find the requested resource (get pods dns-test-8f00872e-67ed-4141-a842-d1c8ca9aa627) +Mar 7 04:01:21.027: INFO: Unable to read jessie_udp@dns-test-service-2.dns-4800.svc.cluster.local from pod dns-4800/dns-test-8f00872e-67ed-4141-a842-d1c8ca9aa627: the server could not find the requested resource (get pods dns-test-8f00872e-67ed-4141-a842-d1c8ca9aa627) +Mar 7 04:01:21.029: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-4800.svc.cluster.local from pod dns-4800/dns-test-8f00872e-67ed-4141-a842-d1c8ca9aa627: the server could not find the requested resource (get pods dns-test-8f00872e-67ed-4141-a842-d1c8ca9aa627) +Mar 7 04:01:21.029: INFO: Lookups using dns-4800/dns-test-8f00872e-67ed-4141-a842-d1c8ca9aa627 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-4800.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-4800.svc.cluster.local wheezy_udp@dns-test-service-2.dns-4800.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-4800.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-4800.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-4800.svc.cluster.local jessie_udp@dns-test-service-2.dns-4800.svc.cluster.local jessie_tcp@dns-test-service-2.dns-4800.svc.cluster.local] + +Mar 7 04:01:26.013: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-4800.svc.cluster.local from pod dns-4800/dns-test-8f00872e-67ed-4141-a842-d1c8ca9aa627: the server could not find the requested resource (get pods dns-test-8f00872e-67ed-4141-a842-d1c8ca9aa627) +Mar 7 04:01:26.016: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-4800.svc.cluster.local from pod dns-4800/dns-test-8f00872e-67ed-4141-a842-d1c8ca9aa627: the server could not find the requested resource (get pods dns-test-8f00872e-67ed-4141-a842-d1c8ca9aa627) +Mar 7 04:01:26.018: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-4800.svc.cluster.local from pod dns-4800/dns-test-8f00872e-67ed-4141-a842-d1c8ca9aa627: the server could not find the requested resource (get pods dns-test-8f00872e-67ed-4141-a842-d1c8ca9aa627) +Mar 7 04:01:26.020: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-4800.svc.cluster.local from pod dns-4800/dns-test-8f00872e-67ed-4141-a842-d1c8ca9aa627: the server could not find the requested resource (get pods dns-test-8f00872e-67ed-4141-a842-d1c8ca9aa627) +Mar 7 04:01:26.022: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-4800.svc.cluster.local from pod dns-4800/dns-test-8f00872e-67ed-4141-a842-d1c8ca9aa627: the server could not find the requested resource (get pods dns-test-8f00872e-67ed-4141-a842-d1c8ca9aa627) +Mar 7 04:01:26.025: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-4800.svc.cluster.local from pod dns-4800/dns-test-8f00872e-67ed-4141-a842-d1c8ca9aa627: the server could not find the requested resource (get pods dns-test-8f00872e-67ed-4141-a842-d1c8ca9aa627) +Mar 7 04:01:26.027: INFO: Unable to read jessie_udp@dns-test-service-2.dns-4800.svc.cluster.local from pod 
dns-4800/dns-test-8f00872e-67ed-4141-a842-d1c8ca9aa627: the server could not find the requested resource (get pods dns-test-8f00872e-67ed-4141-a842-d1c8ca9aa627) +Mar 7 04:01:26.029: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-4800.svc.cluster.local from pod dns-4800/dns-test-8f00872e-67ed-4141-a842-d1c8ca9aa627: the server could not find the requested resource (get pods dns-test-8f00872e-67ed-4141-a842-d1c8ca9aa627) +Mar 7 04:01:26.029: INFO: Lookups using dns-4800/dns-test-8f00872e-67ed-4141-a842-d1c8ca9aa627 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-4800.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-4800.svc.cluster.local wheezy_udp@dns-test-service-2.dns-4800.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-4800.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-4800.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-4800.svc.cluster.local jessie_udp@dns-test-service-2.dns-4800.svc.cluster.local jessie_tcp@dns-test-service-2.dns-4800.svc.cluster.local] + +Mar 7 04:01:31.012: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-4800.svc.cluster.local from pod dns-4800/dns-test-8f00872e-67ed-4141-a842-d1c8ca9aa627: the server could not find the requested resource (get pods dns-test-8f00872e-67ed-4141-a842-d1c8ca9aa627) +Mar 7 04:01:31.015: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-4800.svc.cluster.local from pod dns-4800/dns-test-8f00872e-67ed-4141-a842-d1c8ca9aa627: the server could not find the requested resource (get pods dns-test-8f00872e-67ed-4141-a842-d1c8ca9aa627) +Mar 7 04:01:31.018: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-4800.svc.cluster.local from pod dns-4800/dns-test-8f00872e-67ed-4141-a842-d1c8ca9aa627: the server could not find the requested resource (get pods dns-test-8f00872e-67ed-4141-a842-d1c8ca9aa627) +Mar 7 04:01:31.020: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-4800.svc.cluster.local from pod dns-4800/dns-test-8f00872e-67ed-4141-a842-d1c8ca9aa627: the server could not find the requested resource (get pods dns-test-8f00872e-67ed-4141-a842-d1c8ca9aa627) +Mar 7 04:01:31.022: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-4800.svc.cluster.local from pod dns-4800/dns-test-8f00872e-67ed-4141-a842-d1c8ca9aa627: the server could not find the requested resource (get pods dns-test-8f00872e-67ed-4141-a842-d1c8ca9aa627) +Mar 7 04:01:31.031: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-4800.svc.cluster.local from pod dns-4800/dns-test-8f00872e-67ed-4141-a842-d1c8ca9aa627: the server could not find the requested resource (get pods dns-test-8f00872e-67ed-4141-a842-d1c8ca9aa627) +Mar 7 04:01:31.034: INFO: Unable to read jessie_udp@dns-test-service-2.dns-4800.svc.cluster.local from pod dns-4800/dns-test-8f00872e-67ed-4141-a842-d1c8ca9aa627: the server could not find the requested resource (get pods dns-test-8f00872e-67ed-4141-a842-d1c8ca9aa627) +Mar 7 04:01:31.036: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-4800.svc.cluster.local from pod dns-4800/dns-test-8f00872e-67ed-4141-a842-d1c8ca9aa627: the server could not find the requested resource (get pods dns-test-8f00872e-67ed-4141-a842-d1c8ca9aa627) +Mar 7 04:01:31.036: INFO: Lookups using dns-4800/dns-test-8f00872e-67ed-4141-a842-d1c8ca9aa627 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-4800.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-4800.svc.cluster.local 
wheezy_udp@dns-test-service-2.dns-4800.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-4800.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-4800.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-4800.svc.cluster.local jessie_udp@dns-test-service-2.dns-4800.svc.cluster.local jessie_tcp@dns-test-service-2.dns-4800.svc.cluster.local] + +Mar 7 04:01:36.010: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-4800.svc.cluster.local from pod dns-4800/dns-test-8f00872e-67ed-4141-a842-d1c8ca9aa627: the server could not find the requested resource (get pods dns-test-8f00872e-67ed-4141-a842-d1c8ca9aa627) +Mar 7 04:01:36.013: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-4800.svc.cluster.local from pod dns-4800/dns-test-8f00872e-67ed-4141-a842-d1c8ca9aa627: the server could not find the requested resource (get pods dns-test-8f00872e-67ed-4141-a842-d1c8ca9aa627) +Mar 7 04:01:36.015: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-4800.svc.cluster.local from pod dns-4800/dns-test-8f00872e-67ed-4141-a842-d1c8ca9aa627: the server could not find the requested resource (get pods dns-test-8f00872e-67ed-4141-a842-d1c8ca9aa627) +Mar 7 04:01:36.018: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-4800.svc.cluster.local from pod dns-4800/dns-test-8f00872e-67ed-4141-a842-d1c8ca9aa627: the server could not find the requested resource (get pods dns-test-8f00872e-67ed-4141-a842-d1c8ca9aa627) +Mar 7 04:01:36.020: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-4800.svc.cluster.local from pod dns-4800/dns-test-8f00872e-67ed-4141-a842-d1c8ca9aa627: the server could not find the requested resource (get pods dns-test-8f00872e-67ed-4141-a842-d1c8ca9aa627) +Mar 7 04:01:36.022: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-4800.svc.cluster.local from pod dns-4800/dns-test-8f00872e-67ed-4141-a842-d1c8ca9aa627: the server could not find the requested resource (get pods dns-test-8f00872e-67ed-4141-a842-d1c8ca9aa627) +Mar 7 04:01:36.024: INFO: Unable to read jessie_udp@dns-test-service-2.dns-4800.svc.cluster.local from pod dns-4800/dns-test-8f00872e-67ed-4141-a842-d1c8ca9aa627: the server could not find the requested resource (get pods dns-test-8f00872e-67ed-4141-a842-d1c8ca9aa627) +Mar 7 04:01:36.027: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-4800.svc.cluster.local from pod dns-4800/dns-test-8f00872e-67ed-4141-a842-d1c8ca9aa627: the server could not find the requested resource (get pods dns-test-8f00872e-67ed-4141-a842-d1c8ca9aa627) +Mar 7 04:01:36.027: INFO: Lookups using dns-4800/dns-test-8f00872e-67ed-4141-a842-d1c8ca9aa627 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-4800.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-4800.svc.cluster.local wheezy_udp@dns-test-service-2.dns-4800.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-4800.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-4800.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-4800.svc.cluster.local jessie_udp@dns-test-service-2.dns-4800.svc.cluster.local jessie_tcp@dns-test-service-2.dns-4800.svc.cluster.local] + +Mar 7 04:01:41.011: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-4800.svc.cluster.local from pod dns-4800/dns-test-8f00872e-67ed-4141-a842-d1c8ca9aa627: the server could not find the requested resource (get pods dns-test-8f00872e-67ed-4141-a842-d1c8ca9aa627) +Mar 7 04:01:41.013: INFO: Unable to read 
wheezy_tcp@dns-querier-2.dns-test-service-2.dns-4800.svc.cluster.local from pod dns-4800/dns-test-8f00872e-67ed-4141-a842-d1c8ca9aa627: the server could not find the requested resource (get pods dns-test-8f00872e-67ed-4141-a842-d1c8ca9aa627) +Mar 7 04:01:41.019: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-4800.svc.cluster.local from pod dns-4800/dns-test-8f00872e-67ed-4141-a842-d1c8ca9aa627: the server could not find the requested resource (get pods dns-test-8f00872e-67ed-4141-a842-d1c8ca9aa627) +Mar 7 04:01:41.028: INFO: Lookups using dns-4800/dns-test-8f00872e-67ed-4141-a842-d1c8ca9aa627 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-4800.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-4800.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-4800.svc.cluster.local] + +Mar 7 04:01:46.027: INFO: DNS probes using dns-4800/dns-test-8f00872e-67ed-4141-a842-d1c8ca9aa627 succeeded + +STEP: deleting the pod 03/07/23 04:01:46.027 +STEP: deleting the test headless service 03/07/23 04:01:46.037 +[AfterEach] [sig-network] DNS + test/e2e/framework/framework.go:187 +Mar 7 04:01:46.072: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "dns-4800" for this suite. 03/07/23 04:01:46.076 +{"msg":"PASSED [sig-network] DNS should provide DNS for pods for Subdomain [Conformance]","completed":346,"skipped":6397,"failed":0} +------------------------------ +• [SLOW TEST] [37.207 seconds] +[sig-network] DNS +test/e2e/network/common/framework.go:23 + should provide DNS for pods for Subdomain [Conformance] + test/e2e/network/dns.go:290 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-network] DNS + test/e2e/framework/framework.go:186 + STEP: Creating a kubernetes client 03/07/23 04:01:08.88 + Mar 7 04:01:08.881: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 + STEP: Building a namespace api object, basename dns 03/07/23 04:01:08.881 + STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 04:01:08.903 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 04:01:08.912 + [It] should provide DNS for pods for Subdomain [Conformance] + test/e2e/network/dns.go:290 + STEP: Creating a test headless service 03/07/23 04:01:08.922 + STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-4800.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-querier-2.dns-test-service-2.dns-4800.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-4800.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-querier-2.dns-test-service-2.dns-4800.svc.cluster.local;check="$$(dig +notcp +noall +answer +search dns-test-service-2.dns-4800.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service-2.dns-4800.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service-2.dns-4800.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service-2.dns-4800.svc.cluster.local;sleep 1; done + 03/07/23 04:01:08.937 + STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-4800.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-querier-2.dns-test-service-2.dns-4800.svc.cluster.local;check="$$(dig +tcp +noall +answer +search 
dns-querier-2.dns-test-service-2.dns-4800.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-querier-2.dns-test-service-2.dns-4800.svc.cluster.local;check="$$(dig +notcp +noall +answer +search dns-test-service-2.dns-4800.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service-2.dns-4800.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service-2.dns-4800.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service-2.dns-4800.svc.cluster.local;sleep 1; done + 03/07/23 04:01:08.937 + STEP: creating a pod to probe DNS 03/07/23 04:01:08.937 + STEP: submitting the pod to kubernetes 03/07/23 04:01:08.937 + Mar 7 04:01:08.960: INFO: Waiting up to 15m0s for pod "dns-test-8f00872e-67ed-4141-a842-d1c8ca9aa627" in namespace "dns-4800" to be "running" + Mar 7 04:01:08.965: INFO: Pod "dns-test-8f00872e-67ed-4141-a842-d1c8ca9aa627": Phase="Pending", Reason="", readiness=false. Elapsed: 4.402355ms + Mar 7 04:01:10.969: INFO: Pod "dns-test-8f00872e-67ed-4141-a842-d1c8ca9aa627": Phase="Running", Reason="", readiness=true. Elapsed: 2.008541833s + Mar 7 04:01:10.969: INFO: Pod "dns-test-8f00872e-67ed-4141-a842-d1c8ca9aa627" satisfied condition "running" + STEP: retrieving the pod 03/07/23 04:01:10.969 + STEP: looking for the results for each expected name from probers 03/07/23 04:01:10.972 + Mar 7 04:01:10.975: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-4800.svc.cluster.local from pod dns-4800/dns-test-8f00872e-67ed-4141-a842-d1c8ca9aa627: the server could not find the requested resource (get pods dns-test-8f00872e-67ed-4141-a842-d1c8ca9aa627) + Mar 7 04:01:10.980: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-4800.svc.cluster.local from pod dns-4800/dns-test-8f00872e-67ed-4141-a842-d1c8ca9aa627: the server could not find the requested resource (get pods dns-test-8f00872e-67ed-4141-a842-d1c8ca9aa627) + Mar 7 04:01:10.989: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-4800.svc.cluster.local from pod dns-4800/dns-test-8f00872e-67ed-4141-a842-d1c8ca9aa627: the server could not find the requested resource (get pods dns-test-8f00872e-67ed-4141-a842-d1c8ca9aa627) + Mar 7 04:01:10.991: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-4800.svc.cluster.local from pod dns-4800/dns-test-8f00872e-67ed-4141-a842-d1c8ca9aa627: the server could not find the requested resource (get pods dns-test-8f00872e-67ed-4141-a842-d1c8ca9aa627) + Mar 7 04:01:10.996: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-4800.svc.cluster.local from pod dns-4800/dns-test-8f00872e-67ed-4141-a842-d1c8ca9aa627: the server could not find the requested resource (get pods dns-test-8f00872e-67ed-4141-a842-d1c8ca9aa627) + Mar 7 04:01:10.999: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-4800.svc.cluster.local from pod dns-4800/dns-test-8f00872e-67ed-4141-a842-d1c8ca9aa627: the server could not find the requested resource (get pods dns-test-8f00872e-67ed-4141-a842-d1c8ca9aa627) + Mar 7 04:01:11.002: INFO: Unable to read jessie_udp@dns-test-service-2.dns-4800.svc.cluster.local from pod dns-4800/dns-test-8f00872e-67ed-4141-a842-d1c8ca9aa627: the server could not find the requested resource (get pods dns-test-8f00872e-67ed-4141-a842-d1c8ca9aa627) + Mar 7 04:01:11.007: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-4800.svc.cluster.local from pod dns-4800/dns-test-8f00872e-67ed-4141-a842-d1c8ca9aa627: the server could not find 
the requested resource (get pods dns-test-8f00872e-67ed-4141-a842-d1c8ca9aa627) + Mar 7 04:01:11.007: INFO: Lookups using dns-4800/dns-test-8f00872e-67ed-4141-a842-d1c8ca9aa627 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-4800.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-4800.svc.cluster.local wheezy_udp@dns-test-service-2.dns-4800.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-4800.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-4800.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-4800.svc.cluster.local jessie_udp@dns-test-service-2.dns-4800.svc.cluster.local jessie_tcp@dns-test-service-2.dns-4800.svc.cluster.local] + + Mar 7 04:01:16.011: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-4800.svc.cluster.local from pod dns-4800/dns-test-8f00872e-67ed-4141-a842-d1c8ca9aa627: the server could not find the requested resource (get pods dns-test-8f00872e-67ed-4141-a842-d1c8ca9aa627) + Mar 7 04:01:16.014: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-4800.svc.cluster.local from pod dns-4800/dns-test-8f00872e-67ed-4141-a842-d1c8ca9aa627: the server could not find the requested resource (get pods dns-test-8f00872e-67ed-4141-a842-d1c8ca9aa627) + Mar 7 04:01:16.016: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-4800.svc.cluster.local from pod dns-4800/dns-test-8f00872e-67ed-4141-a842-d1c8ca9aa627: the server could not find the requested resource (get pods dns-test-8f00872e-67ed-4141-a842-d1c8ca9aa627) + Mar 7 04:01:16.019: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-4800.svc.cluster.local from pod dns-4800/dns-test-8f00872e-67ed-4141-a842-d1c8ca9aa627: the server could not find the requested resource (get pods dns-test-8f00872e-67ed-4141-a842-d1c8ca9aa627) + Mar 7 04:01:16.021: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-4800.svc.cluster.local from pod dns-4800/dns-test-8f00872e-67ed-4141-a842-d1c8ca9aa627: the server could not find the requested resource (get pods dns-test-8f00872e-67ed-4141-a842-d1c8ca9aa627) + Mar 7 04:01:16.023: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-4800.svc.cluster.local from pod dns-4800/dns-test-8f00872e-67ed-4141-a842-d1c8ca9aa627: the server could not find the requested resource (get pods dns-test-8f00872e-67ed-4141-a842-d1c8ca9aa627) + Mar 7 04:01:16.026: INFO: Unable to read jessie_udp@dns-test-service-2.dns-4800.svc.cluster.local from pod dns-4800/dns-test-8f00872e-67ed-4141-a842-d1c8ca9aa627: the server could not find the requested resource (get pods dns-test-8f00872e-67ed-4141-a842-d1c8ca9aa627) + Mar 7 04:01:16.029: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-4800.svc.cluster.local from pod dns-4800/dns-test-8f00872e-67ed-4141-a842-d1c8ca9aa627: the server could not find the requested resource (get pods dns-test-8f00872e-67ed-4141-a842-d1c8ca9aa627) + Mar 7 04:01:16.029: INFO: Lookups using dns-4800/dns-test-8f00872e-67ed-4141-a842-d1c8ca9aa627 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-4800.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-4800.svc.cluster.local wheezy_udp@dns-test-service-2.dns-4800.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-4800.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-4800.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-4800.svc.cluster.local jessie_udp@dns-test-service-2.dns-4800.svc.cluster.local jessie_tcp@dns-test-service-2.dns-4800.svc.cluster.local] + 
+ Mar 7 04:01:21.012: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-4800.svc.cluster.local from pod dns-4800/dns-test-8f00872e-67ed-4141-a842-d1c8ca9aa627: the server could not find the requested resource (get pods dns-test-8f00872e-67ed-4141-a842-d1c8ca9aa627) + Mar 7 04:01:21.015: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-4800.svc.cluster.local from pod dns-4800/dns-test-8f00872e-67ed-4141-a842-d1c8ca9aa627: the server could not find the requested resource (get pods dns-test-8f00872e-67ed-4141-a842-d1c8ca9aa627) + Mar 7 04:01:21.017: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-4800.svc.cluster.local from pod dns-4800/dns-test-8f00872e-67ed-4141-a842-d1c8ca9aa627: the server could not find the requested resource (get pods dns-test-8f00872e-67ed-4141-a842-d1c8ca9aa627) + Mar 7 04:01:21.019: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-4800.svc.cluster.local from pod dns-4800/dns-test-8f00872e-67ed-4141-a842-d1c8ca9aa627: the server could not find the requested resource (get pods dns-test-8f00872e-67ed-4141-a842-d1c8ca9aa627) + Mar 7 04:01:21.022: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-4800.svc.cluster.local from pod dns-4800/dns-test-8f00872e-67ed-4141-a842-d1c8ca9aa627: the server could not find the requested resource (get pods dns-test-8f00872e-67ed-4141-a842-d1c8ca9aa627) + Mar 7 04:01:21.024: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-4800.svc.cluster.local from pod dns-4800/dns-test-8f00872e-67ed-4141-a842-d1c8ca9aa627: the server could not find the requested resource (get pods dns-test-8f00872e-67ed-4141-a842-d1c8ca9aa627) + Mar 7 04:01:21.027: INFO: Unable to read jessie_udp@dns-test-service-2.dns-4800.svc.cluster.local from pod dns-4800/dns-test-8f00872e-67ed-4141-a842-d1c8ca9aa627: the server could not find the requested resource (get pods dns-test-8f00872e-67ed-4141-a842-d1c8ca9aa627) + Mar 7 04:01:21.029: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-4800.svc.cluster.local from pod dns-4800/dns-test-8f00872e-67ed-4141-a842-d1c8ca9aa627: the server could not find the requested resource (get pods dns-test-8f00872e-67ed-4141-a842-d1c8ca9aa627) + Mar 7 04:01:21.029: INFO: Lookups using dns-4800/dns-test-8f00872e-67ed-4141-a842-d1c8ca9aa627 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-4800.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-4800.svc.cluster.local wheezy_udp@dns-test-service-2.dns-4800.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-4800.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-4800.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-4800.svc.cluster.local jessie_udp@dns-test-service-2.dns-4800.svc.cluster.local jessie_tcp@dns-test-service-2.dns-4800.svc.cluster.local] + + Mar 7 04:01:26.013: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-4800.svc.cluster.local from pod dns-4800/dns-test-8f00872e-67ed-4141-a842-d1c8ca9aa627: the server could not find the requested resource (get pods dns-test-8f00872e-67ed-4141-a842-d1c8ca9aa627) + Mar 7 04:01:26.016: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-4800.svc.cluster.local from pod dns-4800/dns-test-8f00872e-67ed-4141-a842-d1c8ca9aa627: the server could not find the requested resource (get pods dns-test-8f00872e-67ed-4141-a842-d1c8ca9aa627) + Mar 7 04:01:26.018: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-4800.svc.cluster.local from pod 
dns-4800/dns-test-8f00872e-67ed-4141-a842-d1c8ca9aa627: the server could not find the requested resource (get pods dns-test-8f00872e-67ed-4141-a842-d1c8ca9aa627) + Mar 7 04:01:26.020: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-4800.svc.cluster.local from pod dns-4800/dns-test-8f00872e-67ed-4141-a842-d1c8ca9aa627: the server could not find the requested resource (get pods dns-test-8f00872e-67ed-4141-a842-d1c8ca9aa627) + Mar 7 04:01:26.022: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-4800.svc.cluster.local from pod dns-4800/dns-test-8f00872e-67ed-4141-a842-d1c8ca9aa627: the server could not find the requested resource (get pods dns-test-8f00872e-67ed-4141-a842-d1c8ca9aa627) + Mar 7 04:01:26.025: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-4800.svc.cluster.local from pod dns-4800/dns-test-8f00872e-67ed-4141-a842-d1c8ca9aa627: the server could not find the requested resource (get pods dns-test-8f00872e-67ed-4141-a842-d1c8ca9aa627) + Mar 7 04:01:26.027: INFO: Unable to read jessie_udp@dns-test-service-2.dns-4800.svc.cluster.local from pod dns-4800/dns-test-8f00872e-67ed-4141-a842-d1c8ca9aa627: the server could not find the requested resource (get pods dns-test-8f00872e-67ed-4141-a842-d1c8ca9aa627) + Mar 7 04:01:26.029: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-4800.svc.cluster.local from pod dns-4800/dns-test-8f00872e-67ed-4141-a842-d1c8ca9aa627: the server could not find the requested resource (get pods dns-test-8f00872e-67ed-4141-a842-d1c8ca9aa627) + Mar 7 04:01:26.029: INFO: Lookups using dns-4800/dns-test-8f00872e-67ed-4141-a842-d1c8ca9aa627 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-4800.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-4800.svc.cluster.local wheezy_udp@dns-test-service-2.dns-4800.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-4800.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-4800.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-4800.svc.cluster.local jessie_udp@dns-test-service-2.dns-4800.svc.cluster.local jessie_tcp@dns-test-service-2.dns-4800.svc.cluster.local] + + Mar 7 04:01:31.012: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-4800.svc.cluster.local from pod dns-4800/dns-test-8f00872e-67ed-4141-a842-d1c8ca9aa627: the server could not find the requested resource (get pods dns-test-8f00872e-67ed-4141-a842-d1c8ca9aa627) + Mar 7 04:01:31.015: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-4800.svc.cluster.local from pod dns-4800/dns-test-8f00872e-67ed-4141-a842-d1c8ca9aa627: the server could not find the requested resource (get pods dns-test-8f00872e-67ed-4141-a842-d1c8ca9aa627) + Mar 7 04:01:31.018: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-4800.svc.cluster.local from pod dns-4800/dns-test-8f00872e-67ed-4141-a842-d1c8ca9aa627: the server could not find the requested resource (get pods dns-test-8f00872e-67ed-4141-a842-d1c8ca9aa627) + Mar 7 04:01:31.020: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-4800.svc.cluster.local from pod dns-4800/dns-test-8f00872e-67ed-4141-a842-d1c8ca9aa627: the server could not find the requested resource (get pods dns-test-8f00872e-67ed-4141-a842-d1c8ca9aa627) + Mar 7 04:01:31.022: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-4800.svc.cluster.local from pod dns-4800/dns-test-8f00872e-67ed-4141-a842-d1c8ca9aa627: the server could not find the requested resource (get pods 
dns-test-8f00872e-67ed-4141-a842-d1c8ca9aa627) + Mar 7 04:01:31.031: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-4800.svc.cluster.local from pod dns-4800/dns-test-8f00872e-67ed-4141-a842-d1c8ca9aa627: the server could not find the requested resource (get pods dns-test-8f00872e-67ed-4141-a842-d1c8ca9aa627) + Mar 7 04:01:31.034: INFO: Unable to read jessie_udp@dns-test-service-2.dns-4800.svc.cluster.local from pod dns-4800/dns-test-8f00872e-67ed-4141-a842-d1c8ca9aa627: the server could not find the requested resource (get pods dns-test-8f00872e-67ed-4141-a842-d1c8ca9aa627) + Mar 7 04:01:31.036: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-4800.svc.cluster.local from pod dns-4800/dns-test-8f00872e-67ed-4141-a842-d1c8ca9aa627: the server could not find the requested resource (get pods dns-test-8f00872e-67ed-4141-a842-d1c8ca9aa627) + Mar 7 04:01:31.036: INFO: Lookups using dns-4800/dns-test-8f00872e-67ed-4141-a842-d1c8ca9aa627 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-4800.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-4800.svc.cluster.local wheezy_udp@dns-test-service-2.dns-4800.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-4800.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-4800.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-4800.svc.cluster.local jessie_udp@dns-test-service-2.dns-4800.svc.cluster.local jessie_tcp@dns-test-service-2.dns-4800.svc.cluster.local] + + Mar 7 04:01:36.010: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-4800.svc.cluster.local from pod dns-4800/dns-test-8f00872e-67ed-4141-a842-d1c8ca9aa627: the server could not find the requested resource (get pods dns-test-8f00872e-67ed-4141-a842-d1c8ca9aa627) + Mar 7 04:01:36.013: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-4800.svc.cluster.local from pod dns-4800/dns-test-8f00872e-67ed-4141-a842-d1c8ca9aa627: the server could not find the requested resource (get pods dns-test-8f00872e-67ed-4141-a842-d1c8ca9aa627) + Mar 7 04:01:36.015: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-4800.svc.cluster.local from pod dns-4800/dns-test-8f00872e-67ed-4141-a842-d1c8ca9aa627: the server could not find the requested resource (get pods dns-test-8f00872e-67ed-4141-a842-d1c8ca9aa627) + Mar 7 04:01:36.018: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-4800.svc.cluster.local from pod dns-4800/dns-test-8f00872e-67ed-4141-a842-d1c8ca9aa627: the server could not find the requested resource (get pods dns-test-8f00872e-67ed-4141-a842-d1c8ca9aa627) + Mar 7 04:01:36.020: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-4800.svc.cluster.local from pod dns-4800/dns-test-8f00872e-67ed-4141-a842-d1c8ca9aa627: the server could not find the requested resource (get pods dns-test-8f00872e-67ed-4141-a842-d1c8ca9aa627) + Mar 7 04:01:36.022: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-4800.svc.cluster.local from pod dns-4800/dns-test-8f00872e-67ed-4141-a842-d1c8ca9aa627: the server could not find the requested resource (get pods dns-test-8f00872e-67ed-4141-a842-d1c8ca9aa627) + Mar 7 04:01:36.024: INFO: Unable to read jessie_udp@dns-test-service-2.dns-4800.svc.cluster.local from pod dns-4800/dns-test-8f00872e-67ed-4141-a842-d1c8ca9aa627: the server could not find the requested resource (get pods dns-test-8f00872e-67ed-4141-a842-d1c8ca9aa627) + Mar 7 04:01:36.027: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-4800.svc.cluster.local 
from pod dns-4800/dns-test-8f00872e-67ed-4141-a842-d1c8ca9aa627: the server could not find the requested resource (get pods dns-test-8f00872e-67ed-4141-a842-d1c8ca9aa627) + Mar 7 04:01:36.027: INFO: Lookups using dns-4800/dns-test-8f00872e-67ed-4141-a842-d1c8ca9aa627 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-4800.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-4800.svc.cluster.local wheezy_udp@dns-test-service-2.dns-4800.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-4800.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-4800.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-4800.svc.cluster.local jessie_udp@dns-test-service-2.dns-4800.svc.cluster.local jessie_tcp@dns-test-service-2.dns-4800.svc.cluster.local] + + Mar 7 04:01:41.011: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-4800.svc.cluster.local from pod dns-4800/dns-test-8f00872e-67ed-4141-a842-d1c8ca9aa627: the server could not find the requested resource (get pods dns-test-8f00872e-67ed-4141-a842-d1c8ca9aa627) + Mar 7 04:01:41.013: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-4800.svc.cluster.local from pod dns-4800/dns-test-8f00872e-67ed-4141-a842-d1c8ca9aa627: the server could not find the requested resource (get pods dns-test-8f00872e-67ed-4141-a842-d1c8ca9aa627) + Mar 7 04:01:41.019: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-4800.svc.cluster.local from pod dns-4800/dns-test-8f00872e-67ed-4141-a842-d1c8ca9aa627: the server could not find the requested resource (get pods dns-test-8f00872e-67ed-4141-a842-d1c8ca9aa627) + Mar 7 04:01:41.028: INFO: Lookups using dns-4800/dns-test-8f00872e-67ed-4141-a842-d1c8ca9aa627 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-4800.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-4800.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-4800.svc.cluster.local] + + Mar 7 04:01:46.027: INFO: DNS probes using dns-4800/dns-test-8f00872e-67ed-4141-a842-d1c8ca9aa627 succeeded + + STEP: deleting the pod 03/07/23 04:01:46.027 + STEP: deleting the test headless service 03/07/23 04:01:46.037 + [AfterEach] [sig-network] DNS + test/e2e/framework/framework.go:187 + Mar 7 04:01:46.072: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + STEP: Destroying namespace "dns-4800" for this suite. 03/07/23 04:01:46.076 + << End Captured GinkgoWriter Output +------------------------------ +SSSS +------------------------------ +[sig-api-machinery] ResourceQuota + should create a ResourceQuota and capture the life of a configMap. [Conformance] + test/e2e/apimachinery/resource_quota.go:316 +[BeforeEach] [sig-api-machinery] ResourceQuota + test/e2e/framework/framework.go:186 +STEP: Creating a kubernetes client 03/07/23 04:01:46.088 +Mar 7 04:01:46.088: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 +STEP: Building a namespace api object, basename resourcequota 03/07/23 04:01:46.089 +STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 04:01:46.101 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 04:01:46.103 +[It] should create a ResourceQuota and capture the life of a configMap. 
[Conformance] + test/e2e/apimachinery/resource_quota.go:316 +STEP: Counting existing ResourceQuota 03/07/23 04:02:03.113 +STEP: Creating a ResourceQuota 03/07/23 04:02:08.116 +STEP: Ensuring resource quota status is calculated 03/07/23 04:02:08.142 +STEP: Creating a ConfigMap 03/07/23 04:02:10.146 +STEP: Ensuring resource quota status captures configMap creation 03/07/23 04:02:10.155 +STEP: Deleting a ConfigMap 03/07/23 04:02:12.159 +STEP: Ensuring resource quota status released usage 03/07/23 04:02:12.186 +[AfterEach] [sig-api-machinery] ResourceQuota + test/e2e/framework/framework.go:187 +Mar 7 04:02:14.190: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "resourcequota-4458" for this suite. 03/07/23 04:02:14.193 +{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a configMap. [Conformance]","completed":347,"skipped":6401,"failed":0} +------------------------------ +• [SLOW TEST] [28.139 seconds] +[sig-api-machinery] ResourceQuota +test/e2e/apimachinery/framework.go:23 + should create a ResourceQuota and capture the life of a configMap. [Conformance] + test/e2e/apimachinery/resource_quota.go:316 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-api-machinery] ResourceQuota + test/e2e/framework/framework.go:186 + STEP: Creating a kubernetes client 03/07/23 04:01:46.088 + Mar 7 04:01:46.088: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 + STEP: Building a namespace api object, basename resourcequota 03/07/23 04:01:46.089 + STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 04:01:46.101 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 04:01:46.103 + [It] should create a ResourceQuota and capture the life of a configMap. [Conformance] + test/e2e/apimachinery/resource_quota.go:316 + STEP: Counting existing ResourceQuota 03/07/23 04:02:03.113 + STEP: Creating a ResourceQuota 03/07/23 04:02:08.116 + STEP: Ensuring resource quota status is calculated 03/07/23 04:02:08.142 + STEP: Creating a ConfigMap 03/07/23 04:02:10.146 + STEP: Ensuring resource quota status captures configMap creation 03/07/23 04:02:10.155 + STEP: Deleting a ConfigMap 03/07/23 04:02:12.159 + STEP: Ensuring resource quota status released usage 03/07/23 04:02:12.186 + [AfterEach] [sig-api-machinery] ResourceQuota + test/e2e/framework/framework.go:187 + Mar 7 04:02:14.190: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + STEP: Destroying namespace "resourcequota-4458" for this suite. 
03/07/23 04:02:14.193 + << End Captured GinkgoWriter Output +------------------------------ +SS +------------------------------ +[sig-apps] DisruptionController + should create a PodDisruptionBudget [Conformance] + test/e2e/apps/disruption.go:107 +[BeforeEach] [sig-apps] DisruptionController + test/e2e/framework/framework.go:186 +STEP: Creating a kubernetes client 03/07/23 04:02:14.227 +Mar 7 04:02:14.227: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 +STEP: Building a namespace api object, basename disruption 03/07/23 04:02:14.229 +STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 04:02:14.241 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 04:02:14.243 +[BeforeEach] [sig-apps] DisruptionController + test/e2e/apps/disruption.go:71 +[It] should create a PodDisruptionBudget [Conformance] + test/e2e/apps/disruption.go:107 +STEP: creating the pdb 03/07/23 04:02:14.247 +STEP: Waiting for the pdb to be processed 03/07/23 04:02:14.252 +STEP: updating the pdb 03/07/23 04:02:16.284 +STEP: Waiting for the pdb to be processed 03/07/23 04:02:16.29 +STEP: patching the pdb 03/07/23 04:02:18.295 +STEP: Waiting for the pdb to be processed 03/07/23 04:02:18.302 +STEP: Waiting for the pdb to be deleted 03/07/23 04:02:20.311 +[AfterEach] [sig-apps] DisruptionController + test/e2e/framework/framework.go:187 +Mar 7 04:02:20.314: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "disruption-9685" for this suite. 03/07/23 04:02:20.317 +{"msg":"PASSED [sig-apps] DisruptionController should create a PodDisruptionBudget [Conformance]","completed":348,"skipped":6403,"failed":0} +------------------------------ +• [SLOW TEST] [6.094 seconds] +[sig-apps] DisruptionController +test/e2e/apps/framework.go:23 + should create a PodDisruptionBudget [Conformance] + test/e2e/apps/disruption.go:107 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-apps] DisruptionController + test/e2e/framework/framework.go:186 + STEP: Creating a kubernetes client 03/07/23 04:02:14.227 + Mar 7 04:02:14.227: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 + STEP: Building a namespace api object, basename disruption 03/07/23 04:02:14.229 + STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 04:02:14.241 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 04:02:14.243 + [BeforeEach] [sig-apps] DisruptionController + test/e2e/apps/disruption.go:71 + [It] should create a PodDisruptionBudget [Conformance] + test/e2e/apps/disruption.go:107 + STEP: creating the pdb 03/07/23 04:02:14.247 + STEP: Waiting for the pdb to be processed 03/07/23 04:02:14.252 + STEP: updating the pdb 03/07/23 04:02:16.284 + STEP: Waiting for the pdb to be processed 03/07/23 04:02:16.29 + STEP: patching the pdb 03/07/23 04:02:18.295 + STEP: Waiting for the pdb to be processed 03/07/23 04:02:18.302 + STEP: Waiting for the pdb to be deleted 03/07/23 04:02:20.311 + [AfterEach] [sig-apps] DisruptionController + test/e2e/framework/framework.go:187 + Mar 7 04:02:20.314: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + STEP: Destroying namespace "disruption-9685" for this suite. 
03/07/23 04:02:20.317 + << End Captured GinkgoWriter Output +------------------------------ +SSSS +------------------------------ +[sig-auth] Certificates API [Privileged:ClusterAdmin] + should support CSR API operations [Conformance] + test/e2e/auth/certificates.go:200 +[BeforeEach] [sig-auth] Certificates API [Privileged:ClusterAdmin] + test/e2e/framework/framework.go:186 +STEP: Creating a kubernetes client 03/07/23 04:02:20.322 +Mar 7 04:02:20.322: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 +STEP: Building a namespace api object, basename certificates 03/07/23 04:02:20.323 +STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 04:02:20.336 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 04:02:20.338 +[It] should support CSR API operations [Conformance] + test/e2e/auth/certificates.go:200 +STEP: getting /apis 03/07/23 04:02:21.037 +STEP: getting /apis/certificates.k8s.io 03/07/23 04:02:21.039 +STEP: getting /apis/certificates.k8s.io/v1 03/07/23 04:02:21.039 +STEP: creating 03/07/23 04:02:21.04 +STEP: getting 03/07/23 04:02:21.056 +STEP: listing 03/07/23 04:02:21.059 +STEP: watching 03/07/23 04:02:21.061 +Mar 7 04:02:21.061: INFO: starting watch +STEP: patching 03/07/23 04:02:21.062 +STEP: updating 03/07/23 04:02:21.066 +Mar 7 04:02:21.070: INFO: waiting for watch events with expected annotations +Mar 7 04:02:21.070: INFO: saw patched and updated annotations +STEP: getting /approval 03/07/23 04:02:21.07 +STEP: patching /approval 03/07/23 04:02:21.072 +STEP: updating /approval 03/07/23 04:02:21.076 +STEP: getting /status 03/07/23 04:02:21.08 +STEP: patching /status 03/07/23 04:02:21.082 +STEP: updating /status 03/07/23 04:02:21.087 +STEP: deleting 03/07/23 04:02:21.099 +STEP: deleting a collection 03/07/23 04:02:21.107 +[AfterEach] [sig-auth] Certificates API [Privileged:ClusterAdmin] + test/e2e/framework/framework.go:187 +Mar 7 04:02:21.117: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "certificates-6681" for this suite. 
03/07/23 04:02:21.12 +{"msg":"PASSED [sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","completed":349,"skipped":6407,"failed":0} +------------------------------ +• [0.802 seconds] +[sig-auth] Certificates API [Privileged:ClusterAdmin] +test/e2e/auth/framework.go:23 + should support CSR API operations [Conformance] + test/e2e/auth/certificates.go:200 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-auth] Certificates API [Privileged:ClusterAdmin] + test/e2e/framework/framework.go:186 + STEP: Creating a kubernetes client 03/07/23 04:02:20.322 + Mar 7 04:02:20.322: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 + STEP: Building a namespace api object, basename certificates 03/07/23 04:02:20.323 + STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 04:02:20.336 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 04:02:20.338 + [It] should support CSR API operations [Conformance] + test/e2e/auth/certificates.go:200 + STEP: getting /apis 03/07/23 04:02:21.037 + STEP: getting /apis/certificates.k8s.io 03/07/23 04:02:21.039 + STEP: getting /apis/certificates.k8s.io/v1 03/07/23 04:02:21.039 + STEP: creating 03/07/23 04:02:21.04 + STEP: getting 03/07/23 04:02:21.056 + STEP: listing 03/07/23 04:02:21.059 + STEP: watching 03/07/23 04:02:21.061 + Mar 7 04:02:21.061: INFO: starting watch + STEP: patching 03/07/23 04:02:21.062 + STEP: updating 03/07/23 04:02:21.066 + Mar 7 04:02:21.070: INFO: waiting for watch events with expected annotations + Mar 7 04:02:21.070: INFO: saw patched and updated annotations + STEP: getting /approval 03/07/23 04:02:21.07 + STEP: patching /approval 03/07/23 04:02:21.072 + STEP: updating /approval 03/07/23 04:02:21.076 + STEP: getting /status 03/07/23 04:02:21.08 + STEP: patching /status 03/07/23 04:02:21.082 + STEP: updating /status 03/07/23 04:02:21.087 + STEP: deleting 03/07/23 04:02:21.099 + STEP: deleting a collection 03/07/23 04:02:21.107 + [AfterEach] [sig-auth] Certificates API [Privileged:ClusterAdmin] + test/e2e/framework/framework.go:187 + Mar 7 04:02:21.117: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + STEP: Destroying namespace "certificates-6681" for this suite. 
03/07/23 04:02:21.12 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + updates the published spec when one version gets renamed [Conformance] + test/e2e/apimachinery/crd_publish_openapi.go:390 +[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + test/e2e/framework/framework.go:186 +STEP: Creating a kubernetes client 03/07/23 04:02:21.125 +Mar 7 04:02:21.125: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 +STEP: Building a namespace api object, basename crd-publish-openapi 03/07/23 04:02:21.126 +STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 04:02:21.136 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 04:02:21.138 +[It] updates the published spec when one version gets renamed [Conformance] + test/e2e/apimachinery/crd_publish_openapi.go:390 +STEP: set up a multi version CRD 03/07/23 04:02:21.14 +Mar 7 04:02:21.140: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 +STEP: rename a version 03/07/23 04:02:32.312 +STEP: check the new version name is served 03/07/23 04:02:32.324 +STEP: check the old version name is removed 03/07/23 04:02:36.907 +STEP: check the other version is not changed 03/07/23 04:02:38.786 +[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + test/e2e/framework/framework.go:187 +Mar 7 04:02:47.621: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "crd-publish-openapi-2012" for this suite. 03/07/23 04:02:47.628 +{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] updates the published spec when one version gets renamed [Conformance]","completed":350,"skipped":6436,"failed":0} +------------------------------ +• [SLOW TEST] [26.509 seconds] +[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] +test/e2e/apimachinery/framework.go:23 + updates the published spec when one version gets renamed [Conformance] + test/e2e/apimachinery/crd_publish_openapi.go:390 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + test/e2e/framework/framework.go:186 + STEP: Creating a kubernetes client 03/07/23 04:02:21.125 + Mar 7 04:02:21.125: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 + STEP: Building a namespace api object, basename crd-publish-openapi 03/07/23 04:02:21.126 + STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 04:02:21.136 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 04:02:21.138 + [It] updates the published spec when one version gets renamed [Conformance] + test/e2e/apimachinery/crd_publish_openapi.go:390 + STEP: set up a multi version CRD 03/07/23 04:02:21.14 + Mar 7 04:02:21.140: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 + STEP: rename a version 03/07/23 04:02:32.312 + STEP: check the new version name is served 03/07/23 04:02:32.324 + STEP: check the old version name is removed 03/07/23 04:02:36.907 + STEP: check the other version is not changed 03/07/23 04:02:38.786 + [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + test/e2e/framework/framework.go:187 + Mar 7 04:02:47.621: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + STEP: Destroying namespace "crd-publish-openapi-2012" for 
this suite. 03/07/23 04:02:47.628 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-apps] ReplicationController + should surface a failure condition on a common issue like exceeded quota [Conformance] + test/e2e/apps/rc.go:82 +[BeforeEach] [sig-apps] ReplicationController + test/e2e/framework/framework.go:186 +STEP: Creating a kubernetes client 03/07/23 04:02:47.635 +Mar 7 04:02:47.635: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 +STEP: Building a namespace api object, basename replication-controller 03/07/23 04:02:47.635 +STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 04:02:47.646 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 04:02:47.648 +[BeforeEach] [sig-apps] ReplicationController + test/e2e/apps/rc.go:56 +[It] should surface a failure condition on a common issue like exceeded quota [Conformance] + test/e2e/apps/rc.go:82 +Mar 7 04:02:47.650: INFO: Creating quota "condition-test" that allows only two pods to run in the current namespace +STEP: Creating rc "condition-test" that asks for more than the allowed pod quota 03/07/23 04:02:48.659 +STEP: Checking rc "condition-test" has the desired failure condition set 03/07/23 04:02:48.663 +STEP: Scaling down rc "condition-test" to satisfy pod quota 03/07/23 04:02:49.669 +Mar 7 04:02:49.675: INFO: Updating replication controller "condition-test" +STEP: Checking rc "condition-test" has no failure condition set 03/07/23 04:02:49.675 +[AfterEach] [sig-apps] ReplicationController + test/e2e/framework/framework.go:187 +Mar 7 04:02:50.683: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "replication-controller-1305" for this suite. 
03/07/23 04:02:50.689 +{"msg":"PASSED [sig-apps] ReplicationController should surface a failure condition on a common issue like exceeded quota [Conformance]","completed":351,"skipped":6464,"failed":0} +------------------------------ +• [3.065 seconds] +[sig-apps] ReplicationController +test/e2e/apps/framework.go:23 + should surface a failure condition on a common issue like exceeded quota [Conformance] + test/e2e/apps/rc.go:82 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-apps] ReplicationController + test/e2e/framework/framework.go:186 + STEP: Creating a kubernetes client 03/07/23 04:02:47.635 + Mar 7 04:02:47.635: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 + STEP: Building a namespace api object, basename replication-controller 03/07/23 04:02:47.635 + STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 04:02:47.646 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 04:02:47.648 + [BeforeEach] [sig-apps] ReplicationController + test/e2e/apps/rc.go:56 + [It] should surface a failure condition on a common issue like exceeded quota [Conformance] + test/e2e/apps/rc.go:82 + Mar 7 04:02:47.650: INFO: Creating quota "condition-test" that allows only two pods to run in the current namespace + STEP: Creating rc "condition-test" that asks for more than the allowed pod quota 03/07/23 04:02:48.659 + STEP: Checking rc "condition-test" has the desired failure condition set 03/07/23 04:02:48.663 + STEP: Scaling down rc "condition-test" to satisfy pod quota 03/07/23 04:02:49.669 + Mar 7 04:02:49.675: INFO: Updating replication controller "condition-test" + STEP: Checking rc "condition-test" has no failure condition set 03/07/23 04:02:49.675 + [AfterEach] [sig-apps] ReplicationController + test/e2e/framework/framework.go:187 + Mar 7 04:02:50.683: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + STEP: Destroying namespace "replication-controller-1305" for this suite. 03/07/23 04:02:50.689 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-node] Pods + should support remote command execution over websockets [NodeConformance] [Conformance] + test/e2e/common/node/pods.go:535 +[BeforeEach] [sig-node] Pods + test/e2e/framework/framework.go:186 +STEP: Creating a kubernetes client 03/07/23 04:02:50.7 +Mar 7 04:02:50.700: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 +STEP: Building a namespace api object, basename pods 03/07/23 04:02:50.701 +STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 04:02:50.714 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 04:02:50.717 +[BeforeEach] [sig-node] Pods + test/e2e/common/node/pods.go:193 +[It] should support remote command execution over websockets [NodeConformance] [Conformance] + test/e2e/common/node/pods.go:535 +Mar 7 04:02:50.718: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 +STEP: creating the pod 03/07/23 04:02:50.719 +STEP: submitting the pod to kubernetes 03/07/23 04:02:50.719 +Mar 7 04:02:50.725: INFO: Waiting up to 5m0s for pod "pod-exec-websocket-076bacdd-8355-444d-8a59-394177bbdddf" in namespace "pods-5473" to be "running and ready" +Mar 7 04:02:50.727: INFO: Pod "pod-exec-websocket-076bacdd-8355-444d-8a59-394177bbdddf": Phase="Pending", Reason="", readiness=false. 
Elapsed: 1.899778ms +Mar 7 04:02:50.727: INFO: The phase of Pod pod-exec-websocket-076bacdd-8355-444d-8a59-394177bbdddf is Pending, waiting for it to be Running (with Ready = true) +Mar 7 04:02:52.731: INFO: Pod "pod-exec-websocket-076bacdd-8355-444d-8a59-394177bbdddf": Phase="Running", Reason="", readiness=true. Elapsed: 2.005225943s +Mar 7 04:02:52.731: INFO: The phase of Pod pod-exec-websocket-076bacdd-8355-444d-8a59-394177bbdddf is Running (Ready = true) +Mar 7 04:02:52.731: INFO: Pod "pod-exec-websocket-076bacdd-8355-444d-8a59-394177bbdddf" satisfied condition "running and ready" +[AfterEach] [sig-node] Pods + test/e2e/framework/framework.go:187 +Mar 7 04:02:52.838: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "pods-5473" for this suite. 03/07/23 04:02:52.841 +{"msg":"PASSED [sig-node] Pods should support remote command execution over websockets [NodeConformance] [Conformance]","completed":352,"skipped":6490,"failed":0} +------------------------------ +• [2.146 seconds] +[sig-node] Pods +test/e2e/common/node/framework.go:23 + should support remote command execution over websockets [NodeConformance] [Conformance] + test/e2e/common/node/pods.go:535 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-node] Pods + test/e2e/framework/framework.go:186 + STEP: Creating a kubernetes client 03/07/23 04:02:50.7 + Mar 7 04:02:50.700: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 + STEP: Building a namespace api object, basename pods 03/07/23 04:02:50.701 + STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 04:02:50.714 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 04:02:50.717 + [BeforeEach] [sig-node] Pods + test/e2e/common/node/pods.go:193 + [It] should support remote command execution over websockets [NodeConformance] [Conformance] + test/e2e/common/node/pods.go:535 + Mar 7 04:02:50.718: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 + STEP: creating the pod 03/07/23 04:02:50.719 + STEP: submitting the pod to kubernetes 03/07/23 04:02:50.719 + Mar 7 04:02:50.725: INFO: Waiting up to 5m0s for pod "pod-exec-websocket-076bacdd-8355-444d-8a59-394177bbdddf" in namespace "pods-5473" to be "running and ready" + Mar 7 04:02:50.727: INFO: Pod "pod-exec-websocket-076bacdd-8355-444d-8a59-394177bbdddf": Phase="Pending", Reason="", readiness=false. Elapsed: 1.899778ms + Mar 7 04:02:50.727: INFO: The phase of Pod pod-exec-websocket-076bacdd-8355-444d-8a59-394177bbdddf is Pending, waiting for it to be Running (with Ready = true) + Mar 7 04:02:52.731: INFO: Pod "pod-exec-websocket-076bacdd-8355-444d-8a59-394177bbdddf": Phase="Running", Reason="", readiness=true. Elapsed: 2.005225943s + Mar 7 04:02:52.731: INFO: The phase of Pod pod-exec-websocket-076bacdd-8355-444d-8a59-394177bbdddf is Running (Ready = true) + Mar 7 04:02:52.731: INFO: Pod "pod-exec-websocket-076bacdd-8355-444d-8a59-394177bbdddf" satisfied condition "running and ready" + [AfterEach] [sig-node] Pods + test/e2e/framework/framework.go:187 + Mar 7 04:02:52.838: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + STEP: Destroying namespace "pods-5473" for this suite. 
03/07/23 04:02:52.841 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-node] Probing container + should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] + test/e2e/common/node/container_probe.go:148 +[BeforeEach] [sig-node] Probing container + test/e2e/framework/framework.go:186 +STEP: Creating a kubernetes client 03/07/23 04:02:52.848 +Mar 7 04:02:52.848: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 +STEP: Building a namespace api object, basename container-probe 03/07/23 04:02:52.849 +STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 04:02:52.861 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 04:02:52.864 +[BeforeEach] [sig-node] Probing container + test/e2e/common/node/container_probe.go:59 +[It] should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] + test/e2e/common/node/container_probe.go:148 +STEP: Creating pod busybox-1fd9ddce-4083-40c6-b088-8196c9f74d5b in namespace container-probe-772 03/07/23 04:02:52.866 +Mar 7 04:02:52.873: INFO: Waiting up to 5m0s for pod "busybox-1fd9ddce-4083-40c6-b088-8196c9f74d5b" in namespace "container-probe-772" to be "not pending" +Mar 7 04:02:52.878: INFO: Pod "busybox-1fd9ddce-4083-40c6-b088-8196c9f74d5b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.965998ms +Mar 7 04:02:54.881: INFO: Pod "busybox-1fd9ddce-4083-40c6-b088-8196c9f74d5b": Phase="Running", Reason="", readiness=true. Elapsed: 2.00800292s +Mar 7 04:02:54.881: INFO: Pod "busybox-1fd9ddce-4083-40c6-b088-8196c9f74d5b" satisfied condition "not pending" +Mar 7 04:02:54.881: INFO: Started pod busybox-1fd9ddce-4083-40c6-b088-8196c9f74d5b in namespace container-probe-772 +STEP: checking the pod's current state and verifying that restartCount is present 03/07/23 04:02:54.881 +Mar 7 04:02:54.883: INFO: Initial restart count of pod busybox-1fd9ddce-4083-40c6-b088-8196c9f74d5b is 0 +STEP: deleting the pod 03/07/23 04:06:55.366 +[AfterEach] [sig-node] Probing container + test/e2e/framework/framework.go:187 +Mar 7 04:06:55.383: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "container-probe-772" for this suite. 
03/07/23 04:06:55.389 +{"msg":"PASSED [sig-node] Probing container should *not* be restarted with a exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]","completed":353,"skipped":6521,"failed":0} +------------------------------ +• [SLOW TEST] [242.558 seconds] +[sig-node] Probing container +test/e2e/common/node/framework.go:23 + should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] + test/e2e/common/node/container_probe.go:148 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-node] Probing container + test/e2e/framework/framework.go:186 + STEP: Creating a kubernetes client 03/07/23 04:02:52.848 + Mar 7 04:02:52.848: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 + STEP: Building a namespace api object, basename container-probe 03/07/23 04:02:52.849 + STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 04:02:52.861 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 04:02:52.864 + [BeforeEach] [sig-node] Probing container + test/e2e/common/node/container_probe.go:59 + [It] should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] + test/e2e/common/node/container_probe.go:148 + STEP: Creating pod busybox-1fd9ddce-4083-40c6-b088-8196c9f74d5b in namespace container-probe-772 03/07/23 04:02:52.866 + Mar 7 04:02:52.873: INFO: Waiting up to 5m0s for pod "busybox-1fd9ddce-4083-40c6-b088-8196c9f74d5b" in namespace "container-probe-772" to be "not pending" + Mar 7 04:02:52.878: INFO: Pod "busybox-1fd9ddce-4083-40c6-b088-8196c9f74d5b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.965998ms + Mar 7 04:02:54.881: INFO: Pod "busybox-1fd9ddce-4083-40c6-b088-8196c9f74d5b": Phase="Running", Reason="", readiness=true. Elapsed: 2.00800292s + Mar 7 04:02:54.881: INFO: Pod "busybox-1fd9ddce-4083-40c6-b088-8196c9f74d5b" satisfied condition "not pending" + Mar 7 04:02:54.881: INFO: Started pod busybox-1fd9ddce-4083-40c6-b088-8196c9f74d5b in namespace container-probe-772 + STEP: checking the pod's current state and verifying that restartCount is present 03/07/23 04:02:54.881 + Mar 7 04:02:54.883: INFO: Initial restart count of pod busybox-1fd9ddce-4083-40c6-b088-8196c9f74d5b is 0 + STEP: deleting the pod 03/07/23 04:06:55.366 + [AfterEach] [sig-node] Probing container + test/e2e/framework/framework.go:187 + Mar 7 04:06:55.383: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + STEP: Destroying namespace "container-probe-772" for this suite. 
03/07/23 04:06:55.389 + << End Captured GinkgoWriter Output +------------------------------ +SSS +------------------------------ +[sig-apps] Deployment + deployment should support rollover [Conformance] + test/e2e/apps/deployment.go:132 +[BeforeEach] [sig-apps] Deployment + test/e2e/framework/framework.go:186 +STEP: Creating a kubernetes client 03/07/23 04:06:55.407 +Mar 7 04:06:55.407: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 +STEP: Building a namespace api object, basename deployment 03/07/23 04:06:55.408 +STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 04:06:55.434 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 04:06:55.436 +[BeforeEach] [sig-apps] Deployment + test/e2e/apps/deployment.go:91 +[It] deployment should support rollover [Conformance] + test/e2e/apps/deployment.go:132 +Mar 7 04:06:55.448: INFO: Pod name rollover-pod: Found 0 pods out of 1 +Mar 7 04:07:00.452: INFO: Pod name rollover-pod: Found 1 pods out of 1 +STEP: ensuring each pod is running 03/07/23 04:07:00.452 +Mar 7 04:07:00.452: INFO: Waiting for pods owned by replica set "test-rollover-controller" to become ready +Mar 7 04:07:02.455: INFO: Creating deployment "test-rollover-deployment" +Mar 7 04:07:02.460: INFO: Make sure deployment "test-rollover-deployment" performs scaling operations +Mar 7 04:07:04.465: INFO: Check revision of new replica set for deployment "test-rollover-deployment" +Mar 7 04:07:04.471: INFO: Ensure that both replica sets have 1 created replica +Mar 7 04:07:04.474: INFO: Rollover old replica sets for deployment "test-rollover-deployment" with new image update +Mar 7 04:07:04.482: INFO: Updating deployment test-rollover-deployment +Mar 7 04:07:04.482: INFO: Wait deployment "test-rollover-deployment" to be observed by the deployment controller +Mar 7 04:07:06.490: INFO: Wait for revision update of deployment "test-rollover-deployment" to 2 +Mar 7 04:07:06.495: INFO: Make sure deployment "test-rollover-deployment" is complete +Mar 7 04:07:06.500: INFO: all replica sets need to contain the pod-template-hash label +Mar 7 04:07:06.500: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:time.Date(2023, time.March, 7, 4, 7, 2, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 7, 4, 7, 2, 0, time.Local), Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.March, 7, 4, 7, 5, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 7, 4, 7, 2, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-6d45fd857b\" is progressing."}}, CollisionCount:(*int32)(nil)} +Mar 7 04:07:08.507: INFO: all replica sets need to contain the pod-template-hash label +Mar 7 04:07:08.507: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:time.Date(2023, time.March, 7, 4, 7, 2, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 7, 4, 7, 2, 0, time.Local), Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, 
v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.March, 7, 4, 7, 5, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 7, 4, 7, 2, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-6d45fd857b\" is progressing."}}, CollisionCount:(*int32)(nil)} +Mar 7 04:07:10.506: INFO: all replica sets need to contain the pod-template-hash label +Mar 7 04:07:10.506: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:time.Date(2023, time.March, 7, 4, 7, 2, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 7, 4, 7, 2, 0, time.Local), Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.March, 7, 4, 7, 5, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 7, 4, 7, 2, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-6d45fd857b\" is progressing."}}, CollisionCount:(*int32)(nil)} +Mar 7 04:07:12.506: INFO: all replica sets need to contain the pod-template-hash label +Mar 7 04:07:12.506: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:time.Date(2023, time.March, 7, 4, 7, 2, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 7, 4, 7, 2, 0, time.Local), Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.March, 7, 4, 7, 5, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 7, 4, 7, 2, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-6d45fd857b\" is progressing."}}, CollisionCount:(*int32)(nil)} +Mar 7 04:07:14.505: INFO: all replica sets need to contain the pod-template-hash label +Mar 7 04:07:14.505: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:time.Date(2023, time.March, 7, 4, 7, 2, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 7, 4, 7, 2, 0, time.Local), Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.March, 7, 4, 7, 5, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 7, 4, 7, 2, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-6d45fd857b\" is progressing."}}, CollisionCount:(*int32)(nil)} +Mar 7 04:07:16.505: INFO: +Mar 7 04:07:16.505: INFO: Ensure that both old replica sets have no replicas +[AfterEach] [sig-apps] Deployment + test/e2e/apps/deployment.go:84 +Mar 7 04:07:16.511: INFO: Deployment "test-rollover-deployment": +&Deployment{ObjectMeta:{test-rollover-deployment deployment-1597 e0693e11-eca6-4ddd-82a1-ba7a59a35907 84107 2 2023-03-07 04:07:02 +0000 UTC map[name:rollover-pod] 
map[deployment.kubernetes.io/revision:2] [] [] [{e2e.test Update apps/v1 2023-03-07 04:07:04 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:minReadySeconds":{},"f:progressDeadlineSeconds":{},"f:replicas":{},"f:revisionHistoryLimit":{},"f:selector":{},"f:strategy":{"f:rollingUpdate":{".":{},"f:maxSurge":{},"f:maxUnavailable":{}},"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} } {kube-controller-manager Update apps/v1 2023-03-07 04:07:15 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}}},"f:status":{"f:availableReplicas":{},"f:conditions":{".":{},"k:{\"type\":\"Available\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Progressing\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{},"f:updatedReplicas":{}}} status}]},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod] map[] [] [] []} {[] [] [{agnhost registry.k8s.io/e2e-test-images/agnhost:2.40 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc001627f78 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] nil }},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:0,MaxSurge:1,},},MinReadySeconds:10,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2023-03-07 04:07:02 +0000 UTC,LastTransitionTime:2023-03-07 04:07:02 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-rollover-deployment-6d45fd857b" has successfully progressed.,LastUpdateTime:2023-03-07 04:07:15 +0000 UTC,LastTransitionTime:2023-03-07 04:07:02 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},} + +Mar 7 04:07:16.514: INFO: New ReplicaSet "test-rollover-deployment-6d45fd857b" of Deployment "test-rollover-deployment": +&ReplicaSet{ObjectMeta:{test-rollover-deployment-6d45fd857b deployment-1597 e72f6b00-6537-45dd-9363-adaa157c79ad 84097 2 2023-03-07 04:07:04 +0000 UTC map[name:rollover-pod pod-template-hash:6d45fd857b] 
map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment test-rollover-deployment e0693e11-eca6-4ddd-82a1-ba7a59a35907 0xc005555a67 0xc005555a68}] [] [{kube-controller-manager Update apps/v1 2023-03-07 04:07:04 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e0693e11-eca6-4ddd-82a1-ba7a59a35907\"}":{}}},"f:spec":{"f:minReadySeconds":{},"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} } {kube-controller-manager Update apps/v1 2023-03-07 04:07:15 +0000 UTC FieldsV1 {"f:status":{"f:availableReplicas":{},"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{}}} status}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 6d45fd857b,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod pod-template-hash:6d45fd857b] map[] [] [] []} {[] [] [{agnhost registry.k8s.io/e2e-test-images/agnhost:2.40 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc005555b18 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] nil }},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:2,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} +Mar 7 04:07:16.514: INFO: All old ReplicaSets of Deployment "test-rollover-deployment": +Mar 7 04:07:16.514: INFO: &ReplicaSet{ObjectMeta:{test-rollover-controller deployment-1597 809b4c5a-8e82-4dfb-a3d1-40f6c8cc1226 84106 2 2023-03-07 04:06:55 +0000 UTC map[name:rollover-pod pod:httpd] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2] [{apps/v1 Deployment test-rollover-deployment e0693e11-eca6-4ddd-82a1-ba7a59a35907 0xc005555817 0xc005555818}] [] [{e2e.test Update apps/v1 2023-03-07 04:06:55 +0000 UTC FieldsV1 
{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod":{}}},"f:spec":{"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} } {kube-controller-manager Update apps/v1 2023-03-07 04:07:15 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e0693e11-eca6-4ddd-82a1-ba7a59a35907\"}":{}}},"f:spec":{"f:replicas":{}}} } {kube-controller-manager Update apps/v1 2023-03-07 04:07:15 +0000 UTC FieldsV1 {"f:status":{"f:observedGeneration":{},"f:replicas":{}}} status}]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod pod:httpd] map[] [] [] []} {[] [] [{httpd registry.k8s.io/e2e-test-images/httpd:2.4.38-2 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc0055558d8 ClusterFirst map[] false false false PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] nil }},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} +Mar 7 04:07:16.514: INFO: &ReplicaSet{ObjectMeta:{test-rollover-deployment-59b9df946d deployment-1597 57f008cc-b397-4cec-9eef-458acf8cef4e 84032 2 2023-03-07 04:07:02 +0000 UTC map[name:rollover-pod pod-template-hash:59b9df946d] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-rollover-deployment e0693e11-eca6-4ddd-82a1-ba7a59a35907 0xc005555947 0xc005555948}] [] [{kube-controller-manager Update apps/v1 2023-03-07 04:07:04 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e0693e11-eca6-4ddd-82a1-ba7a59a35907\"}":{}}},"f:spec":{"f:minReadySeconds":{},"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"redis-slave\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} } {kube-controller-manager Update apps/v1 2023-03-07 04:07:04 +0000 UTC FieldsV1 {"f:status":{"f:observedGeneration":{},"f:replicas":{}}} status}]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 
59b9df946d,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod pod-template-hash:59b9df946d] map[] [] [] []} {[] [] [{redis-slave gcr.io/google_samples/gb-redisslave:nonexistent [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc0055559f8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] nil }},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} +Mar 7 04:07:16.516: INFO: Pod "test-rollover-deployment-6d45fd857b-lnnz8" is available: +&Pod{ObjectMeta:{test-rollover-deployment-6d45fd857b-lnnz8 test-rollover-deployment-6d45fd857b- deployment-1597 6f06b8e5-3b92-4662-a3b1-a649e3d1e11e 84052 0 2023-03-07 04:07:04 +0000 UTC map[name:rollover-pod pod-template-hash:6d45fd857b] map[cni.projectcalico.org/containerID:3803ab4afb988154350055681fe89d35a9a8056f64d5931745192bd1287160d0 cni.projectcalico.org/podIP:10.233.247.9/32 cni.projectcalico.org/podIPs:10.233.247.9/32] [{apps/v1 ReplicaSet test-rollover-deployment-6d45fd857b e72f6b00-6537-45dd-9363-adaa157c79ad 0xc004000fd7 0xc004000fd8}] [] [{calico Update v1 2023-03-07 04:07:04 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:cni.projectcalico.org/containerID":{},"f:cni.projectcalico.org/podIP":{},"f:cni.projectcalico.org/podIPs":{}}}} status} {kube-controller-manager Update v1 2023-03-07 04:07:04 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e72f6b00-6537-45dd-9363-adaa157c79ad\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2023-03-07 04:07:05 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.233.247.9\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} 
status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-54ln7,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:registry.k8s.io/e2e-test-images/agnhost:2.40,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-54ln7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:node-2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,HostUsers:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodConditio
n{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-03-07 04:07:04 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-03-07 04:07:05 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-03-07 04:07:05 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-03-07 04:07:04 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:192.168.1.102,PodIP:10.233.247.9,StartTime:2023-03-07 04:07:04 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2023-03-07 04:07:05 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:registry.k8s.io/e2e-test-images/agnhost:2.40,ImageID:registry.k8s.io/e2e-test-images/agnhost@sha256:af7e3857d87770ddb40f5ea4f89b5a2709504ab1ee31f9ea4ab5823c045f2146,ContainerID:containerd://694ef462febc165b657ee49f18aa1b4bcf4b48c60054c4667b133efa091893e4,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.233.247.9,},},EphemeralContainerStatuses:[]ContainerStatus{},},} +[AfterEach] [sig-apps] Deployment + test/e2e/framework/framework.go:187 +Mar 7 04:07:16.516: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "deployment-1597" for this suite. 03/07/23 04:07:16.519 +{"msg":"PASSED [sig-apps] Deployment deployment should support rollover [Conformance]","completed":354,"skipped":6524,"failed":0} +------------------------------ +• [SLOW TEST] [21.144 seconds] +[sig-apps] Deployment +test/e2e/apps/framework.go:23 + deployment should support rollover [Conformance] + test/e2e/apps/deployment.go:132 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-apps] Deployment + test/e2e/framework/framework.go:186 + STEP: Creating a kubernetes client 03/07/23 04:06:55.407 + Mar 7 04:06:55.407: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 + STEP: Building a namespace api object, basename deployment 03/07/23 04:06:55.408 + STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 04:06:55.434 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 04:06:55.436 + [BeforeEach] [sig-apps] Deployment + test/e2e/apps/deployment.go:91 + [It] deployment should support rollover [Conformance] + test/e2e/apps/deployment.go:132 + Mar 7 04:06:55.448: INFO: Pod name rollover-pod: Found 0 pods out of 1 + Mar 7 04:07:00.452: INFO: Pod name rollover-pod: Found 1 pods out of 1 + STEP: ensuring each pod is running 03/07/23 04:07:00.452 + Mar 7 04:07:00.452: INFO: Waiting for pods owned by replica set "test-rollover-controller" to become ready + Mar 7 04:07:02.455: INFO: Creating deployment "test-rollover-deployment" + Mar 7 04:07:02.460: INFO: Make sure deployment "test-rollover-deployment" performs scaling operations + Mar 7 04:07:04.465: INFO: Check revision of new replica set for deployment "test-rollover-deployment" + Mar 7 04:07:04.471: INFO: Ensure that both replica sets have 1 created replica + Mar 7 04:07:04.474: INFO: Rollover old replica sets for deployment "test-rollover-deployment" with new image update + Mar 7 04:07:04.482: INFO: Updating deployment 
test-rollover-deployment + Mar 7 04:07:04.482: INFO: Wait deployment "test-rollover-deployment" to be observed by the deployment controller + Mar 7 04:07:06.490: INFO: Wait for revision update of deployment "test-rollover-deployment" to 2 + Mar 7 04:07:06.495: INFO: Make sure deployment "test-rollover-deployment" is complete + Mar 7 04:07:06.500: INFO: all replica sets need to contain the pod-template-hash label + Mar 7 04:07:06.500: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:time.Date(2023, time.March, 7, 4, 7, 2, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 7, 4, 7, 2, 0, time.Local), Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.March, 7, 4, 7, 5, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 7, 4, 7, 2, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-6d45fd857b\" is progressing."}}, CollisionCount:(*int32)(nil)} + Mar 7 04:07:08.507: INFO: all replica sets need to contain the pod-template-hash label + Mar 7 04:07:08.507: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:time.Date(2023, time.March, 7, 4, 7, 2, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 7, 4, 7, 2, 0, time.Local), Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.March, 7, 4, 7, 5, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 7, 4, 7, 2, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-6d45fd857b\" is progressing."}}, CollisionCount:(*int32)(nil)} + Mar 7 04:07:10.506: INFO: all replica sets need to contain the pod-template-hash label + Mar 7 04:07:10.506: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:time.Date(2023, time.March, 7, 4, 7, 2, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 7, 4, 7, 2, 0, time.Local), Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.March, 7, 4, 7, 5, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 7, 4, 7, 2, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-6d45fd857b\" is progressing."}}, CollisionCount:(*int32)(nil)} + Mar 7 04:07:12.506: INFO: all replica sets need to contain the pod-template-hash label + Mar 7 04:07:12.506: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:time.Date(2023, time.March, 7, 4, 7, 2, 0, 
time.Local), LastTransitionTime:time.Date(2023, time.March, 7, 4, 7, 2, 0, time.Local), Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.March, 7, 4, 7, 5, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 7, 4, 7, 2, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-6d45fd857b\" is progressing."}}, CollisionCount:(*int32)(nil)} + Mar 7 04:07:14.505: INFO: all replica sets need to contain the pod-template-hash label + Mar 7 04:07:14.505: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:time.Date(2023, time.March, 7, 4, 7, 2, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 7, 4, 7, 2, 0, time.Local), Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.March, 7, 4, 7, 5, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 7, 4, 7, 2, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-6d45fd857b\" is progressing."}}, CollisionCount:(*int32)(nil)} + Mar 7 04:07:16.505: INFO: + Mar 7 04:07:16.505: INFO: Ensure that both old replica sets have no replicas + [AfterEach] [sig-apps] Deployment + test/e2e/apps/deployment.go:84 + Mar 7 04:07:16.511: INFO: Deployment "test-rollover-deployment": + &Deployment{ObjectMeta:{test-rollover-deployment deployment-1597 e0693e11-eca6-4ddd-82a1-ba7a59a35907 84107 2 2023-03-07 04:07:02 +0000 UTC map[name:rollover-pod] map[deployment.kubernetes.io/revision:2] [] [] [{e2e.test Update apps/v1 2023-03-07 04:07:04 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:minReadySeconds":{},"f:progressDeadlineSeconds":{},"f:replicas":{},"f:revisionHistoryLimit":{},"f:selector":{},"f:strategy":{"f:rollingUpdate":{".":{},"f:maxSurge":{},"f:maxUnavailable":{}},"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} } {kube-controller-manager Update apps/v1 2023-03-07 04:07:15 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}}},"f:status":{"f:availableReplicas":{},"f:conditions":{".":{},"k:{\"type\":\"Available\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Progressing\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{},"f:updatedReplicas":{}}} status}]},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod] map[] [] [] []} {[] [] [{agnhost registry.k8s.io/e2e-test-images/agnhost:2.40 [] [] [] [] [] 
{map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc001627f78 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] nil }},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:0,MaxSurge:1,},},MinReadySeconds:10,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2023-03-07 04:07:02 +0000 UTC,LastTransitionTime:2023-03-07 04:07:02 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-rollover-deployment-6d45fd857b" has successfully progressed.,LastUpdateTime:2023-03-07 04:07:15 +0000 UTC,LastTransitionTime:2023-03-07 04:07:02 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},} + + Mar 7 04:07:16.514: INFO: New ReplicaSet "test-rollover-deployment-6d45fd857b" of Deployment "test-rollover-deployment": + &ReplicaSet{ObjectMeta:{test-rollover-deployment-6d45fd857b deployment-1597 e72f6b00-6537-45dd-9363-adaa157c79ad 84097 2 2023-03-07 04:07:04 +0000 UTC map[name:rollover-pod pod-template-hash:6d45fd857b] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment test-rollover-deployment e0693e11-eca6-4ddd-82a1-ba7a59a35907 0xc005555a67 0xc005555a68}] [] [{kube-controller-manager Update apps/v1 2023-03-07 04:07:04 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e0693e11-eca6-4ddd-82a1-ba7a59a35907\"}":{}}},"f:spec":{"f:minReadySeconds":{},"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} } {kube-controller-manager Update apps/v1 2023-03-07 04:07:15 +0000 UTC FieldsV1 {"f:status":{"f:availableReplicas":{},"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{}}} status}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 6d45fd857b,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod pod-template-hash:6d45fd857b] map[] [] [] []} {[] [] [{agnhost registry.k8s.io/e2e-test-images/agnhost:2.40 [] 
[] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc005555b18 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] nil }},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:2,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} + Mar 7 04:07:16.514: INFO: All old ReplicaSets of Deployment "test-rollover-deployment": + Mar 7 04:07:16.514: INFO: &ReplicaSet{ObjectMeta:{test-rollover-controller deployment-1597 809b4c5a-8e82-4dfb-a3d1-40f6c8cc1226 84106 2 2023-03-07 04:06:55 +0000 UTC map[name:rollover-pod pod:httpd] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2] [{apps/v1 Deployment test-rollover-deployment e0693e11-eca6-4ddd-82a1-ba7a59a35907 0xc005555817 0xc005555818}] [] [{e2e.test Update apps/v1 2023-03-07 04:06:55 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod":{}}},"f:spec":{"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} } {kube-controller-manager Update apps/v1 2023-03-07 04:07:15 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e0693e11-eca6-4ddd-82a1-ba7a59a35907\"}":{}}},"f:spec":{"f:replicas":{}}} } {kube-controller-manager Update apps/v1 2023-03-07 04:07:15 +0000 UTC FieldsV1 {"f:status":{"f:observedGeneration":{},"f:replicas":{}}} status}]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod pod:httpd] map[] [] [] []} {[] [] [{httpd registry.k8s.io/e2e-test-images/httpd:2.4.38-2 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc0055558d8 ClusterFirst map[] false false false PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] nil }},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} + Mar 7 04:07:16.514: INFO: &ReplicaSet{ObjectMeta:{test-rollover-deployment-59b9df946d deployment-1597 57f008cc-b397-4cec-9eef-458acf8cef4e 84032 2 2023-03-07 04:07:02 +0000 UTC map[name:rollover-pod pod-template-hash:59b9df946d] map[deployment.kubernetes.io/desired-replicas:1 
deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-rollover-deployment e0693e11-eca6-4ddd-82a1-ba7a59a35907 0xc005555947 0xc005555948}] [] [{kube-controller-manager Update apps/v1 2023-03-07 04:07:04 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e0693e11-eca6-4ddd-82a1-ba7a59a35907\"}":{}}},"f:spec":{"f:minReadySeconds":{},"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"redis-slave\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} } {kube-controller-manager Update apps/v1 2023-03-07 04:07:04 +0000 UTC FieldsV1 {"f:status":{"f:observedGeneration":{},"f:replicas":{}}} status}]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 59b9df946d,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod pod-template-hash:59b9df946d] map[] [] [] []} {[] [] [{redis-slave gcr.io/google_samples/gb-redisslave:nonexistent [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc0055559f8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] nil }},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} + Mar 7 04:07:16.516: INFO: Pod "test-rollover-deployment-6d45fd857b-lnnz8" is available: + &Pod{ObjectMeta:{test-rollover-deployment-6d45fd857b-lnnz8 test-rollover-deployment-6d45fd857b- deployment-1597 6f06b8e5-3b92-4662-a3b1-a649e3d1e11e 84052 0 2023-03-07 04:07:04 +0000 UTC map[name:rollover-pod pod-template-hash:6d45fd857b] map[cni.projectcalico.org/containerID:3803ab4afb988154350055681fe89d35a9a8056f64d5931745192bd1287160d0 cni.projectcalico.org/podIP:10.233.247.9/32 cni.projectcalico.org/podIPs:10.233.247.9/32] [{apps/v1 ReplicaSet test-rollover-deployment-6d45fd857b e72f6b00-6537-45dd-9363-adaa157c79ad 0xc004000fd7 0xc004000fd8}] [] [{calico Update v1 2023-03-07 04:07:04 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:cni.projectcalico.org/containerID":{},"f:cni.projectcalico.org/podIP":{},"f:cni.projectcalico.org/podIPs":{}}}} status} {kube-controller-manager Update v1 2023-03-07 04:07:04 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e72f6b00-6537-45dd-9363-adaa157c79ad\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2023-03-07 04:07:05 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.233.247.9\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-54ln7,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:registry.k8s.io/e2e-test-images/agnhost:2.40,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-54ln7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:node-2,HostNetwork:fa
lse,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,HostUsers:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-03-07 04:07:04 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-03-07 04:07:05 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-03-07 04:07:05 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-03-07 04:07:04 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:192.168.1.102,PodIP:10.233.247.9,StartTime:2023-03-07 04:07:04 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2023-03-07 04:07:05 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:registry.k8s.io/e2e-test-images/agnhost:2.40,ImageID:registry.k8s.io/e2e-test-images/agnhost@sha256:af7e3857d87770ddb40f5ea4f89b5a2709504ab1ee31f9ea4ab5823c045f2146,ContainerID:containerd://694ef462febc165b657ee49f18aa1b4bcf4b48c60054c4667b133efa091893e4,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.233.247.9,},},EphemeralContainerStatuses:[]ContainerStatus{},},} + [AfterEach] [sig-apps] Deployment + test/e2e/framework/framework.go:187 + Mar 7 04:07:16.516: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + STEP: Destroying namespace "deployment-1597" for this suite. 
03/07/23 04:07:16.519 + << End Captured GinkgoWriter Output +------------------------------ +SSSSS +------------------------------ +[sig-node] Kubelet when scheduling an agnhost Pod with hostAliases + should write entries to /etc/hosts [NodeConformance] [Conformance] + test/e2e/common/node/kubelet.go:148 +[BeforeEach] [sig-node] Kubelet + test/e2e/framework/framework.go:186 +STEP: Creating a kubernetes client 03/07/23 04:07:16.552 +Mar 7 04:07:16.552: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 +STEP: Building a namespace api object, basename kubelet-test 03/07/23 04:07:16.553 +STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 04:07:16.566 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 04:07:16.569 +[BeforeEach] [sig-node] Kubelet + test/e2e/common/node/kubelet.go:41 +[It] should write entries to /etc/hosts [NodeConformance] [Conformance] + test/e2e/common/node/kubelet.go:148 +STEP: Waiting for pod completion 03/07/23 04:07:16.578 +Mar 7 04:07:16.578: INFO: Waiting up to 3m0s for pod "agnhost-host-aliases4e5b7128-780c-45ec-89f7-e268d100e8ed" in namespace "kubelet-test-5314" to be "completed" +Mar 7 04:07:16.580: INFO: Pod "agnhost-host-aliases4e5b7128-780c-45ec-89f7-e268d100e8ed": Phase="Pending", Reason="", readiness=false. Elapsed: 1.897116ms +Mar 7 04:07:18.583: INFO: Pod "agnhost-host-aliases4e5b7128-780c-45ec-89f7-e268d100e8ed": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005475105s +Mar 7 04:07:20.584: INFO: Pod "agnhost-host-aliases4e5b7128-780c-45ec-89f7-e268d100e8ed": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.006070681s +Mar 7 04:07:20.584: INFO: Pod "agnhost-host-aliases4e5b7128-780c-45ec-89f7-e268d100e8ed" satisfied condition "completed" +[AfterEach] [sig-node] Kubelet + test/e2e/framework/framework.go:187 +Mar 7 04:07:20.595: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "kubelet-test-5314" for this suite. 
03/07/23 04:07:20.598 +{"msg":"PASSED [sig-node] Kubelet when scheduling an agnhost Pod with hostAliases should write entries to /etc/hosts [NodeConformance] [Conformance]","completed":355,"skipped":6529,"failed":0} +------------------------------ +• [4.066 seconds] +[sig-node] Kubelet +test/e2e/common/node/framework.go:23 + when scheduling an agnhost Pod with hostAliases + test/e2e/common/node/kubelet.go:140 + should write entries to /etc/hosts [NodeConformance] [Conformance] + test/e2e/common/node/kubelet.go:148 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-node] Kubelet + test/e2e/framework/framework.go:186 + STEP: Creating a kubernetes client 03/07/23 04:07:16.552 + Mar 7 04:07:16.552: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 + STEP: Building a namespace api object, basename kubelet-test 03/07/23 04:07:16.553 + STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 04:07:16.566 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 04:07:16.569 + [BeforeEach] [sig-node] Kubelet + test/e2e/common/node/kubelet.go:41 + [It] should write entries to /etc/hosts [NodeConformance] [Conformance] + test/e2e/common/node/kubelet.go:148 + STEP: Waiting for pod completion 03/07/23 04:07:16.578 + Mar 7 04:07:16.578: INFO: Waiting up to 3m0s for pod "agnhost-host-aliases4e5b7128-780c-45ec-89f7-e268d100e8ed" in namespace "kubelet-test-5314" to be "completed" + Mar 7 04:07:16.580: INFO: Pod "agnhost-host-aliases4e5b7128-780c-45ec-89f7-e268d100e8ed": Phase="Pending", Reason="", readiness=false. Elapsed: 1.897116ms + Mar 7 04:07:18.583: INFO: Pod "agnhost-host-aliases4e5b7128-780c-45ec-89f7-e268d100e8ed": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005475105s + Mar 7 04:07:20.584: INFO: Pod "agnhost-host-aliases4e5b7128-780c-45ec-89f7-e268d100e8ed": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.006070681s + Mar 7 04:07:20.584: INFO: Pod "agnhost-host-aliases4e5b7128-780c-45ec-89f7-e268d100e8ed" satisfied condition "completed" + [AfterEach] [sig-node] Kubelet + test/e2e/framework/framework.go:187 + Mar 7 04:07:20.595: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + STEP: Destroying namespace "kubelet-test-5314" for this suite. 
03/07/23 04:07:20.598 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-storage] Downward API volume + should provide podname only [NodeConformance] [Conformance] + test/e2e/common/storage/downwardapi_volume.go:52 +[BeforeEach] [sig-storage] Downward API volume + test/e2e/framework/framework.go:186 +STEP: Creating a kubernetes client 03/07/23 04:07:20.621 +Mar 7 04:07:20.621: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 +STEP: Building a namespace api object, basename downward-api 03/07/23 04:07:20.622 +STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 04:07:20.635 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 04:07:20.637 +[BeforeEach] [sig-storage] Downward API volume + test/e2e/common/storage/downwardapi_volume.go:43 +[It] should provide podname only [NodeConformance] [Conformance] + test/e2e/common/storage/downwardapi_volume.go:52 +STEP: Creating a pod to test downward API volume plugin 03/07/23 04:07:20.638 +Mar 7 04:07:20.644: INFO: Waiting up to 5m0s for pod "downwardapi-volume-1904d34e-9cf5-4132-870d-5978f0ace374" in namespace "downward-api-318" to be "Succeeded or Failed" +Mar 7 04:07:20.647: INFO: Pod "downwardapi-volume-1904d34e-9cf5-4132-870d-5978f0ace374": Phase="Pending", Reason="", readiness=false. Elapsed: 2.76514ms +Mar 7 04:07:22.651: INFO: Pod "downwardapi-volume-1904d34e-9cf5-4132-870d-5978f0ace374": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006858693s +Mar 7 04:07:24.651: INFO: Pod "downwardapi-volume-1904d34e-9cf5-4132-870d-5978f0ace374": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.006282354s +STEP: Saw pod success 03/07/23 04:07:24.651 +Mar 7 04:07:24.651: INFO: Pod "downwardapi-volume-1904d34e-9cf5-4132-870d-5978f0ace374" satisfied condition "Succeeded or Failed" +Mar 7 04:07:24.654: INFO: Trying to get logs from node node-2 pod downwardapi-volume-1904d34e-9cf5-4132-870d-5978f0ace374 container client-container: +STEP: delete the pod 03/07/23 04:07:24.659 +Mar 7 04:07:24.696: INFO: Waiting for pod downwardapi-volume-1904d34e-9cf5-4132-870d-5978f0ace374 to disappear +Mar 7 04:07:24.698: INFO: Pod downwardapi-volume-1904d34e-9cf5-4132-870d-5978f0ace374 no longer exists +[AfterEach] [sig-storage] Downward API volume + test/e2e/framework/framework.go:187 +Mar 7 04:07:24.698: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "downward-api-318" for this suite. 
03/07/23 04:07:24.701 +{"msg":"PASSED [sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance]","completed":356,"skipped":6603,"failed":0} +------------------------------ +• [4.084 seconds] +[sig-storage] Downward API volume +test/e2e/common/storage/framework.go:23 + should provide podname only [NodeConformance] [Conformance] + test/e2e/common/storage/downwardapi_volume.go:52 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-storage] Downward API volume + test/e2e/framework/framework.go:186 + STEP: Creating a kubernetes client 03/07/23 04:07:20.621 + Mar 7 04:07:20.621: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 + STEP: Building a namespace api object, basename downward-api 03/07/23 04:07:20.622 + STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 04:07:20.635 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 04:07:20.637 + [BeforeEach] [sig-storage] Downward API volume + test/e2e/common/storage/downwardapi_volume.go:43 + [It] should provide podname only [NodeConformance] [Conformance] + test/e2e/common/storage/downwardapi_volume.go:52 + STEP: Creating a pod to test downward API volume plugin 03/07/23 04:07:20.638 + Mar 7 04:07:20.644: INFO: Waiting up to 5m0s for pod "downwardapi-volume-1904d34e-9cf5-4132-870d-5978f0ace374" in namespace "downward-api-318" to be "Succeeded or Failed" + Mar 7 04:07:20.647: INFO: Pod "downwardapi-volume-1904d34e-9cf5-4132-870d-5978f0ace374": Phase="Pending", Reason="", readiness=false. Elapsed: 2.76514ms + Mar 7 04:07:22.651: INFO: Pod "downwardapi-volume-1904d34e-9cf5-4132-870d-5978f0ace374": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006858693s + Mar 7 04:07:24.651: INFO: Pod "downwardapi-volume-1904d34e-9cf5-4132-870d-5978f0ace374": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.006282354s + STEP: Saw pod success 03/07/23 04:07:24.651 + Mar 7 04:07:24.651: INFO: Pod "downwardapi-volume-1904d34e-9cf5-4132-870d-5978f0ace374" satisfied condition "Succeeded or Failed" + Mar 7 04:07:24.654: INFO: Trying to get logs from node node-2 pod downwardapi-volume-1904d34e-9cf5-4132-870d-5978f0ace374 container client-container: + STEP: delete the pod 03/07/23 04:07:24.659 + Mar 7 04:07:24.696: INFO: Waiting for pod downwardapi-volume-1904d34e-9cf5-4132-870d-5978f0ace374 to disappear + Mar 7 04:07:24.698: INFO: Pod downwardapi-volume-1904d34e-9cf5-4132-870d-5978f0ace374 no longer exists + [AfterEach] [sig-storage] Downward API volume + test/e2e/framework/framework.go:187 + Mar 7 04:07:24.698: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + STEP: Destroying namespace "downward-api-318" for this suite. 
03/07/23 04:07:24.701 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-api-machinery] Discovery + should validate PreferredVersion for each APIGroup [Conformance] + test/e2e/apimachinery/discovery.go:122 +[BeforeEach] [sig-api-machinery] Discovery + test/e2e/framework/framework.go:186 +STEP: Creating a kubernetes client 03/07/23 04:07:24.706 +Mar 7 04:07:24.706: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 +STEP: Building a namespace api object, basename discovery 03/07/23 04:07:24.707 +STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 04:07:24.723 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 04:07:24.725 +[BeforeEach] [sig-api-machinery] Discovery + test/e2e/apimachinery/discovery.go:43 +STEP: Setting up server cert 03/07/23 04:07:24.727 +[It] should validate PreferredVersion for each APIGroup [Conformance] + test/e2e/apimachinery/discovery.go:122 +Mar 7 04:07:25.155: INFO: Checking APIGroup: apiregistration.k8s.io +Mar 7 04:07:25.156: INFO: PreferredVersion.GroupVersion: apiregistration.k8s.io/v1 +Mar 7 04:07:25.156: INFO: Versions found [{apiregistration.k8s.io/v1 v1}] +Mar 7 04:07:25.156: INFO: apiregistration.k8s.io/v1 matches apiregistration.k8s.io/v1 +Mar 7 04:07:25.156: INFO: Checking APIGroup: apps +Mar 7 04:07:25.156: INFO: PreferredVersion.GroupVersion: apps/v1 +Mar 7 04:07:25.156: INFO: Versions found [{apps/v1 v1}] +Mar 7 04:07:25.156: INFO: apps/v1 matches apps/v1 +Mar 7 04:07:25.156: INFO: Checking APIGroup: events.k8s.io +Mar 7 04:07:25.157: INFO: PreferredVersion.GroupVersion: events.k8s.io/v1 +Mar 7 04:07:25.157: INFO: Versions found [{events.k8s.io/v1 v1}] +Mar 7 04:07:25.157: INFO: events.k8s.io/v1 matches events.k8s.io/v1 +Mar 7 04:07:25.157: INFO: Checking APIGroup: authentication.k8s.io +Mar 7 04:07:25.158: INFO: PreferredVersion.GroupVersion: authentication.k8s.io/v1 +Mar 7 04:07:25.158: INFO: Versions found [{authentication.k8s.io/v1 v1}] +Mar 7 04:07:25.158: INFO: authentication.k8s.io/v1 matches authentication.k8s.io/v1 +Mar 7 04:07:25.158: INFO: Checking APIGroup: authorization.k8s.io +Mar 7 04:07:25.158: INFO: PreferredVersion.GroupVersion: authorization.k8s.io/v1 +Mar 7 04:07:25.158: INFO: Versions found [{authorization.k8s.io/v1 v1}] +Mar 7 04:07:25.158: INFO: authorization.k8s.io/v1 matches authorization.k8s.io/v1 +Mar 7 04:07:25.158: INFO: Checking APIGroup: autoscaling +Mar 7 04:07:25.159: INFO: PreferredVersion.GroupVersion: autoscaling/v2 +Mar 7 04:07:25.159: INFO: Versions found [{autoscaling/v2 v2} {autoscaling/v1 v1} {autoscaling/v2beta2 v2beta2}] +Mar 7 04:07:25.159: INFO: autoscaling/v2 matches autoscaling/v2 +Mar 7 04:07:25.159: INFO: Checking APIGroup: batch +Mar 7 04:07:25.160: INFO: PreferredVersion.GroupVersion: batch/v1 +Mar 7 04:07:25.160: INFO: Versions found [{batch/v1 v1}] +Mar 7 04:07:25.160: INFO: batch/v1 matches batch/v1 +Mar 7 04:07:25.160: INFO: Checking APIGroup: certificates.k8s.io +Mar 7 04:07:25.160: INFO: PreferredVersion.GroupVersion: certificates.k8s.io/v1 +Mar 7 04:07:25.160: INFO: Versions found [{certificates.k8s.io/v1 v1}] +Mar 7 04:07:25.160: INFO: certificates.k8s.io/v1 matches certificates.k8s.io/v1 +Mar 7 04:07:25.160: INFO: Checking APIGroup: networking.k8s.io +Mar 7 04:07:25.161: INFO: PreferredVersion.GroupVersion: networking.k8s.io/v1 +Mar 7 04:07:25.161: INFO: Versions found [{networking.k8s.io/v1 v1}] +Mar 7 04:07:25.161: INFO: networking.k8s.io/v1 matches 
networking.k8s.io/v1 +Mar 7 04:07:25.161: INFO: Checking APIGroup: policy +Mar 7 04:07:25.161: INFO: PreferredVersion.GroupVersion: policy/v1 +Mar 7 04:07:25.161: INFO: Versions found [{policy/v1 v1}] +Mar 7 04:07:25.162: INFO: policy/v1 matches policy/v1 +Mar 7 04:07:25.162: INFO: Checking APIGroup: rbac.authorization.k8s.io +Mar 7 04:07:25.162: INFO: PreferredVersion.GroupVersion: rbac.authorization.k8s.io/v1 +Mar 7 04:07:25.162: INFO: Versions found [{rbac.authorization.k8s.io/v1 v1}] +Mar 7 04:07:25.162: INFO: rbac.authorization.k8s.io/v1 matches rbac.authorization.k8s.io/v1 +Mar 7 04:07:25.162: INFO: Checking APIGroup: storage.k8s.io +Mar 7 04:07:25.162: INFO: PreferredVersion.GroupVersion: storage.k8s.io/v1 +Mar 7 04:07:25.162: INFO: Versions found [{storage.k8s.io/v1 v1} {storage.k8s.io/v1beta1 v1beta1}] +Mar 7 04:07:25.162: INFO: storage.k8s.io/v1 matches storage.k8s.io/v1 +Mar 7 04:07:25.162: INFO: Checking APIGroup: admissionregistration.k8s.io +Mar 7 04:07:25.163: INFO: PreferredVersion.GroupVersion: admissionregistration.k8s.io/v1 +Mar 7 04:07:25.163: INFO: Versions found [{admissionregistration.k8s.io/v1 v1}] +Mar 7 04:07:25.163: INFO: admissionregistration.k8s.io/v1 matches admissionregistration.k8s.io/v1 +Mar 7 04:07:25.163: INFO: Checking APIGroup: apiextensions.k8s.io +Mar 7 04:07:25.163: INFO: PreferredVersion.GroupVersion: apiextensions.k8s.io/v1 +Mar 7 04:07:25.163: INFO: Versions found [{apiextensions.k8s.io/v1 v1}] +Mar 7 04:07:25.163: INFO: apiextensions.k8s.io/v1 matches apiextensions.k8s.io/v1 +Mar 7 04:07:25.163: INFO: Checking APIGroup: scheduling.k8s.io +Mar 7 04:07:25.164: INFO: PreferredVersion.GroupVersion: scheduling.k8s.io/v1 +Mar 7 04:07:25.164: INFO: Versions found [{scheduling.k8s.io/v1 v1}] +Mar 7 04:07:25.164: INFO: scheduling.k8s.io/v1 matches scheduling.k8s.io/v1 +Mar 7 04:07:25.164: INFO: Checking APIGroup: coordination.k8s.io +Mar 7 04:07:25.165: INFO: PreferredVersion.GroupVersion: coordination.k8s.io/v1 +Mar 7 04:07:25.165: INFO: Versions found [{coordination.k8s.io/v1 v1}] +Mar 7 04:07:25.165: INFO: coordination.k8s.io/v1 matches coordination.k8s.io/v1 +Mar 7 04:07:25.165: INFO: Checking APIGroup: node.k8s.io +Mar 7 04:07:25.165: INFO: PreferredVersion.GroupVersion: node.k8s.io/v1 +Mar 7 04:07:25.165: INFO: Versions found [{node.k8s.io/v1 v1}] +Mar 7 04:07:25.165: INFO: node.k8s.io/v1 matches node.k8s.io/v1 +Mar 7 04:07:25.165: INFO: Checking APIGroup: discovery.k8s.io +Mar 7 04:07:25.166: INFO: PreferredVersion.GroupVersion: discovery.k8s.io/v1 +Mar 7 04:07:25.166: INFO: Versions found [{discovery.k8s.io/v1 v1}] +Mar 7 04:07:25.166: INFO: discovery.k8s.io/v1 matches discovery.k8s.io/v1 +Mar 7 04:07:25.166: INFO: Checking APIGroup: flowcontrol.apiserver.k8s.io +Mar 7 04:07:25.166: INFO: PreferredVersion.GroupVersion: flowcontrol.apiserver.k8s.io/v1beta2 +Mar 7 04:07:25.166: INFO: Versions found [{flowcontrol.apiserver.k8s.io/v1beta2 v1beta2} {flowcontrol.apiserver.k8s.io/v1beta1 v1beta1}] +Mar 7 04:07:25.166: INFO: flowcontrol.apiserver.k8s.io/v1beta2 matches flowcontrol.apiserver.k8s.io/v1beta2 +Mar 7 04:07:25.166: INFO: Checking APIGroup: crd.projectcalico.org +Mar 7 04:07:25.167: INFO: PreferredVersion.GroupVersion: crd.projectcalico.org/v1 +Mar 7 04:07:25.167: INFO: Versions found [{crd.projectcalico.org/v1 v1}] +Mar 7 04:07:25.167: INFO: crd.projectcalico.org/v1 matches crd.projectcalico.org/v1 +Mar 7 04:07:25.167: INFO: Checking APIGroup: dex.coreos.com +Mar 7 04:07:25.167: INFO: PreferredVersion.GroupVersion: dex.coreos.com/v1 +Mar 7 
04:07:25.167: INFO: Versions found [{dex.coreos.com/v1 v1}] +Mar 7 04:07:25.167: INFO: dex.coreos.com/v1 matches dex.coreos.com/v1 +Mar 7 04:07:25.167: INFO: Checking APIGroup: monitoring.coreos.com +Mar 7 04:07:25.168: INFO: PreferredVersion.GroupVersion: monitoring.coreos.com/v1 +Mar 7 04:07:25.168: INFO: Versions found [{monitoring.coreos.com/v1 v1} {monitoring.coreos.com/v1alpha1 v1alpha1}] +Mar 7 04:07:25.168: INFO: monitoring.coreos.com/v1 matches monitoring.coreos.com/v1 +Mar 7 04:07:25.168: INFO: Checking APIGroup: metalk8s.scality.com +Mar 7 04:07:25.168: INFO: PreferredVersion.GroupVersion: metalk8s.scality.com/v1alpha1 +Mar 7 04:07:25.168: INFO: Versions found [{metalk8s.scality.com/v1alpha1 v1alpha1}] +Mar 7 04:07:25.168: INFO: metalk8s.scality.com/v1alpha1 matches metalk8s.scality.com/v1alpha1 +Mar 7 04:07:25.168: INFO: Checking APIGroup: storage.metalk8s.scality.com +Mar 7 04:07:25.169: INFO: PreferredVersion.GroupVersion: storage.metalk8s.scality.com/v1alpha1 +Mar 7 04:07:25.169: INFO: Versions found [{storage.metalk8s.scality.com/v1alpha1 v1alpha1}] +Mar 7 04:07:25.169: INFO: storage.metalk8s.scality.com/v1alpha1 matches storage.metalk8s.scality.com/v1alpha1 +Mar 7 04:07:25.169: INFO: Checking APIGroup: custom.metrics.k8s.io +Mar 7 04:07:25.169: INFO: PreferredVersion.GroupVersion: custom.metrics.k8s.io/v1beta1 +Mar 7 04:07:25.169: INFO: Versions found [{custom.metrics.k8s.io/v1beta1 v1beta1}] +Mar 7 04:07:25.169: INFO: custom.metrics.k8s.io/v1beta1 matches custom.metrics.k8s.io/v1beta1 +Mar 7 04:07:25.169: INFO: Checking APIGroup: metrics.k8s.io +Mar 7 04:07:25.170: INFO: PreferredVersion.GroupVersion: metrics.k8s.io/v1beta1 +Mar 7 04:07:25.170: INFO: Versions found [{metrics.k8s.io/v1beta1 v1beta1}] +Mar 7 04:07:25.170: INFO: metrics.k8s.io/v1beta1 matches metrics.k8s.io/v1beta1 +[AfterEach] [sig-api-machinery] Discovery + test/e2e/framework/framework.go:187 +Mar 7 04:07:25.170: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "discovery-7910" for this suite. 
03/07/23 04:07:25.173 +{"msg":"PASSED [sig-api-machinery] Discovery should validate PreferredVersion for each APIGroup [Conformance]","completed":357,"skipped":6625,"failed":0} +------------------------------ +• [0.472 seconds] +[sig-api-machinery] Discovery +test/e2e/apimachinery/framework.go:23 + should validate PreferredVersion for each APIGroup [Conformance] + test/e2e/apimachinery/discovery.go:122 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-api-machinery] Discovery + test/e2e/framework/framework.go:186 + STEP: Creating a kubernetes client 03/07/23 04:07:24.706 + Mar 7 04:07:24.706: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 + STEP: Building a namespace api object, basename discovery 03/07/23 04:07:24.707 + STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 04:07:24.723 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 04:07:24.725 + [BeforeEach] [sig-api-machinery] Discovery + test/e2e/apimachinery/discovery.go:43 + STEP: Setting up server cert 03/07/23 04:07:24.727 + [It] should validate PreferredVersion for each APIGroup [Conformance] + test/e2e/apimachinery/discovery.go:122 + Mar 7 04:07:25.155: INFO: Checking APIGroup: apiregistration.k8s.io + Mar 7 04:07:25.156: INFO: PreferredVersion.GroupVersion: apiregistration.k8s.io/v1 + Mar 7 04:07:25.156: INFO: Versions found [{apiregistration.k8s.io/v1 v1}] + Mar 7 04:07:25.156: INFO: apiregistration.k8s.io/v1 matches apiregistration.k8s.io/v1 + Mar 7 04:07:25.156: INFO: Checking APIGroup: apps + Mar 7 04:07:25.156: INFO: PreferredVersion.GroupVersion: apps/v1 + Mar 7 04:07:25.156: INFO: Versions found [{apps/v1 v1}] + Mar 7 04:07:25.156: INFO: apps/v1 matches apps/v1 + Mar 7 04:07:25.156: INFO: Checking APIGroup: events.k8s.io + Mar 7 04:07:25.157: INFO: PreferredVersion.GroupVersion: events.k8s.io/v1 + Mar 7 04:07:25.157: INFO: Versions found [{events.k8s.io/v1 v1}] + Mar 7 04:07:25.157: INFO: events.k8s.io/v1 matches events.k8s.io/v1 + Mar 7 04:07:25.157: INFO: Checking APIGroup: authentication.k8s.io + Mar 7 04:07:25.158: INFO: PreferredVersion.GroupVersion: authentication.k8s.io/v1 + Mar 7 04:07:25.158: INFO: Versions found [{authentication.k8s.io/v1 v1}] + Mar 7 04:07:25.158: INFO: authentication.k8s.io/v1 matches authentication.k8s.io/v1 + Mar 7 04:07:25.158: INFO: Checking APIGroup: authorization.k8s.io + Mar 7 04:07:25.158: INFO: PreferredVersion.GroupVersion: authorization.k8s.io/v1 + Mar 7 04:07:25.158: INFO: Versions found [{authorization.k8s.io/v1 v1}] + Mar 7 04:07:25.158: INFO: authorization.k8s.io/v1 matches authorization.k8s.io/v1 + Mar 7 04:07:25.158: INFO: Checking APIGroup: autoscaling + Mar 7 04:07:25.159: INFO: PreferredVersion.GroupVersion: autoscaling/v2 + Mar 7 04:07:25.159: INFO: Versions found [{autoscaling/v2 v2} {autoscaling/v1 v1} {autoscaling/v2beta2 v2beta2}] + Mar 7 04:07:25.159: INFO: autoscaling/v2 matches autoscaling/v2 + Mar 7 04:07:25.159: INFO: Checking APIGroup: batch + Mar 7 04:07:25.160: INFO: PreferredVersion.GroupVersion: batch/v1 + Mar 7 04:07:25.160: INFO: Versions found [{batch/v1 v1}] + Mar 7 04:07:25.160: INFO: batch/v1 matches batch/v1 + Mar 7 04:07:25.160: INFO: Checking APIGroup: certificates.k8s.io + Mar 7 04:07:25.160: INFO: PreferredVersion.GroupVersion: certificates.k8s.io/v1 + Mar 7 04:07:25.160: INFO: Versions found [{certificates.k8s.io/v1 v1}] + Mar 7 04:07:25.160: INFO: certificates.k8s.io/v1 matches certificates.k8s.io/v1 + Mar 7 04:07:25.160: INFO: Checking APIGroup: networking.k8s.io + Mar 7 
04:07:25.161: INFO: PreferredVersion.GroupVersion: networking.k8s.io/v1 + Mar 7 04:07:25.161: INFO: Versions found [{networking.k8s.io/v1 v1}] + Mar 7 04:07:25.161: INFO: networking.k8s.io/v1 matches networking.k8s.io/v1 + Mar 7 04:07:25.161: INFO: Checking APIGroup: policy + Mar 7 04:07:25.161: INFO: PreferredVersion.GroupVersion: policy/v1 + Mar 7 04:07:25.161: INFO: Versions found [{policy/v1 v1}] + Mar 7 04:07:25.162: INFO: policy/v1 matches policy/v1 + Mar 7 04:07:25.162: INFO: Checking APIGroup: rbac.authorization.k8s.io + Mar 7 04:07:25.162: INFO: PreferredVersion.GroupVersion: rbac.authorization.k8s.io/v1 + Mar 7 04:07:25.162: INFO: Versions found [{rbac.authorization.k8s.io/v1 v1}] + Mar 7 04:07:25.162: INFO: rbac.authorization.k8s.io/v1 matches rbac.authorization.k8s.io/v1 + Mar 7 04:07:25.162: INFO: Checking APIGroup: storage.k8s.io + Mar 7 04:07:25.162: INFO: PreferredVersion.GroupVersion: storage.k8s.io/v1 + Mar 7 04:07:25.162: INFO: Versions found [{storage.k8s.io/v1 v1} {storage.k8s.io/v1beta1 v1beta1}] + Mar 7 04:07:25.162: INFO: storage.k8s.io/v1 matches storage.k8s.io/v1 + Mar 7 04:07:25.162: INFO: Checking APIGroup: admissionregistration.k8s.io + Mar 7 04:07:25.163: INFO: PreferredVersion.GroupVersion: admissionregistration.k8s.io/v1 + Mar 7 04:07:25.163: INFO: Versions found [{admissionregistration.k8s.io/v1 v1}] + Mar 7 04:07:25.163: INFO: admissionregistration.k8s.io/v1 matches admissionregistration.k8s.io/v1 + Mar 7 04:07:25.163: INFO: Checking APIGroup: apiextensions.k8s.io + Mar 7 04:07:25.163: INFO: PreferredVersion.GroupVersion: apiextensions.k8s.io/v1 + Mar 7 04:07:25.163: INFO: Versions found [{apiextensions.k8s.io/v1 v1}] + Mar 7 04:07:25.163: INFO: apiextensions.k8s.io/v1 matches apiextensions.k8s.io/v1 + Mar 7 04:07:25.163: INFO: Checking APIGroup: scheduling.k8s.io + Mar 7 04:07:25.164: INFO: PreferredVersion.GroupVersion: scheduling.k8s.io/v1 + Mar 7 04:07:25.164: INFO: Versions found [{scheduling.k8s.io/v1 v1}] + Mar 7 04:07:25.164: INFO: scheduling.k8s.io/v1 matches scheduling.k8s.io/v1 + Mar 7 04:07:25.164: INFO: Checking APIGroup: coordination.k8s.io + Mar 7 04:07:25.165: INFO: PreferredVersion.GroupVersion: coordination.k8s.io/v1 + Mar 7 04:07:25.165: INFO: Versions found [{coordination.k8s.io/v1 v1}] + Mar 7 04:07:25.165: INFO: coordination.k8s.io/v1 matches coordination.k8s.io/v1 + Mar 7 04:07:25.165: INFO: Checking APIGroup: node.k8s.io + Mar 7 04:07:25.165: INFO: PreferredVersion.GroupVersion: node.k8s.io/v1 + Mar 7 04:07:25.165: INFO: Versions found [{node.k8s.io/v1 v1}] + Mar 7 04:07:25.165: INFO: node.k8s.io/v1 matches node.k8s.io/v1 + Mar 7 04:07:25.165: INFO: Checking APIGroup: discovery.k8s.io + Mar 7 04:07:25.166: INFO: PreferredVersion.GroupVersion: discovery.k8s.io/v1 + Mar 7 04:07:25.166: INFO: Versions found [{discovery.k8s.io/v1 v1}] + Mar 7 04:07:25.166: INFO: discovery.k8s.io/v1 matches discovery.k8s.io/v1 + Mar 7 04:07:25.166: INFO: Checking APIGroup: flowcontrol.apiserver.k8s.io + Mar 7 04:07:25.166: INFO: PreferredVersion.GroupVersion: flowcontrol.apiserver.k8s.io/v1beta2 + Mar 7 04:07:25.166: INFO: Versions found [{flowcontrol.apiserver.k8s.io/v1beta2 v1beta2} {flowcontrol.apiserver.k8s.io/v1beta1 v1beta1}] + Mar 7 04:07:25.166: INFO: flowcontrol.apiserver.k8s.io/v1beta2 matches flowcontrol.apiserver.k8s.io/v1beta2 + Mar 7 04:07:25.166: INFO: Checking APIGroup: crd.projectcalico.org + Mar 7 04:07:25.167: INFO: PreferredVersion.GroupVersion: crd.projectcalico.org/v1 + Mar 7 04:07:25.167: INFO: Versions found 
[{crd.projectcalico.org/v1 v1}] + Mar 7 04:07:25.167: INFO: crd.projectcalico.org/v1 matches crd.projectcalico.org/v1 + Mar 7 04:07:25.167: INFO: Checking APIGroup: dex.coreos.com + Mar 7 04:07:25.167: INFO: PreferredVersion.GroupVersion: dex.coreos.com/v1 + Mar 7 04:07:25.167: INFO: Versions found [{dex.coreos.com/v1 v1}] + Mar 7 04:07:25.167: INFO: dex.coreos.com/v1 matches dex.coreos.com/v1 + Mar 7 04:07:25.167: INFO: Checking APIGroup: monitoring.coreos.com + Mar 7 04:07:25.168: INFO: PreferredVersion.GroupVersion: monitoring.coreos.com/v1 + Mar 7 04:07:25.168: INFO: Versions found [{monitoring.coreos.com/v1 v1} {monitoring.coreos.com/v1alpha1 v1alpha1}] + Mar 7 04:07:25.168: INFO: monitoring.coreos.com/v1 matches monitoring.coreos.com/v1 + Mar 7 04:07:25.168: INFO: Checking APIGroup: metalk8s.scality.com + Mar 7 04:07:25.168: INFO: PreferredVersion.GroupVersion: metalk8s.scality.com/v1alpha1 + Mar 7 04:07:25.168: INFO: Versions found [{metalk8s.scality.com/v1alpha1 v1alpha1}] + Mar 7 04:07:25.168: INFO: metalk8s.scality.com/v1alpha1 matches metalk8s.scality.com/v1alpha1 + Mar 7 04:07:25.168: INFO: Checking APIGroup: storage.metalk8s.scality.com + Mar 7 04:07:25.169: INFO: PreferredVersion.GroupVersion: storage.metalk8s.scality.com/v1alpha1 + Mar 7 04:07:25.169: INFO: Versions found [{storage.metalk8s.scality.com/v1alpha1 v1alpha1}] + Mar 7 04:07:25.169: INFO: storage.metalk8s.scality.com/v1alpha1 matches storage.metalk8s.scality.com/v1alpha1 + Mar 7 04:07:25.169: INFO: Checking APIGroup: custom.metrics.k8s.io + Mar 7 04:07:25.169: INFO: PreferredVersion.GroupVersion: custom.metrics.k8s.io/v1beta1 + Mar 7 04:07:25.169: INFO: Versions found [{custom.metrics.k8s.io/v1beta1 v1beta1}] + Mar 7 04:07:25.169: INFO: custom.metrics.k8s.io/v1beta1 matches custom.metrics.k8s.io/v1beta1 + Mar 7 04:07:25.169: INFO: Checking APIGroup: metrics.k8s.io + Mar 7 04:07:25.170: INFO: PreferredVersion.GroupVersion: metrics.k8s.io/v1beta1 + Mar 7 04:07:25.170: INFO: Versions found [{metrics.k8s.io/v1beta1 v1beta1}] + Mar 7 04:07:25.170: INFO: metrics.k8s.io/v1beta1 matches metrics.k8s.io/v1beta1 + [AfterEach] [sig-api-machinery] Discovery + test/e2e/framework/framework.go:187 + Mar 7 04:07:25.170: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + STEP: Destroying namespace "discovery-7910" for this suite. 
03/07/23 04:07:25.173 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSSSSSSSS +------------------------------ +[sig-node] Pods Extended Pods Set QOS Class + should be set on Pods with matching resource requests and limits for memory and cpu [Conformance] + test/e2e/node/pods.go:161 +[BeforeEach] [sig-node] Pods Extended + test/e2e/framework/framework.go:186 +STEP: Creating a kubernetes client 03/07/23 04:07:25.179 +Mar 7 04:07:25.179: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 +STEP: Building a namespace api object, basename pods 03/07/23 04:07:25.18 +STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 04:07:25.196 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 04:07:25.198 +[BeforeEach] Pods Set QOS Class + test/e2e/node/pods.go:152 +[It] should be set on Pods with matching resource requests and limits for memory and cpu [Conformance] + test/e2e/node/pods.go:161 +STEP: creating the pod 03/07/23 04:07:25.199 +STEP: submitting the pod to kubernetes 03/07/23 04:07:25.199 +STEP: verifying QOS class is set on the pod 03/07/23 04:07:25.206 +[AfterEach] [sig-node] Pods Extended + test/e2e/framework/framework.go:187 +Mar 7 04:07:25.208: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "pods-943" for this suite. 03/07/23 04:07:25.212 +{"msg":"PASSED [sig-node] Pods Extended Pods Set QOS Class should be set on Pods with matching resource requests and limits for memory and cpu [Conformance]","completed":358,"skipped":6642,"failed":0} +------------------------------ +• [0.040 seconds] +[sig-node] Pods Extended +test/e2e/node/framework.go:23 + Pods Set QOS Class + test/e2e/node/pods.go:150 + should be set on Pods with matching resource requests and limits for memory and cpu [Conformance] + test/e2e/node/pods.go:161 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-node] Pods Extended + test/e2e/framework/framework.go:186 + STEP: Creating a kubernetes client 03/07/23 04:07:25.179 + Mar 7 04:07:25.179: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 + STEP: Building a namespace api object, basename pods 03/07/23 04:07:25.18 + STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 04:07:25.196 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 04:07:25.198 + [BeforeEach] Pods Set QOS Class + test/e2e/node/pods.go:152 + [It] should be set on Pods with matching resource requests and limits for memory and cpu [Conformance] + test/e2e/node/pods.go:161 + STEP: creating the pod 03/07/23 04:07:25.199 + STEP: submitting the pod to kubernetes 03/07/23 04:07:25.199 + STEP: verifying QOS class is set on the pod 03/07/23 04:07:25.206 + [AfterEach] [sig-node] Pods Extended + test/e2e/framework/framework.go:187 + Mar 7 04:07:25.208: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + STEP: Destroying namespace "pods-943" for this suite. 
03/07/23 04:07:25.212 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-apps] ReplicaSet + should list and delete a collection of ReplicaSets [Conformance] + test/e2e/apps/replica_set.go:165 +[BeforeEach] [sig-apps] ReplicaSet + test/e2e/framework/framework.go:186 +STEP: Creating a kubernetes client 03/07/23 04:07:25.221 +Mar 7 04:07:25.221: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 +STEP: Building a namespace api object, basename replicaset 03/07/23 04:07:25.222 +STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 04:07:25.237 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 04:07:25.239 +[It] should list and delete a collection of ReplicaSets [Conformance] + test/e2e/apps/replica_set.go:165 +STEP: Create a ReplicaSet 03/07/23 04:07:25.241 +STEP: Verify that the required pods have come up 03/07/23 04:07:25.245 +Mar 7 04:07:25.249: INFO: Pod name sample-pod: Found 0 pods out of 3 +Mar 7 04:07:30.254: INFO: Pod name sample-pod: Found 3 pods out of 3 +STEP: ensuring each pod is running 03/07/23 04:07:30.254 +Mar 7 04:07:30.256: INFO: Replica Status: {Replicas:3 FullyLabeledReplicas:3 ReadyReplicas:3 AvailableReplicas:3 ObservedGeneration:1 Conditions:[]} +STEP: Listing all ReplicaSets 03/07/23 04:07:30.256 +STEP: DeleteCollection of the ReplicaSets 03/07/23 04:07:30.261 +STEP: After DeleteCollection verify that ReplicaSets have been deleted 03/07/23 04:07:30.267 +[AfterEach] [sig-apps] ReplicaSet + test/e2e/framework/framework.go:187 +Mar 7 04:07:30.273: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "replicaset-9385" for this suite. 03/07/23 04:07:30.279 +{"msg":"PASSED [sig-apps] ReplicaSet should list and delete a collection of ReplicaSets [Conformance]","completed":359,"skipped":6679,"failed":0} +------------------------------ +• [SLOW TEST] [5.068 seconds] +[sig-apps] ReplicaSet +test/e2e/apps/framework.go:23 + should list and delete a collection of ReplicaSets [Conformance] + test/e2e/apps/replica_set.go:165 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-apps] ReplicaSet + test/e2e/framework/framework.go:186 + STEP: Creating a kubernetes client 03/07/23 04:07:25.221 + Mar 7 04:07:25.221: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 + STEP: Building a namespace api object, basename replicaset 03/07/23 04:07:25.222 + STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 04:07:25.237 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 04:07:25.239 + [It] should list and delete a collection of ReplicaSets [Conformance] + test/e2e/apps/replica_set.go:165 + STEP: Create a ReplicaSet 03/07/23 04:07:25.241 + STEP: Verify that the required pods have come up 03/07/23 04:07:25.245 + Mar 7 04:07:25.249: INFO: Pod name sample-pod: Found 0 pods out of 3 + Mar 7 04:07:30.254: INFO: Pod name sample-pod: Found 3 pods out of 3 + STEP: ensuring each pod is running 03/07/23 04:07:30.254 + Mar 7 04:07:30.256: INFO: Replica Status: {Replicas:3 FullyLabeledReplicas:3 ReadyReplicas:3 AvailableReplicas:3 ObservedGeneration:1 Conditions:[]} + STEP: Listing all ReplicaSets 03/07/23 04:07:30.256 + STEP: DeleteCollection of the ReplicaSets 03/07/23 04:07:30.261 + STEP: After DeleteCollection verify that ReplicaSets have been deleted 03/07/23 04:07:30.267 + [AfterEach] [sig-apps] ReplicaSet + test/e2e/framework/framework.go:187 + Mar 7 
04:07:30.273: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + STEP: Destroying namespace "replicaset-9385" for this suite. 03/07/23 04:07:30.279 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSS +------------------------------ +[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + should mutate custom resource [Conformance] + test/e2e/apimachinery/webhook.go:290 +[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/framework/framework.go:186 +STEP: Creating a kubernetes client 03/07/23 04:07:30.29 +Mar 7 04:07:30.290: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 +STEP: Building a namespace api object, basename webhook 03/07/23 04:07:30.29 +STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 04:07:30.332 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 04:07:30.337 +[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/apimachinery/webhook.go:89 +STEP: Setting up server cert 03/07/23 04:07:30.36 +STEP: Create role binding to let webhook read extension-apiserver-authentication 03/07/23 04:07:30.608 +STEP: Deploying the webhook pod 03/07/23 04:07:30.615 +STEP: Wait for the deployment to be ready 03/07/23 04:07:30.631 +Mar 7 04:07:30.643: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set +STEP: Deploying the webhook service 03/07/23 04:07:32.653 +STEP: Verifying the service has paired with the endpoint 03/07/23 04:07:32.667 +Mar 7 04:07:33.668: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 +[It] should mutate custom resource [Conformance] + test/e2e/apimachinery/webhook.go:290 +Mar 7 04:07:33.674: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 +STEP: Registering the mutating webhook for custom resource e2e-test-webhook-3790-crds.webhook.example.com via the AdmissionRegistration API 03/07/23 04:07:34.185 +STEP: Creating a custom resource that should be mutated by the webhook 03/07/23 04:07:34.202 +[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/framework/framework.go:187 +Mar 7 04:07:36.799: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "webhook-8858" for this suite. 03/07/23 04:07:36.802 +STEP: Destroying namespace "webhook-8858-markers" for this suite. 
03/07/23 04:07:36.809 +[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/apimachinery/webhook.go:104 +{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]","completed":360,"skipped":6689,"failed":0} +------------------------------ +• [SLOW TEST] [6.599 seconds] +[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] +test/e2e/apimachinery/framework.go:23 + should mutate custom resource [Conformance] + test/e2e/apimachinery/webhook.go:290 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/framework/framework.go:186 + STEP: Creating a kubernetes client 03/07/23 04:07:30.29 + Mar 7 04:07:30.290: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 + STEP: Building a namespace api object, basename webhook 03/07/23 04:07:30.29 + STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 04:07:30.332 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 04:07:30.337 + [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/apimachinery/webhook.go:89 + STEP: Setting up server cert 03/07/23 04:07:30.36 + STEP: Create role binding to let webhook read extension-apiserver-authentication 03/07/23 04:07:30.608 + STEP: Deploying the webhook pod 03/07/23 04:07:30.615 + STEP: Wait for the deployment to be ready 03/07/23 04:07:30.631 + Mar 7 04:07:30.643: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set + STEP: Deploying the webhook service 03/07/23 04:07:32.653 + STEP: Verifying the service has paired with the endpoint 03/07/23 04:07:32.667 + Mar 7 04:07:33.668: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 + [It] should mutate custom resource [Conformance] + test/e2e/apimachinery/webhook.go:290 + Mar 7 04:07:33.674: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 + STEP: Registering the mutating webhook for custom resource e2e-test-webhook-3790-crds.webhook.example.com via the AdmissionRegistration API 03/07/23 04:07:34.185 + STEP: Creating a custom resource that should be mutated by the webhook 03/07/23 04:07:34.202 + [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/framework/framework.go:187 + Mar 7 04:07:36.799: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + STEP: Destroying namespace "webhook-8858" for this suite. 03/07/23 04:07:36.802 + STEP: Destroying namespace "webhook-8858-markers" for this suite. 
03/07/23 04:07:36.809 + [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/apimachinery/webhook.go:104 + << End Captured GinkgoWriter Output +------------------------------ +SS +------------------------------ +[sig-node] Secrets + should be consumable from pods in env vars [NodeConformance] [Conformance] + test/e2e/common/node/secrets.go:45 +[BeforeEach] [sig-node] Secrets + test/e2e/framework/framework.go:186 +STEP: Creating a kubernetes client 03/07/23 04:07:36.888 +Mar 7 04:07:36.889: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 +STEP: Building a namespace api object, basename secrets 03/07/23 04:07:36.889 +STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 04:07:36.92 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 04:07:36.929 +[It] should be consumable from pods in env vars [NodeConformance] [Conformance] + test/e2e/common/node/secrets.go:45 +STEP: Creating secret with name secret-test-68dde0e9-da6d-4189-80ca-81f9b27f0953 03/07/23 04:07:36.932 +STEP: Creating a pod to test consume secrets 03/07/23 04:07:36.955 +Mar 7 04:07:36.981: INFO: Waiting up to 5m0s for pod "pod-secrets-937a26fa-e6ff-4058-b3ce-c8753ebfc871" in namespace "secrets-2390" to be "Succeeded or Failed" +Mar 7 04:07:36.986: INFO: Pod "pod-secrets-937a26fa-e6ff-4058-b3ce-c8753ebfc871": Phase="Pending", Reason="", readiness=false. Elapsed: 4.87257ms +Mar 7 04:07:38.989: INFO: Pod "pod-secrets-937a26fa-e6ff-4058-b3ce-c8753ebfc871": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007735742s +Mar 7 04:07:40.989: INFO: Pod "pod-secrets-937a26fa-e6ff-4058-b3ce-c8753ebfc871": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.00860577s +STEP: Saw pod success 03/07/23 04:07:40.989 +Mar 7 04:07:40.990: INFO: Pod "pod-secrets-937a26fa-e6ff-4058-b3ce-c8753ebfc871" satisfied condition "Succeeded or Failed" +Mar 7 04:07:40.991: INFO: Trying to get logs from node node-2 pod pod-secrets-937a26fa-e6ff-4058-b3ce-c8753ebfc871 container secret-env-test: +STEP: delete the pod 03/07/23 04:07:40.997 +Mar 7 04:07:41.007: INFO: Waiting for pod pod-secrets-937a26fa-e6ff-4058-b3ce-c8753ebfc871 to disappear +Mar 7 04:07:41.009: INFO: Pod pod-secrets-937a26fa-e6ff-4058-b3ce-c8753ebfc871 no longer exists +[AfterEach] [sig-node] Secrets + test/e2e/framework/framework.go:187 +Mar 7 04:07:41.009: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "secrets-2390" for this suite. 
03/07/23 04:07:41.012 +{"msg":"PASSED [sig-node] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance]","completed":361,"skipped":6691,"failed":0} +------------------------------ +• [4.127 seconds] +[sig-node] Secrets +test/e2e/common/node/framework.go:23 + should be consumable from pods in env vars [NodeConformance] [Conformance] + test/e2e/common/node/secrets.go:45 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-node] Secrets + test/e2e/framework/framework.go:186 + STEP: Creating a kubernetes client 03/07/23 04:07:36.888 + Mar 7 04:07:36.889: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 + STEP: Building a namespace api object, basename secrets 03/07/23 04:07:36.889 + STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 04:07:36.92 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 04:07:36.929 + [It] should be consumable from pods in env vars [NodeConformance] [Conformance] + test/e2e/common/node/secrets.go:45 + STEP: Creating secret with name secret-test-68dde0e9-da6d-4189-80ca-81f9b27f0953 03/07/23 04:07:36.932 + STEP: Creating a pod to test consume secrets 03/07/23 04:07:36.955 + Mar 7 04:07:36.981: INFO: Waiting up to 5m0s for pod "pod-secrets-937a26fa-e6ff-4058-b3ce-c8753ebfc871" in namespace "secrets-2390" to be "Succeeded or Failed" + Mar 7 04:07:36.986: INFO: Pod "pod-secrets-937a26fa-e6ff-4058-b3ce-c8753ebfc871": Phase="Pending", Reason="", readiness=false. Elapsed: 4.87257ms + Mar 7 04:07:38.989: INFO: Pod "pod-secrets-937a26fa-e6ff-4058-b3ce-c8753ebfc871": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007735742s + Mar 7 04:07:40.989: INFO: Pod "pod-secrets-937a26fa-e6ff-4058-b3ce-c8753ebfc871": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.00860577s + STEP: Saw pod success 03/07/23 04:07:40.989 + Mar 7 04:07:40.990: INFO: Pod "pod-secrets-937a26fa-e6ff-4058-b3ce-c8753ebfc871" satisfied condition "Succeeded or Failed" + Mar 7 04:07:40.991: INFO: Trying to get logs from node node-2 pod pod-secrets-937a26fa-e6ff-4058-b3ce-c8753ebfc871 container secret-env-test: + STEP: delete the pod 03/07/23 04:07:40.997 + Mar 7 04:07:41.007: INFO: Waiting for pod pod-secrets-937a26fa-e6ff-4058-b3ce-c8753ebfc871 to disappear + Mar 7 04:07:41.009: INFO: Pod pod-secrets-937a26fa-e6ff-4058-b3ce-c8753ebfc871 no longer exists + [AfterEach] [sig-node] Secrets + test/e2e/framework/framework.go:187 + Mar 7 04:07:41.009: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + STEP: Destroying namespace "secrets-2390" for this suite. 
03/07/23 04:07:41.012 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSS +------------------------------ +[sig-node] Container Runtime blackbox test when starting a container that exits + should run with the expected status [NodeConformance] [Conformance] + test/e2e/common/node/runtime.go:51 +[BeforeEach] [sig-node] Container Runtime + test/e2e/framework/framework.go:186 +STEP: Creating a kubernetes client 03/07/23 04:07:41.016 +Mar 7 04:07:41.016: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 +STEP: Building a namespace api object, basename container-runtime 03/07/23 04:07:41.017 +STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 04:07:41.038 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 04:07:41.04 +[It] should run with the expected status [NodeConformance] [Conformance] + test/e2e/common/node/runtime.go:51 +STEP: Container 'terminate-cmd-rpa': should get the expected 'RestartCount' 03/07/23 04:07:41.049 +STEP: Container 'terminate-cmd-rpa': should get the expected 'Phase' 03/07/23 04:07:57.126 +STEP: Container 'terminate-cmd-rpa': should get the expected 'Ready' condition 03/07/23 04:07:57.129 +STEP: Container 'terminate-cmd-rpa': should get the expected 'State' 03/07/23 04:07:57.133 +STEP: Container 'terminate-cmd-rpa': should be possible to delete [NodeConformance] 03/07/23 04:07:57.133 +STEP: Container 'terminate-cmd-rpof': should get the expected 'RestartCount' 03/07/23 04:07:57.153 +STEP: Container 'terminate-cmd-rpof': should get the expected 'Phase' 03/07/23 04:08:00.168 +STEP: Container 'terminate-cmd-rpof': should get the expected 'Ready' condition 03/07/23 04:08:02.177 +STEP: Container 'terminate-cmd-rpof': should get the expected 'State' 03/07/23 04:08:02.181 +STEP: Container 'terminate-cmd-rpof': should be possible to delete [NodeConformance] 03/07/23 04:08:02.181 +STEP: Container 'terminate-cmd-rpn': should get the expected 'RestartCount' 03/07/23 04:08:02.195 +STEP: Container 'terminate-cmd-rpn': should get the expected 'Phase' 03/07/23 04:08:03.203 +STEP: Container 'terminate-cmd-rpn': should get the expected 'Ready' condition 03/07/23 04:08:06.216 +STEP: Container 'terminate-cmd-rpn': should get the expected 'State' 03/07/23 04:08:06.22 +STEP: Container 'terminate-cmd-rpn': should be possible to delete [NodeConformance] 03/07/23 04:08:06.22 +[AfterEach] [sig-node] Container Runtime + test/e2e/framework/framework.go:187 +Mar 7 04:08:06.236: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "container-runtime-5015" for this suite. 
03/07/23 04:08:06.239 +{"msg":"PASSED [sig-node] Container Runtime blackbox test when starting a container that exits should run with the expected status [NodeConformance] [Conformance]","completed":362,"skipped":6697,"failed":0} +------------------------------ +• [SLOW TEST] [25.228 seconds] +[sig-node] Container Runtime +test/e2e/common/node/framework.go:23 + blackbox test + test/e2e/common/node/runtime.go:43 + when starting a container that exits + test/e2e/common/node/runtime.go:44 + should run with the expected status [NodeConformance] [Conformance] + test/e2e/common/node/runtime.go:51 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-node] Container Runtime + test/e2e/framework/framework.go:186 + STEP: Creating a kubernetes client 03/07/23 04:07:41.016 + Mar 7 04:07:41.016: INFO: >>> kubeConfig: /tmp/kubeconfig-1093879902 + STEP: Building a namespace api object, basename container-runtime 03/07/23 04:07:41.017 + STEP: Waiting for a default service account to be provisioned in namespace 03/07/23 04:07:41.038 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/07/23 04:07:41.04 + [It] should run with the expected status [NodeConformance] [Conformance] + test/e2e/common/node/runtime.go:51 + STEP: Container 'terminate-cmd-rpa': should get the expected 'RestartCount' 03/07/23 04:07:41.049 + STEP: Container 'terminate-cmd-rpa': should get the expected 'Phase' 03/07/23 04:07:57.126 + STEP: Container 'terminate-cmd-rpa': should get the expected 'Ready' condition 03/07/23 04:07:57.129 + STEP: Container 'terminate-cmd-rpa': should get the expected 'State' 03/07/23 04:07:57.133 + STEP: Container 'terminate-cmd-rpa': should be possible to delete [NodeConformance] 03/07/23 04:07:57.133 + STEP: Container 'terminate-cmd-rpof': should get the expected 'RestartCount' 03/07/23 04:07:57.153 + STEP: Container 'terminate-cmd-rpof': should get the expected 'Phase' 03/07/23 04:08:00.168 + STEP: Container 'terminate-cmd-rpof': should get the expected 'Ready' condition 03/07/23 04:08:02.177 + STEP: Container 'terminate-cmd-rpof': should get the expected 'State' 03/07/23 04:08:02.181 + STEP: Container 'terminate-cmd-rpof': should be possible to delete [NodeConformance] 03/07/23 04:08:02.181 + STEP: Container 'terminate-cmd-rpn': should get the expected 'RestartCount' 03/07/23 04:08:02.195 + STEP: Container 'terminate-cmd-rpn': should get the expected 'Phase' 03/07/23 04:08:03.203 + STEP: Container 'terminate-cmd-rpn': should get the expected 'Ready' condition 03/07/23 04:08:06.216 + STEP: Container 'terminate-cmd-rpn': should get the expected 'State' 03/07/23 04:08:06.22 + STEP: Container 'terminate-cmd-rpn': should be possible to delete [NodeConformance] 03/07/23 04:08:06.22 + [AfterEach] [sig-node] Container Runtime + test/e2e/framework/framework.go:187 + Mar 7 04:08:06.236: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + STEP: Destroying namespace "container-runtime-5015" for this suite. 
03/07/23 04:08:06.239 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSS +------------------------------ +[SynchronizedAfterSuite] +test/e2e/e2e.go:87 +[SynchronizedAfterSuite] TOP-LEVEL + test/e2e/e2e.go:87 +{"msg":"Test Suite completed","completed":362,"skipped":6704,"failed":0} +Mar 7 04:08:06.245: INFO: Running AfterSuite actions on all nodes +Mar 7 04:08:06.245: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func20.2 +Mar 7 04:08:06.245: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func10.2 +Mar 7 04:08:06.245: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func9.2 +Mar 7 04:08:06.245: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func17.3 +Mar 7 04:08:06.245: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func9.2 +Mar 7 04:08:06.245: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func4.2 +Mar 7 04:08:06.245: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func1.3 +[SynchronizedAfterSuite] TOP-LEVEL + test/e2e/e2e.go:87 +Mar 7 04:08:06.245: INFO: Running AfterSuite actions on node 1 +Mar 7 04:08:06.245: INFO: Skipping dumping logs from cluster +------------------------------ +[SynchronizedAfterSuite] PASSED [0.000 seconds] +[SynchronizedAfterSuite] +test/e2e/e2e.go:87 + + Begin Captured GinkgoWriter Output >> + [SynchronizedAfterSuite] TOP-LEVEL + test/e2e/e2e.go:87 + Mar 7 04:08:06.245: INFO: Running AfterSuite actions on all nodes + Mar 7 04:08:06.245: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func20.2 + Mar 7 04:08:06.245: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func10.2 + Mar 7 04:08:06.245: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func9.2 + Mar 7 04:08:06.245: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func17.3 + Mar 7 04:08:06.245: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func9.2 + Mar 7 04:08:06.245: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func4.2 + Mar 7 04:08:06.245: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func1.3 + [SynchronizedAfterSuite] TOP-LEVEL + test/e2e/e2e.go:87 + Mar 7 04:08:06.245: INFO: Running AfterSuite actions on node 1 + Mar 7 04:08:06.245: INFO: Skipping dumping logs from cluster + << End Captured GinkgoWriter Output +------------------------------ +[ReportAfterSuite] Kubernetes e2e suite report +test/e2e/e2e_test.go:146 +[ReportAfterSuite] TOP-LEVEL + test/e2e/e2e_test.go:146 +------------------------------ +[ReportAfterSuite] PASSED [0.000 seconds] +[ReportAfterSuite] Kubernetes e2e suite report +test/e2e/e2e_test.go:146 + + Begin Captured GinkgoWriter Output >> + [ReportAfterSuite] TOP-LEVEL + test/e2e/e2e_test.go:146 + << End Captured GinkgoWriter Output +------------------------------ +[ReportAfterSuite] Kubernetes e2e JUnit report +test/e2e/framework/test_context.go:559 +[ReportAfterSuite] TOP-LEVEL + test/e2e/framework/test_context.go:559 +------------------------------ +[ReportAfterSuite] PASSED [0.066 seconds] +[ReportAfterSuite] Kubernetes e2e JUnit report +test/e2e/framework/test_context.go:559 + + Begin Captured GinkgoWriter Output >> + [ReportAfterSuite] TOP-LEVEL + test/e2e/framework/test_context.go:559 + << End Captured GinkgoWriter Output +------------------------------ + +Ran 362 
of 7066 Specs in 6167.207 seconds
+SUCCESS! -- 362 Passed | 0 Failed | 0 Pending | 6704 Skipped
+PASS
+
+Ginkgo ran 1 suite in 1h42m47.454778187s
+Test Suite Passed
+You're using deprecated Ginkgo functionality:
+=============================================
+  --noColor is deprecated, use --no-color instead
+  Learn more at: https://onsi.github.io/ginkgo/MIGRATING_TO_V2#changed-command-line-flags
+
+To silence deprecations that can be silenced set the following environment variable:
+  ACK_GINKGO_DEPRECATIONS=2.1.6
+
diff --git a/v1.25/MetalK8s/junit_01.xml b/v1.25/MetalK8s/junit_01.xml
new file mode 100644
index 0000000000..6c3c64e967
--- /dev/null
+++ b/v1.25/MetalK8s/junit_01.xml
@@ -0,0 +1,20502 @@
+[20,502-line JUnit XML report for the conformance run; the XML markup was stripped during extraction and the report body is not recoverable here]
\ No newline at end of file