
Docker driver causes "ssh: handshake failed" - debian 10 #10249

Closed
h4n0sh1 opened this issue Jan 24, 2021 · 19 comments
Labels
  • co/docker-driver: Issues related to kubernetes in container
  • kind/support: Categorizes issue or PR as a support question.
  • lifecycle/rotten: Denotes an issue or PR that has aged beyond stale and will be auto-closed.
  • long-term-support: Long-term support issues that can't be fixed in code

Comments

h4n0sh1 commented Jan 24, 2021

Greetings,

Following the previous issue, I am now trying to run a minikube cluster with the docker driver (instead of VirtualBox) and the CRI-O runtime.

It's a fresh Docker install (cf. docker info in the details below) running inside a Debian VM (Kali Rolling 2020), no rootless mode, all parameters at their defaults.

Steps to reproduce the issue:

1. minikube start --vm-driver=docker --network-plugin=cni --enable-default-cni --container-runtime=crio --bootstrapper=kubeadm --alsologtostderr
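
Note: as the log below warns, --enable-default-cni is deprecated in favor of --cni, and --vm-driver has been superseded by --driver. An equivalent invocation with the newer flags (an untested sketch of the same configuration) would be:

minikube start --driver=docker --container-runtime=crio --cni=bridge --bootstrapper=kubeadm --alsologtostderr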

Full output of failed command:

I0124 19:54:04.815214   53391 out.go:185] Setting OutFile to fd 1 ...
I0124 19:54:04.815716   53391 out.go:237] isatty.IsTerminal(1) = true
I0124 19:54:04.815871   53391 out.go:198] Setting ErrFile to fd 2...
I0124 19:54:04.815912   53391 out.go:237] isatty.IsTerminal(2) = true
I0124 19:54:04.816170   53391 root.go:279] Updating PATH: /home/kcold/.minikube/bin
W0124 19:54:04.816449   53391 root.go:254] Error reading config file at /home/kcold/.minikube/config/config.json: open /home/kcold/.minikube/config/config.json: no such file or directory
I0124 19:54:04.817141   53391 out.go:192] Setting JSON to false
I0124 19:54:04.876054   53391 start.go:103] hostinfo: {"hostname":"dev","uptime":18826,"bootTime":1611495618,"procs":279,"os":"linux","platform":"debian","platformFamily":"debian","platformVersion":"kali-rolling","kernelVersion":"5.9.0-kali4-amd64","virtualizationSystem":"kvm","virtualizationRole":"host","hostid":"557ed020-b749-4d07-bc5f-664c6d9e06cd"}
I0124 19:54:04.876688   53391 start.go:113] virtualization: kvm host
I0124 19:54:04.879718   53391 out.go:110] 😄  minikube v1.15.1 on Debian kali-rolling
😄  minikube v1.15.1 on Debian kali-rolling
I0124 19:54:04.880038   53391 notify.go:126] Checking for updates...
I0124 19:54:04.881484   53391 driver.go:302] Setting default libvirt URI to qemu:///system
I0124 19:54:04.969386   53391 docker.go:117] docker version: linux-20.10.2
I0124 19:54:04.969746   53391 cli_runner.go:110] Run: docker system info --format "{{json .}}"
I0124 19:54:05.281138   53391 lock.go:36] WriteFile acquiring /home/kcold/.minikube/last_update_check: {Name:mkcca8302b5e25ec95a38225997ff2756319dd5f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0124 19:54:05.323054   53391 out.go:110] 🎉  minikube 1.17.0 is available! Download it: https://github.com/kubernetes/minikube/releases/tag/v1.17.0
🎉  minikube 1.17.0 is available! Download it: https://github.com/kubernetes/minikube/releases/tag/v1.17.0
I0124 19:54:05.323684   53391 out.go:110] 💡  To disable this notice, run: 'minikube config set WantUpdateNotification false'

💡  To disable this notice, run: 'minikube config set WantUpdateNotification false'

I0124 19:54:05.422502   53391 info.go:253] docker info: {ID:JSBA:QUMU:MUMK:FC4H:7N7W:2ILV:WEUH:FLBK:RV2U:SJF3:CEFC:I242 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:22 OomKillDisable:true NGoroutines:34 SystemTime:2021-01-24 19:54:05.026155571 +0100 CET LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.9.0-kali4-amd64 OperatingSystem:Kali GNU/Linux Rolling OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:4100112384 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:dev Labels:[] ExperimentalBuild:false ServerVersion:20.10.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:269548fa27e0089a8b8278fc4fc781d7f65a939b Expected:269548fa27e0089a8b8278fc4fc781d7f65a939b} RuncCommit:{ID:ff819c7e9184c13b7c2607fe6c30ae19403a7aff Expected:ff819c7e9184c13b7c2607fe6c30ae19403a7aff} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No blkio weight support WARNING: No blkio weight_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.5.1-docker]] Warnings:<nil>}}
I0124 19:54:05.422693   53391 docker.go:147] overlay module found
I0124 19:54:05.423710   53391 out.go:110] ✨  Using the docker driver based on user configuration
✨  Using the docker driver based on user configuration
I0124 19:54:05.423813   53391 start.go:272] selected driver: docker
I0124 19:54:05.423830   53391 start.go:686] validating driver "docker" against <nil>
I0124 19:54:05.423874   53391 start.go:697] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Fix: Doc:}
I0124 19:54:05.423997   53391 cli_runner.go:110] Run: docker system info --format "{{json .}}"
I0124 19:54:05.872062   53391 info.go:253] docker info: {ID:JSBA:QUMU:MUMK:FC4H:7N7W:2ILV:WEUH:FLBK:RV2U:SJF3:CEFC:I242 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:22 OomKillDisable:true NGoroutines:34 SystemTime:2021-01-24 19:54:05.47924293 +0100 CET LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.9.0-kali4-amd64 OperatingSystem:Kali GNU/Linux Rolling OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:4100112384 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:dev Labels:[] ExperimentalBuild:false ServerVersion:20.10.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:269548fa27e0089a8b8278fc4fc781d7f65a939b Expected:269548fa27e0089a8b8278fc4fc781d7f65a939b} RuncCommit:{ID:ff819c7e9184c13b7c2607fe6c30ae19403a7aff Expected:ff819c7e9184c13b7c2607fe6c30ae19403a7aff} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No blkio weight support WARNING: No blkio weight_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.5.1-docker]] Warnings:<nil>}}
I0124 19:54:05.872533   53391 start_flags.go:233] no existing cluster config was found, will generate one from the flags 
I0124 19:54:05.873019   53391 start_flags.go:251] Using suggested 2200MB memory alloc based on sys=3910MB, container=3910MB
E0124 19:54:05.873210   53391 start_flags.go:285] Found deprecated --enable-default-cni flag, setting --cni=bridge
W0124 19:54:05.873552   53391 out.go:146] ❗  With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
❗  With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
I0124 19:54:05.873913   53391 start_flags.go:641] Wait components to verify : map[apiserver:true system_pods:true]
I0124 19:54:05.874121   53391 cni.go:74] Creating CNI manager for "bridge"
I0124 19:54:05.874355   53391 start_flags.go:359] Found "bridge CNI" CNI - setting NetworkPlugin=cni
I0124 19:54:05.874605   53391 start_flags.go:364] config:
{Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.14@sha256:2bd97b482faf5b6a403ac39dd5e7c6fe2006425c6663a12f94f64f5f81a7787e Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio KubernetesConfig:{KubernetesVersion:v1.19.4 ClusterName:minikube Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[]}
I0124 19:54:05.877949   53391 out.go:110] 👍  Starting control plane node minikube in cluster minikube
👍  Starting control plane node minikube in cluster minikube
I0124 19:54:05.934546   53391 image.go:92] Found gcr.io/k8s-minikube/kicbase:v0.0.14@sha256:2bd97b482faf5b6a403ac39dd5e7c6fe2006425c6663a12f94f64f5f81a7787e in local docker daemon, skipping pull
I0124 19:54:05.934741   53391 cache.go:116] gcr.io/k8s-minikube/kicbase:v0.0.14@sha256:2bd97b482faf5b6a403ac39dd5e7c6fe2006425c6663a12f94f64f5f81a7787e exists in daemon, skipping pull
I0124 19:54:05.934771   53391 preload.go:97] Checking if preload exists for k8s version v1.19.4 and runtime crio
I0124 19:54:06.115344   53391 preload.go:122] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/preloaded-images-k8s-v6-v1.19.4-cri-o-overlay-amd64.tar.lz4
I0124 19:54:06.115610   53391 cache.go:54] Caching tarball of preloaded images
I0124 19:54:06.115766   53391 preload.go:97] Checking if preload exists for k8s version v1.19.4 and runtime crio
I0124 19:54:06.370171   53391 preload.go:122] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/preloaded-images-k8s-v6-v1.19.4-cri-o-overlay-amd64.tar.lz4
I0124 19:54:06.372081   53391 out.go:110] 💾  Downloading Kubernetes v1.19.4 preload ...
💾  Downloading Kubernetes v1.19.4 preload ...
I0124 19:54:06.372360   53391 download.go:78] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/preloaded-images-k8s-v6-v1.19.4-cri-o-overlay-amd64.tar.lz4 -> /home/kcold/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v6-v1.19.4-cri-o-overlay-amd64.tar.lz4
    > preloaded-images-k8s-v6-v1.19.4-cri-o-overlay-amd64.tar.lz4: 551.20 MiB /
I0124 20:03:16.690401   53391 preload.go:160] saving checksum for preloaded-images-k8s-v6-v1.19.4-cri-o-overlay-amd64.tar.lz4 ...
I0124 20:03:17.166395   53391 preload.go:177] verifying checksumm of /home/kcold/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v6-v1.19.4-cri-o-overlay-amd64.tar.lz4 ...
I0124 20:03:20.834401   53391 cache.go:57] Finished verifying existence of preloaded tar for  v1.19.4 on crio
I0124 20:03:20.834913   53391 profile.go:150] Saving config to /home/kcold/.minikube/profiles/minikube/config.json ...
I0124 20:03:20.835007   53391 lock.go:36] WriteFile acquiring /home/kcold/.minikube/profiles/minikube/config.json: {Name:mk5a0a4e6c424ad3d5b87951132ece86a93e62c6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0124 20:03:20.835202   53391 cache.go:184] Successfully downloaded all kic artifacts
I0124 20:03:20.835296   53391 start.go:314] acquiring machines lock for minikube: {Name:mk00c3e1ba72e9c94da5ee6a77fe303dff121c14 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0124 20:03:20.835370   53391 start.go:318] acquired machines lock for "minikube" in 52.144µs
I0124 20:03:20.835401   53391 start.go:90] Provisioning new machine with config: &{Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.14@sha256:2bd97b482faf5b6a403ac39dd5e7c6fe2006425c6663a12f94f64f5f81a7787e Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio KubernetesConfig:{KubernetesVersion:v1.19.4 ClusterName:minikube Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.19.4 ControlPlane:true Worker:true}] Addons:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[]} &{Name: IP: Port:8443 KubernetesVersion:v1.19.4 ControlPlane:true Worker:true}
I0124 20:03:20.835468   53391 start.go:127] createHost starting for "" (driver="docker")
I0124 20:03:20.837090   53391 out.go:110] 🔥  Creating docker container (CPUs=2, Memory=2200MB) ...
🔥  Creating docker container (CPUs=2, Memory=2200MB) ...
I0124 20:03:20.837607   53391 start.go:164] libmachine.API.Create for "minikube" (driver="docker")
I0124 20:03:20.837720   53391 client.go:165] LocalClient.Create starting
I0124 20:03:20.837930   53391 main.go:119] libmachine: Creating CA: /home/kcold/.minikube/certs/ca.pem
I0124 20:03:21.065971   53391 main.go:119] libmachine: Creating client certificate: /home/kcold/.minikube/certs/cert.pem
I0124 20:03:21.274753   53391 cli_runner.go:110] Run: docker network inspect minikube --format "{{(index .IPAM.Config 0).Subnet}},{{(index .IPAM.Config 0).Gateway}},{{(index .Options "com.docker.network.driver.mtu")}}"
W0124 20:03:21.336997   53391 cli_runner.go:148] docker network inspect minikube --format "{{(index .IPAM.Config 0).Subnet}},{{(index .IPAM.Config 0).Gateway}},{{(index .Options "com.docker.network.driver.mtu")}}" returned with exit code 1
I0124 20:03:21.337329   53391 network_create.go:178] running [docker network inspect minikube] to gather additional debugging logs...
I0124 20:03:21.337489   53391 cli_runner.go:110] Run: docker network inspect minikube
W0124 20:03:21.399303   53391 cli_runner.go:148] docker network inspect minikube returned with exit code 1
I0124 20:03:21.399558   53391 network_create.go:181] error running [docker network inspect minikube]: docker network inspect minikube: exit status 1
stdout:
[]

stderr:
Error: No such network: minikube
I0124 20:03:21.399642   53391 network_create.go:183] output of [docker network inspect minikube]: -- stdout --
[]

-- /stdout --
** stderr ** 
Error: No such network: minikube

** /stderr **
I0124 20:03:21.399936   53391 cli_runner.go:110] Run: docker network inspect bridge --format "{{(index .IPAM.Config 0).Subnet}},{{(index .IPAM.Config 0).Gateway}},{{(index .Options "com.docker.network.driver.mtu")}}"
I0124 20:03:21.453245   53391 network_create.go:96] attempt to create network 192.168.49.0/24 with subnet: minikube and gateway 192.168.49.1 and MTU of 1500 ...
I0124 20:03:21.453625   53391 cli_runner.go:110] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc --label=created_by.minikube.sigs.k8s.io=true minikube -o com.docker.network.driver.mtu=1500
I0124 20:03:21.776102   53391 kic.go:93] calculated static IP "192.168.49.2" for the "minikube" container
I0124 20:03:21.776219   53391 cli_runner.go:110] Run: docker ps -a --format {{.Names}}
I0124 20:03:21.927269   53391 cli_runner.go:110] Run: docker volume create minikube --label name.minikube.sigs.k8s.io=minikube --label created_by.minikube.sigs.k8s.io=true
I0124 20:03:22.025031   53391 oci.go:102] Successfully created a docker volume minikube
I0124 20:03:22.025300   53391 cli_runner.go:110] Run: docker run --rm --entrypoint /usr/bin/test -v minikube:/var gcr.io/k8s-minikube/kicbase:v0.0.14@sha256:2bd97b482faf5b6a403ac39dd5e7c6fe2006425c6663a12f94f64f5f81a7787e -d /var/lib
I0124 20:03:25.479097   53391 cli_runner.go:154] Completed: docker run --rm --entrypoint /usr/bin/test -v minikube:/var gcr.io/k8s-minikube/kicbase:v0.0.14@sha256:2bd97b482faf5b6a403ac39dd5e7c6fe2006425c6663a12f94f64f5f81a7787e -d /var/lib: (3.453626323s)
I0124 20:03:25.479465   53391 oci.go:106] Successfully prepared a docker volume minikube
W0124 20:03:25.479694   53391 oci.go:153] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
I0124 20:03:25.479832   53391 preload.go:97] Checking if preload exists for k8s version v1.19.4 and runtime crio
I0124 20:03:25.479901   53391 preload.go:105] Found local preload: /home/kcold/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v6-v1.19.4-cri-o-overlay-amd64.tar.lz4
I0124 20:03:25.479918   53391 kic.go:148] Starting extracting preloaded images to volume ...
I0124 20:03:25.479991   53391 cli_runner.go:110] Run: docker info --format "'{{json .SecurityOptions}}'"
I0124 20:03:25.479999   53391 cli_runner.go:110] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/kcold/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v6-v1.19.4-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v minikube:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.14@sha256:2bd97b482faf5b6a403ac39dd5e7c6fe2006425c6663a12f94f64f5f81a7787e -I lz4 -xvf /preloaded.tar -C /extractDir
I0124 20:03:25.903026   53391 cli_runner.go:110] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname minikube --name minikube --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=minikube --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=minikube --network minikube --ip 192.168.49.2 --volume minikube:/var --security-opt apparmor=unconfined --memory=2200mb --memory-swap=2200mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 gcr.io/k8s-minikube/kicbase:v0.0.14@sha256:2bd97b482faf5b6a403ac39dd5e7c6fe2006425c6663a12f94f64f5f81a7787e
I0124 20:03:27.366691   53391 cli_runner.go:154] Completed: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname minikube --name minikube --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=minikube --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=minikube --network minikube --ip 192.168.49.2 --volume minikube:/var --security-opt apparmor=unconfined --memory=2200mb --memory-swap=2200mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 gcr.io/k8s-minikube/kicbase:v0.0.14@sha256:2bd97b482faf5b6a403ac39dd5e7c6fe2006425c6663a12f94f64f5f81a7787e: (1.463545346s)
I0124 20:03:27.366946   53391 cli_runner.go:110] Run: docker container inspect minikube --format={{.State.Running}}
I0124 20:03:27.914156   53391 cli_runner.go:110] Run: docker container inspect minikube --format={{.State.Status}}
I0124 20:03:28.431890   53391 cli_runner.go:110] Run: docker exec minikube stat /var/lib/dpkg/alternatives/iptables
I0124 20:03:29.077543   53391 oci.go:245] the created container "minikube" has a running status.
I0124 20:03:29.077582   53391 kic.go:179] Creating ssh key for kic: /home/kcold/.minikube/machines/minikube/id_rsa...
I0124 20:03:29.308080   53391 kic_runner.go:179] docker (temp): /home/kcold/.minikube/machines/minikube/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
I0124 20:03:30.414036   53391 cli_runner.go:110] Run: docker container inspect minikube --format={{.State.Status}}
I0124 20:03:30.686455   53391 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
I0124 20:03:30.686496   53391 kic_runner.go:114] Args: [docker exec --privileged minikube chown docker:docker /home/docker/.ssh/authorized_keys]
I0124 20:04:07.310215   53391 cli_runner.go:154] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/kcold/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v6-v1.19.4-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v minikube:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.14@sha256:2bd97b482faf5b6a403ac39dd5e7c6fe2006425c6663a12f94f64f5f81a7787e -I lz4 -xvf /preloaded.tar -C /extractDir: (41.82931245s)
I0124 20:04:07.310323   53391 kic.go:157] duration metric: took 41.830394 seconds to extract preloaded images to volume
I0124 20:04:07.310517   53391 cli_runner.go:110] Run: docker container inspect minikube --format={{.State.Status}}
I0124 20:04:07.568401   53391 machine.go:88] provisioning docker machine ...
I0124 20:04:07.568880   53391 ubuntu.go:166] provisioning hostname "minikube"
I0124 20:04:07.569104   53391 cli_runner.go:110] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0124 20:04:07.682626   53391 main.go:119] libmachine: Using SSH client type: native
I0124 20:04:07.682905   53391 main.go:119] libmachine: &{{{<nil> 0 [] [] []} docker [0x808c20] 0x808be0 <nil>  [] 0s} 127.0.0.1 49160 <nil> <nil>}
I0124 20:04:07.682935   53391 main.go:119] libmachine: About to run SSH command:
sudo hostname minikube && echo "minikube" | sudo tee /etc/hostname
I0124 20:04:10.750346   53391 main.go:119] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:59824->127.0.0.1:49160: read: connection reset by peer
I0124 20:04:13.820556   53391 main.go:119] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:59828->127.0.0.1:49160: read: connection reset by peer
I0124 20:04:19.900967   53391 main.go:119] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:59832->127.0.0.1:49160: read: connection reset by peer
I0124 20:04:22.970772   53391 main.go:119] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:59836->127.0.0.1:49160: read: connection reset by peer
I0124 20:04:29.052402   53391 main.go:119] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:59840->127.0.0.1:49160: read: connection reset by peer
I0124 20:04:32.123141   53391 main.go:119] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:59844->127.0.0.1:49160: read: connection reset by peer
I0124 20:04:38.203955   53391 main.go:119] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:59848->127.0.0.1:49160: read: connection reset by peer
I0124 20:04:41.275783   53391 main.go:119] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:59852->127.0.0.1:49160: read: connection reset by peer
I0124 20:04:47.354779   53391 main.go:119] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:59856->127.0.0.1:49160: read: connection reset by peer

Output of docker network inspect minikube:

[
    {
        "Name": "minikube",
        "Id": "372d58f2c05e904d8bac8c30e75d7bd438b9d609d8fb8716eb578f45cf29d1da",
        "Created": "2021-01-24T20:03:21.511238797+01:00",
        "Scope": "local",
        "Driver": "bridge",
        "EnableIPv6": false,
        "IPAM": {
            "Driver": "default",
            "Options": {},
            "Config": [
                {
                    "Subnet": "192.168.49.0/24",
                    "Gateway": "192.168.49.1"
                }
            ]
        },
        "Internal": false,
        "Attachable": false,
        "Ingress": false,
        "ConfigFrom": {
            "Network": ""
        },
        "ConfigOnly": false,
        "Containers": {
            "bb23e934929eabdbd80e86664ed6ac2d2c32fca96c8a7dc3958a8de501a27010": {
                "Name": "minikube",
                "EndpointID": "677d19d76eb67c700c918049a56ac3dc91265f4499d8903a57985d56886fb2f0",
                "MacAddress": "02:42:c0:a8:31:02",
                "IPv4Address": "192.168.49.2/24",
                "IPv6Address": ""
            }
        },
        "Options": {
            "--icc": "",
            "--ip-masq": "",
            "com.docker.network.driver.mtu": "1500"
        },
        "Labels": {
            "created_by.minikube.sigs.k8s.io": "true"
        }
    }
]

Output of journalctl -u ssh, from the minikube docker container:

root@minikube:/# journalctl -u ssh
-- Logs begin at Sun 2021-01-24 19:10:10 UTC, end at Sun 2021-01-24 19:11:16 UTC. --
Jan 24 19:10:10 minikube systemd[1]: Starting OpenBSD Secure Shell server...
Jan 24 19:10:11 minikube sshd[169]: Server listening on 0.0.0.0 port 22.
Jan 24 19:10:11 minikube sshd[169]: Server listening on :: port 22.
Jan 24 19:10:11 minikube systemd[1]: Started OpenBSD Secure Shell server.
Output of docker info, from the host:

Client:
 Context:    default
 Debug Mode: false
 Plugins:
  app: Docker App (Docker Inc., v0.9.1-beta3)
  buildx: Build with BuildKit (Docker Inc., v0.5.1-docker)

Server:
Containers: 0
Running: 0
Paused: 0
Stopped: 0
Images: 1
Server Version: 20.10.2
Storage Driver: overlay2
Backing Filesystem: extfs
Supports d_type: true
Native Overlay Diff: true
Logging Driver: json-file
Cgroup Driver: cgroupfs
Cgroup Version: 1
Plugins:
Volume: local
Network: bridge host ipvlan macvlan null overlay
Log: awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog
Swarm: inactive
Runtimes: io.containerd.runc.v2 io.containerd.runtime.v1.linux runc
Default Runtime: runc
Init Binary: docker-init
containerd version: 269548fa27e0089a8b8278fc4fc781d7f65a939b
runc version: ff819c7e9184c13b7c2607fe6c30ae19403a7aff
init version: de40ad0
Security Options:
apparmor
seccomp
Profile: default
Kernel Version: 5.9.0-kali4-amd64
Operating System: Kali GNU/Linux Rolling
OSType: linux
Architecture: x86_64
CPUs: 2
Total Memory: 3.819GiB
Name: dev
ID: JSBA:QUMU:MUMK:FC4H:7N7W:2ILV:WEUH:FLBK:RV2U:SJF3:CEFC:I242
Docker Root Dir: /var/lib/docker
Debug Mode: false
Username: h4n0sh1
Registry: https://index.docker.io/v1/
Labels:
Experimental: false
Insecure Registries:
127.0.0.0/8
Live Restore Enabled: false

WARNING: No blkio weight support
WARNING: No blkio weight_device support
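
For reference, since sshd inside the container reports that it is listening but the handshake is reset, the SSH path can be probed by hand with the key and port that minikube generated (a rough diagnostic sketch, assuming the default profile paths shown in the log above; <mapped-port> is a placeholder):

# find the host port Docker mapped to the container's sshd (port 22)
docker port minikube 22

# retry the connection libmachine makes, with verbose client output
ssh -v -i $HOME/.minikube/machines/minikube/id_rsa -o StrictHostKeyChecking=no \
    docker@127.0.0.1 -p <mapped-port>

# inspect the container's own logs for systemd/sshd errors
docker logs minikube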

RA489 commented Jan 25, 2021

/kind support

@k8s-ci-robot k8s-ci-robot added the kind/support Categorizes issue or PR as a support question. label Jan 25, 2021
@RA489 RA489 added the co/docker-driver Issues related to kubernetes in container label Jan 25, 2021

fdasoghe commented Jan 28, 2021

The same happens to me on Windows 10 from WSL2, using Ubuntu 18.04 and the Docker driver, minikube version 1.17.0.

Command:

minikube start --kubernetes-version v1.17.13 --driver docker

fails with the same final error "libmachine: Error dialing TCP: ssh...".

I resolved it by reverting to minikube 1.16. Worked like a charm; maybe there's some kind of regression in the latest version? 🤔
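
In case it helps, reverting can be done by installing the 1.16.0 binary directly (a sketch, assuming Linux amd64 and the usual minikube release bucket layout; delete the existing profile first so the node container is recreated):

minikube delete
curl -LO https://storage.googleapis.com/minikube/releases/v1.16.0/minikube-linux-amd64
sudo install minikube-linux-amd64 /usr/local/bin/minikube
minikube version    # should report v1.16.0
minikube start --kubernetes-version v1.17.13 --driver docker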

x777777x commented Apr 23, 2021

Same problem here:

libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:XXX->127.0.0.1:XXX: read: connection reset by peer

My system is CentOS 7.2, likewise using the docker driver:

minikube start --vm-driver=docker

I tried downgrading minikube, but that didn't solve the problem. At first I thought it was an SSH configuration issue inside the Docker container; I tested with ssh -v and also modified hosts.allow inside the container, but that didn't help either.

Finally I downgraded the Docker version, and the problem was solved:

Client:
 Version:           18.09.9
 API version:       1.39
 Go version:        go1.11.13
 Git commit:        039a7df9ba
 Built:             Wed Sep  4 16:51:21 2019
 OS/Arch:           linux/amd64
 Experimental:      false

Server: Docker Engine - Community
 Engine:
  Version:          18.09.9
  API version:      1.39 (minimum version 1.12)
  Go version:       go1.11.13
  Git commit:       039a7df
  Built:            Wed Sep  4 16:22:32 2019
  OS/Arch:          linux/amd64
  Experimental:     false
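
For completeness, on CentOS 7 the downgrade can be done through yum once the docker-ce repository is configured (a sketch; the exact 18.09.9 package suffix is an assumption, check it against the repo listing first):

# list the versions the repository offers
yum list docker-ce --showduplicates | sort -r

# remove the newer engine, then install the matching 18.09.9 engine and CLI packages
sudo yum remove docker-ce docker-ce-cli
sudo yum install docker-ce-18.09.9 docker-ce-cli-18.09.9 containerd.io
sudo systemctl restart docker
docker version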

@spowelljr spowelljr added long-term-support Long-term support issues that can't be fixed in code and removed triage/long-term-support labels May 19, 2021
@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle stale
  • Mark this issue or PR as rotten with /lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Aug 17, 2021
@sharifelgamal
Collaborator

We've made some improvements over the past few months. Would anyone like to try minikube 1.23 and see if that fixes the issues seen here?

@sharifelgamal sharifelgamal removed the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Sep 15, 2021
@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle stale
  • Mark this issue or PR as rotten with /lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Dec 14, 2021
@k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

@k8s-ci-robot k8s-ci-robot added lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels Jan 13, 2022

RA489 commented Jan 17, 2022

/remove-lifecycle rotten

@k8s-ci-robot k8s-ci-robot removed the lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. label Jan 17, 2022
@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle stale
  • Mark this issue or PR as rotten with /lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Apr 17, 2022
@k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

@k8s-ci-robot k8s-ci-robot added lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels May 17, 2022

RA489 commented May 18, 2022

/remove-lifecycle rotten

@k8s-ci-robot k8s-ci-robot removed the lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. label May 18, 2022
@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle stale
  • Mark this issue or PR as rotten with /lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Aug 16, 2022

RA489 commented Aug 16, 2022

/remove-lifecycle stale

@k8s-ci-robot k8s-ci-robot removed the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Aug 16, 2022
@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle stale
  • Mark this issue or PR as rotten with /lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Nov 14, 2022
@mihir-koyaltech

/remove-lifecycle stale

@k8s-ci-robot k8s-ci-robot removed the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Nov 18, 2022
@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle stale
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Feb 16, 2023
@k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle rotten
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

@k8s-ci-robot k8s-ci-robot added lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels Mar 18, 2023
@k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue with /reopen
  • Mark this issue as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close not-planned

@k8s-ci-robot k8s-ci-robot closed this as not planned (won't fix, can't repro, duplicate, stale) Apr 17, 2023
@k8s-ci-robot
Contributor

@k8s-triage-robot: Closing this issue, marking it as "Not Planned".

In response to this:

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue with /reopen
  • Mark this issue as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close not-planned

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
