
Unable to connect to the server: net/http: TLS handshake timeout #14

Closed
davidjsanders opened this issue Oct 30, 2017 · 146 comments

@davidjsanders

Hi, when I create an AKS cluster, I'm receiving a timeout on the TLS handshake. The cluster creates okay with the following commands:

az group create --name dsK8S --location westus2

az aks create \
  --resource-group dsK8S \
  --name dsK8SCluster \
  --generate-ssh-keys \
  --dns-name-prefix dasanderk8 \
  --kubernetes-version 1.8.1 \
  --agent-count 2 \
  --agent-vm-size Standard_A2

az aks get-credentials --resource-group dsK8S --name dsK8SCluster

The response from the create command is a JSON object:
{
  "id": "/subscriptions/OBFUSCATED/resourcegroups/dsK8S/providers/Microsoft.ContainerService/managedClusters/dsK8SCluster",
  "location": "westus2",
  "name": "dsK8SCluster",
  "properties": {
    "accessProfiles": {
      "clusterAdmin": {
        "kubeConfig": "OBFUSCATED"
      },
      "clusterUser": {
        "kubeConfig": "OBFUSCATED"
      }
    },
    "agentPoolProfiles": [
      {
        "count": 2,
        "dnsPrefix": null,
        "fqdn": null,
        "name": "agentpool1",
        "osDiskSizeGb": null,
        "osType": "Linux",
        "ports": null,
        "storageProfile": "ManagedDisks",
        "vmSize": "Standard_A2",
        "vnetSubnetId": null
      }
    ],
    "dnsPrefix": "dasanderk8",
    "fqdn": "dasanderk8-d55f0987.hcp.westus2.azmk8s.io",
    "kubernetesVersion": "1.8.1",
    "linuxProfile": {
      "adminUsername": "azureuser",
      "ssh": {
        "publicKeys": [
          {
            "keyData": "OBFUSCATED"
          }
        ]
      }
    },
    "provisioningState": "Succeeded",
    "servicePrincipalProfile": {
      "clientId": "OBFUSCATED",
      "keyVaultSecretRef": null,
      "secret": null
    }
  },
  "resourceGroup": "dsK8S",
  "tags": null,
  "type": "Microsoft.ContainerService/ManagedClusters"
}

I've now torn down this cluster but this has happened three times today.

Any help?

David

@davidjsanders
Author

Update 11/03: I'm now able to create clusters successfully in westus2; however, I'm still getting TLS handshake errors:

az aks browse --resource-group *OBFUSCATED* --name *OBFUSCATED*
Merged "*OBFUSCATED*" as current context in /tmp/tmpB988cA
Proxy running on http://127.0.0.1:8001/
Press CTRL+C to close the tunnel...
error: error upgrading connection: error dialing backend: dial tcp 10.240.0.4:10250: getsockopt: connection refused

Are we still in the realm of capacity issues or is there another underlying issue here? This should work, right?

David

@davidjsanders
Author

Sometimes I should look before I write :)

I see the problem. The proxy is trying to connect to 10.240.0.4 which is the private IP of one of the agents and won't (and shouldn't) be reachable from the Internet. I'm guessing this is the underlying issue here.
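
A quick way to confirm that is to list the nodes with their addresses; a minimal check using kubectl itself:

# Show each node's INTERNAL-IP / EXTERNAL-IP; the 10.240.0.x address is the VNet-private
# agent IP and is not reachable from outside the cluster's virtual network.
kubectl get nodes -o wide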

@amazaheri

+1. Originally this worked fine; I noticed the issue today when I deleted the cluster and tried to recreate it.

@amazaheri

I get this regardless of using West US 2 or UK West:
~ amazaheri$ az aks browse -n mtcirvk8s -g mtcirvacs-rg
Merged "mtcirvk8s" as current context in /var/folders/sf/p87ql6z9271_1l7cp6hgt2d40000gp/T/tmpHZ_Er0
Proxy running on http://127.0.0.1:8001/
Press CTRL+C to close the tunnel...
error: error upgrading connection: error dialing backend: dial tcp 10.240.0.4:10250: getsockopt: connection refused

@amazaheri

amazaheri commented Nov 6, 2017

Looks like we are good now, thanks for all the work! QQ: I cannot connect to my cluster with the Cabin app using a token. The app shows the cluster as running, but I can't see any of the nodes, namespaces, etc. It looks like the auth fails at some point. Thoughts?
vmware-archive/cabin#75

@eirikm

eirikm commented Nov 9, 2017

I'm having the same problem in West US 2 at the moment:

$ kubectl get pods --all-namespaces
Unable to connect to the server: net/http: TLS handshake timeout

@nyuen

nyuen commented Nov 9, 2017

same issue here on West US 2

@krol3

krol3 commented Nov 9, 2017

The AKS cluster is in West US 2. I have the same issue.

kubectl get nodes
Unable to connect to the server: net/http: TLS handshake timeout

az aks browse --resource-group xxxx-rg --name xxxx
Merged "XXXX" as current context in /tmp/tmpx6o89zj7
Unable to connect to the server: net/http: TLS handshake timeout
Command '['kubectl', 'get', 'pods', '--namespace', 'kube-system', '--output', 'name', '--selector', 'k8s-app=kubernetes-dashboard']' returned non-zero exit status 1.

@davidjsanders
Author

11/9: I'm still getting issues and have reverted back to an unmanaged cluster using ACS with Kubernetes as the orchestrator. Looking forward to when AKS becomes a little more stable.

@twitchax

twitchax commented Nov 9, 2017

I am having these same issues!

@krol3

krol3 commented Nov 10, 2017

@dsandersAzure I did the same; I created the cluster using ACS!

@yejason

yejason commented Nov 10, 2017

AKS is still in preview. For now it seems West US 2 is not available, but ukwest is OK; we can create AKS clusters in ukwest now.

C:\Users\jason>az group create --name akss --location ukwest
{
  "id": "/subscriptions/xxxxxxx-222b-49c3-xxxx-xxxxx1e29a7b15/resourceGroups/akss",
  "location": "ukwest",
  "managedBy": null,
  "name": "akss",
  "properties": {
    "provisioningState": "Succeeded"
  },
  "tags": null
}

C:\Users\jason>az aks create --resource-group akss --name myK8sCluster --agent-count 1 --generate-ssh-keys
{
  "id": "/subscriptions/xxxxxxxx-222b-49c3-xxxx-0361e29axxxx/resourcegroups/akss/providers/Microsoft.ContainerService/managedClusters/myK8sCluster",
  "location": "ukwest",
  "name": "myK8sCluster",
  "properties": {
    "accessProfiles": {
      "clusterAdmin": {
        "kubeConfig": "YXBpVmVyc2lvbjogdjEKY2x1c3RlcnM6Ci0gY2x1c3RlcjoKICAgIGNlcnRpZmljYXRlLWF1dGhvcml0eS1kYXRhOiBMUzB0TFMxQ1JVZEpUaUJEUlZKVVNVWkpRMEZVUlMwdExTMHRDazFKU1VWNGVrTkRRWEVyWjBGM1NVSkJaMGxSWlhVMGVXRnBOekp3TlhadmNsUjRha2hMTldReGVrRk9RbWRyY1docmFVYzVkekJDUVZGelJrRkVRVTRLVFZGemQwTlJXVVJXVVZGRVJYZEthbGxVUVdWR2R6QjRUbnBGZUUxVVFYZE5WRlV4VFdwS1lVWjNNSGhQVkVWNFRWUkJkMDFVVlRGTmFrcGhUVUV3ZUFwRGVrRktRbWRPVmtKQlRWUkJiVTVvVFVsSlEwbHFRVTVDWjJ0eGFHdHBSemwzTUVKQlVVVkdRVUZQUTBGbk9FRk5TVWxEUTJkTFEwRm5SVUZ6TlRCRENsaGFNSEJCZWtJdlYxWnRjR1ZZTkhwaFRtZzVXRFJIVjIxWWFHTnpaelIyZVRWVGQxaDNVVTB2U1dkMWRGbGFVRzFUTjFCelVUUXJZazluWkZCWGVXSUtaREp6YWxSclJsVXZPRzVMYzJzM0sxcHhPRmxWTURFMFpVWkJXamx2UlRWNUsyRmhLMlZ

@eivim

eivim commented Nov 10, 2017

I believe the capacity issues in ukwest are ongoing; hoping AKS will expand to other locations in Europe soon. I had a 1.7.7 cluster in ukwest that broke a couple of days ago. I attempted to recreate it today, but it is still in a bad state.

$ kubectl get pods -n kube-system
NAME                                    READY     STATUS             RESTARTS   AGE
heapster-b5ff6c4dd-dkkll                2/2       Running            0          46m
kube-dns-v20-6c8f7f988b-cb4cg           3/3       Running            0          46m
kube-dns-v20-6c8f7f988b-ztn5r           3/3       Running            0          46m
kube-proxy-thz9p                        1/1       Running            0          46m
kube-svc-redirect-qhwz6                 0/1       CrashLoopBackOff   13         46m
kubernetes-dashboard-7f7d9489fc-d9x7d   0/1       CrashLoopBackOff   12         46m
tunnelfront-xzjq8                       0/1       CrashLoopBackOff   13         46m

$ kubectl logs kube-svc-redirect-qhwz6 -n kube-system
Error from server: Get https://aks-agentpool1-28161470-0:10250/containerLogs/kube-system/kube-svc-redirect-qhwz6/redirector: dial tcp 10.240.0.4:10250: getsockopt: connection refused

@qmfrederik

So, provisioning in ukwest gives me a cluster with crashing pods; provisioning in westus2 doesn't work at all:

Azure Container Service is unable to provision an AKS cluster in westus2, due to an operational threshold. Please try again later or use an alternate location. For more details please refer to: https://github.com/Azure/AKS/blob/master/preview_regions.md.

@acesyde

acesyde commented Nov 26, 2017

Hi,

Same here today. I created an AKS 1.8.1 cluster in westeurope and it was fine, but one hour later I upgraded to 1.8.2 and since then:

Unable to connect to the server: net/http: TLS handshake timeout

With kubectl 1.8.0 and 1.8.4, same error.

After that I can't create a new AKS cluster in the westeurope location; the CLI returns this:

Command:
az aks create -n saceaks -g saceaks --location westeurope --kubernetes-version 1.8.1 --node-vm-size=Standard_DS1_V2 --node-count=2

CLI error:

Exception in thread AzureOperationPoller(b39cfa6a-a15e-49e4-9684-9cff4a0b579b):
Traceback (most recent call last):
  File "/opt/az/lib/python3.6/site-packages/msrestazure/azure_operation.py", line 377, in _start
    self._poll(update_cmd)
  File "/opt/az/lib/python3.6/site-packages/msrestazure/azure_operation.py", line 464, in _poll
    raise OperationFailed("Operation failed or cancelled")
msrestazure.azure_operation.OperationFailed: Operation failed or cancelled

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/opt/az/lib/python3.6/threading.py", line 916, in _bootstrap_inner
    self.run()
  File "/opt/az/lib/python3.6/threading.py", line 864, in run
    self._target(*self._args, **self._kwargs)
  File "/opt/az/lib/python3.6/site-packages/msrestazure/azure_operation.py", line 388, in _start
    self._exception = CloudError(self._response)
  File "/opt/az/lib/python3.6/site-packages/msrestazure/azure_exceptions.py", line 148, in __init__
    self._build_error_data(response)
  File "/opt/az/lib/python3.6/site-packages/msrestazure/azure_exceptions.py", line 164, in _build_error_data
    self.error = self.deserializer('CloudErrorRoot', response).error
  File "/opt/az/lib/python3.6/site-packages/msrest/serialization.py", line 992, in __call__
    value = self.deserialize_data(raw_value, attr_desc['type'])
  File "/opt/az/lib/python3.6/site-packages/msrest/serialization.py", line 1143, in deserialize_data
    return self(obj_type, data)
  File "/opt/az/lib/python3.6/site-packages/msrest/serialization.py", line 998, in __call__
    return self._instantiate_model(response, d_attrs)
  File "/opt/az/lib/python3.6/site-packages/msrest/serialization.py", line 1090, in _instantiate_model
    response_obj = response(**kwargs)
  File "/opt/az/lib/python3.6/site-packages/msrestazure/azure_exceptions.py", line 59, in __init__
    self.message = kwargs.get('message')
  File "/opt/az/lib/python3.6/site-packages/msrestazure/azure_exceptions.py", line 105, in message
    value = eval(value)
  File "<string>", line 1, in <module>
NameError: name 'resources' is not defined

{
  "id": null,
  "location": null,
  "name": "e0ecdbcf-dffd-6b43-81fa-85f6517448a6",
  "properties": null,
  "tags": null,
  "type": null
}

@kahootali

Having the same issue. I have two clusters, one in East US and the other in Central US.
The Central US cluster works fine, but when I switch context to the East US cluster, it gives the error:
Unable to connect to the server: net/http: TLS handshake timeout
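
If it helps anyone isolate which cluster is at fault, switching contexts explicitly makes the comparison easy; a small sketch (the context name is whatever az aks get-credentials registered for you):

# List the contexts merged into your kubeconfig
kubectl config get-contexts
# Point kubectl at one cluster at a time, then retry the failing call
kubectl config use-context <east-us-cluster-context>
kubectl get nodes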

@relferreira

I'm having the same issue after downscaling my cluster in East US!

@hanzenok

Hi everyone,

Having the same issue today in westeurope. And when I try to create a new cluster in this location, it gives an error:
Deployment failed. Correlation ID: <id>. Azure Container Service is unable to provision an AKS cluster in westeurope, due to an operational threshold. Please try again later or use an alternate location. For more details please refer to: https://github.com/Azure/AKS/blob/master/preview_regions.md.

@garystafford

garystafford commented Dec 6, 2017

Still an issue. Any resolution? This is the third running cluster in East US that I have lost the ability to communicate with. Doing an upgrade or scaling up the nodes does not work properly - a complete deal breaker when considering AKS. Either of these commands results in Unable to connect to the server: net/http: TLS handshake timeout. I've tried numerous commands, restarting nodes, etc. Nothing seems to recover cluster access.

Command to create:

az aks create `
  --name AKS-Cluster-VoterDemo `
  --resource-group RG-EastUS-AKS-VoterDemo `
  --node-count 1 `
  --generate-ssh-keys `
  --kubernetes-version 1.8.2

Perfectly healthy.

Command to scale up:

az aks scale `
  --name AKS-Cluster-VoterDemo `
  --resource-group RG-EastUS-AKS-VoterDemo `
  --node-count 3

Result: Unable to connect to the server: net/http: TLS handshake timeout
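
One extra data point worth collecting when a scale attempt leaves the API server unreachable is whether the managed cluster itself ended up in a failed provisioning state; a sketch using the same names and PowerShell line continuations as above:

# Check the cluster's provisioning state after the scale attempt
az aks show `
  --name AKS-Cluster-VoterDemo `
  --resource-group RG-EastUS-AKS-VoterDemo `
  --query provisioningState `
  --output tsv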

@wtam

wtam commented Dec 6, 2017

I encountered the same TLS handshake timeout connection issue after I manually scaled the node count from 1 to 2! My cluster is in Central US. What's wrong?

@slack
Contributor

slack commented Dec 20, 2017

Thanks for your patience through our preview.

We've had a few bugs in scale and upgrade paths that prevented the api-server from passing its health check after upgrade and/or scale. A number of bug fixes in this area went out over the last few weeks that have made upgrades more reliable.

Last week, for clusters in East US, we had an operational issue that impacted a number of older customer clusters between 12/11 13:00PST and 12/12 16:01PST.

Health and liveness of the api-server is now much better. If you haven't upgraded recently I'd recommend issuing az aks upgrade, even to the same kubernetes-version, as that will push the latest configuration to clusters. This rollout step is currently being automated and should be transparent in the future.
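
For reference, the re-push described above looks roughly like this; the resource group, cluster name, and version below are placeholders, so substitute your own (az aks show can tell you the version you are currently on):

# Read the version the cluster is currently running
az aks show --resource-group myResourceGroup --name myAKSCluster --query kubernetesVersion --output tsv

# Re-issue an upgrade to that same version to push the latest AKS configuration to the cluster
az aks upgrade --resource-group myResourceGroup --name myAKSCluster --kubernetes-version 1.8.2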

@slack slack closed this as completed Dec 20, 2017
@acesyde

acesyde commented Dec 20, 2017

@slack thank you, it works ;)

@wtam

wtam commented Dec 21, 2017

@slack Confirmed: upgrading the cluster to 1.8.2 gets kubectl connecting again.

@aleksen

aleksen commented Jan 5, 2018

@slack Having the same problem still after upgrading to 1.8.2 in westeurope. Is there a problem in that region?

@douglaswaights

After downgrading to 2.0.23 I was able to create the cluster, but after downloading the credentials I also have the same problem in westeurope...

kubectl get nodes
Unable to connect to the server: net/http: TLS handshake timeout

Incidentally, doing an az aks upgrade to 1.8.2 failed for me too.

@jakobli

jakobli commented Jan 8, 2018

Running into the same issue: cluster in West Europe, upgrade to 1.8.2 fails with: Deployment failed. Correlation ID: 858d3cf0-0d4e-417d-a2ee-22f627892e51. Operation failed with status: 200. Details: Resource state Failed

@raycrawford

I am getting the TLS handshake error at 2:30 PM EST in East US:

kubectl get nodes
Unable to connect to the server: net/http: TLS handshake timeout

@4c74356b41

Also, for me kubectl/API calls from my laptop do not work; they work from Azure Cloud Shell only.

@mdavis-xyz

I discovered the cause of my issue. In the portal my AKS cluster is still listed as "Creating...". It's been like that for several days now.

I tried a different region, with the default VM size, and that worked. It still took a long time to go from "Creating..." to normal, but it did get there eventually. Then all the subsequent commands worked.

@necevil

necevil commented Jun 6, 2018

The solution for me was to scale the cluster nodes up by 1 (temporarily) and then, once the new node launched, connect. I was then successful and could scale the cluster back down to the original size.

Full background can be found over here:
https://stackoverflow.com/questions/50726534/unable-to-connect-net-http-tls-handshake-timeout-why-cant-kubectl-connect
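
A sketch of that workaround with the standard CLI commands; the resource group, cluster name, and node counts are placeholders:

# Temporarily add one node so the system pods get rescheduled onto the new node
az aks scale --resource-group myResourceGroup --name myAKSCluster --node-count 3

# Once kubectl connects again, scale back to the original size
az aks scale --resource-group myResourceGroup --name myAKSCluster --node-count 2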

@emanuelecasadio

Same problem here. Sometimes I just cannot use kubectl.

@novitoll

novitoll commented Jul 5, 2018

@emanuelecasadio AKS is now GA. Make sure you have either upgraded or have the necessary patches installed.

@SnehaJosephSTS

I am still facing this issue while running the "kubectl get nodes" command. I have tried the following but with no luck :(

  1. Upgrading Kubernetes Version
  2. Increasing node count via portal and then running "kubectl get nodes".
  3. Re-logging via "az login"

@c-mccutcheon

@SnehaJosephSTS - we had to re-create our cluster after AKS went GA. Haven't had the issue since then. Upgrade for us did not work, nor did scaling.

@danielrmartin

I am getting the error this morning while trying to get nodes on a new cluster in eastus.

@jawahar16

jawahar16 commented Jul 24, 2018

I am getting the same issue in eastus. I enabled "rbac" with the AKS create command:

az aks create --resource-group my-AKS-resource-group --name my-AKS-Cluster --node-count 3 --generate-ssh-keys --enable-rbac

kubectl get nodes
Unable to connect to the server: net/http: TLS handshake timeout

@dtzar dtzar mentioned this issue Aug 6, 2018
@qike-ms
Contributor

qike-ms commented Aug 10, 2018

There are many reasons behind the TLS handshake timeout error. For clusters created before AKS GA, we highly recommend that customers create a new cluster and redeploy their system there.

We also recommend that customers upgrade clusters to stay on the latest, or one version before the latest, supported Kubernetes version.

Also make sure your cluster is not overloaded, meaning you haven't maxed out the usable CPU and memory on the agent nodes. We've seen many cases where someone scales a cluster down from X nodes to 1 (X being 5 or above) and the connection to the control plane is interrupted: they might be running a lot of pods on the cluster, and now all of them are evicted and redeployed to the only node left. If that node VM is very small, it can leave pods with no place to be scheduled, including some mission-critical pods (add-ons in kube-system).
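
A rough way to check for that kind of resource pressure before (or after) scaling down, assuming the metrics add-on (heapster/metrics-server) is running for the top command:

# Current CPU/memory usage per node
kubectl top nodes

# How much CPU/memory is already requested on a given node
kubectl describe node <node-name>

# Any pods left with nowhere to schedule
kubectl get pods --all-namespaces | grep Pending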

If after all the diagnosis you still suffer from this issue, please don't hesitate to send email to [email protected]

@mdavis-xyz

mdavis-xyz commented Aug 14, 2018

If that node VM is very small, it can leave pods with no place to be scheduled, including some mission-critical pods

Isn't that a very big issue?

I've had many clusters break irreparably in this way.
This bug doesn't just happen when scaling to 1. I've seen it happen when scaling nodes both up and down while there are too many pods.
In my experience, AKS scaling when there are unscheduled pods tends to cause the cluster to break catastrophically, more often than not.
The workaround is to delete the whole cluster and redeploy on a new one.

Thankfully I'm not dealing with a production workload, but imagine if I was. I'd be livid.
I don't think I would ever choose to deploy a real production workload on AKS, because of this bug.

Is it possible to somehow get the scheduler to prioritise the system pods over the workload pods?

@agolomoodysaada
Contributor

agolomoodysaada commented Sep 26, 2018

After lots of back and forth with Azure support, we arrived at this workaround. I have yet to try it, as they fixed the issue on their end. However, it might help someone else facing this.

Anyway, here's their message:

This usually means that tunnelfront cannot connect to tunnelend (a consolidated command sketch of these steps follows the list):

1. SSH to the agent node which is running the tunnelfront pod
2. get the tunnelfront logs: "docker ps" -> "docker logs <tunnelfront_container_id>"
3. "nslookup <ssh-server_fqdn>", where the FQDN can be obtained from the command above -> if it resolves an IP, DNS works, so go to the following step
4. "ssh -vv azureuser@<ssh-server_fqdn> -p 9000" -> if the port is reachable, go to the next step
5. "docker exec -it <tunnelfront_container_id> /bin/bash", type "ping google.com"; if there is no response, the tunnelfront pod has no external network access, so do the following step
6. restart kube-proxy using "kubectl delete po <kube-proxy_pod> -n kube-system"; choose the kube-proxy pod which is running on the same node as tunnelfront (you can find it with "kubectl get po -n kube-system -o wide")
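
A consolidated sketch of those steps, keeping the same placeholder names (<tunnelfront_container_id>, <ssh-server_fqdn>, <kube-proxy_pod>); treat it as a mitigation, not a fix:

# Find which node runs tunnelfront and the kube-proxy pod on that same node
kubectl get po -n kube-system -o wide

# On that agent node (via SSH): inspect the tunnelfront container and its logs
docker ps | grep tunnelfront
docker logs <tunnelfront_container_id>

# Check DNS resolution and connectivity to the tunnel endpoint from the node
nslookup <ssh-server_fqdn>
ssh -vv azureuser@<ssh-server_fqdn> -p 9000

# Check outbound connectivity from inside the tunnelfront container
docker exec -it <tunnelfront_container_id> ping -c 3 google.com

# If outbound traffic is broken, bounce the kube-proxy pod on the same node
kubectl delete po <kube-proxy_pod> -n kube-system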

P.S.
Dear Azure team,

We should NOT close this issue as the bug still occurs from time to time. This is not considered an acceptable workaround. It's a mitigation for those whose clusters are stuck and cannot access logs, exec, or helm deployments. We still need a permanent fix designed for failure of either tunnelfront or tunnelend.

Would be nice if you could also explain what tunnelfront and tunnelend are and how they work. Why are we, consumers of AKS, responsible for maintaining Azure's buggy workloads?

@Starefossen

Created a new cluster after GA and now, all of a sudden, I'm getting a bunch of TLS handshake timeouts from AKS. This does not give the feeling that AKS is anything near GA.

@jaredallard

jaredallard commented Oct 4, 2018

Yeah, we run into this frequently; AKS master node availability is terrible. It is constantly going down and timing out requests (nginx-ingress, even some of our applications that talk to k8s)... We don't run into any of these issues with GKE or kops environments. Not sure if this is anywhere near GA.

EDIT: As I wrote this, our cluster has been unavailable for the last 20+ minutes saying "TLS handshake timeout". 😒

@mcobzarenco

mcobzarenco commented Oct 5, 2018

I set up a cluster with one node because I wanted to investigate differences from the GCP deployment, essentially a dry run (our production deployment is on Google Cloud's Kubernetes, but we're doing an Azure deployment for a client).

However, it seems like all kubectl operations as well as az browse fail with:

net/http: TLS handshake timeout

az version 2.0.46
kubectl 1.9.7

@klarose

klarose commented Oct 10, 2018

I just had this happen to me. It appeared out of the blue, then went away a few hours later, after I restarted my nodes a few times and killed most of my deployments. I'm not sure if that's what fixed it, or if whatever was truly causing the issue just went away.

Some notes from my investigation:

  • Curl to the host @ port 443 used by my kube-config would get through until it failed the TLS handshake. However, the instant I put a proper API call in there, it would usually either fail to connect at all, or time out mid-handshake. Every once in a while it would get through and fail the handshake. This mirrored the behaviour I saw with kubectl, which makes me suspect that there is an issue with whatever backend service the API calls are routed to. (A minimal version of this probe is sketched after this list.)

  • The same curl from a node itself would also hang.

  • After restarting the nodes, kubectl would resume being responsive for a bit, then start hanging again.

  • I wasn't doing anything terribly complicated on the nodes. They averaged around 20% cpu load. Fairly low network traffic.

  • I'm using istio and helm/tiller

  • 2-node cluster.

  • kubectl version 1.12.0

  • kubernetes version 1.10.7

  • canadacentral
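
A minimal version of that curl probe, assuming the API server address from the current kubeconfig context; the interesting part is whether the verbose TLS handshake completes at all:

# Pull the API server URL out of the current kubeconfig context
kubectl config view --minify -o jsonpath='{.clusters[0].cluster.server}'

# Probe it directly: -v shows where the handshake stalls, -k skips certificate verification
curl -vk https://<api-server-fqdn>:443/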

@blackbaud-brandonstirnaman

Brand new cluster today... been online for just a few hours and TLS handshake timeouts. 👎

@emirhosseini

Any update on this issue? We're still experiencing it

@siyangy

siyangy commented Dec 5, 2018

We've been hitting this for a year, and the explanation earlier was that we were using a preview version of the AKS cluster. Now we've moved to a new cluster (supposedly created after GA) and are still seeing it. I think it's worth bumping the priority, as the issue has been around for a long while and is affecting a lot of folks.

@ghost

ghost commented Jan 7, 2019

I've found the solution!!!

@4c74356b41

and that is? @adamsem

@ghost

ghost commented Jan 8, 2019

Migrate to AWS :)

@jnoller
Contributor

jnoller commented Feb 9, 2019

Hi Everyone;

AKS has rolled out a lot of enhancements and improvements to mitigate this, including auto-detection of hung/blocked API servers, kubelets and proxies. One of the final components is to scale up the master components to meet the overall load against the master APIs.

This issue (this GitHub issue) contains a lot of cluster-specific reports - as we cannot safely request the data for your accounts to do deeper introspection here on GitHub, I'd ask that you please file Azure technical support issues for diagnosis (these support issues get routed to our back-end on-call team as needed for resolution).

Additionally, the errors displayed can also correlate to underlying service updates in some cases (especially if you are seeing it randomly, for a limited amount of time). This will be helped with the auto scaling (increased master count) being worked on.

For issues that come up after I close this, please file new github issues that include instructions for re-creation on any AKS cluster (e.g. general not-tied-to-your-app-or-cluster). This will help support and engineering debug.

@jnoller jnoller closed this as completed Feb 9, 2019
@sanojdev89

sanojdev89 commented Sep 12, 2019

kubectl get pods --insecure-skip-tls-verify=true gives the error below:
Unable to connect to the server: net/http: TLS handshake timeout
Build step 'Execute shell' marked build as failure
This command works on the Jenkins server but fails when run via a Jenkins job.

@ghost ghost locked as resolved and limited conversation to collaborators Aug 13, 2020