EKS authentication requires newer client-go #110
Interesting, thanks for the report. As background explanation: `kubeseal --fetch-cert` makes an API request to the cluster to retrieve the controller's public key, and it is that request that is failing here. So, your options are:
(*) The public key (via any method) is the text between and including the `-----BEGIN CERTIFICATE-----` and `-----END CERTIFICATE-----` lines. It doesn't matter which user you use to get the public key; it's the same public key for all users of that cluster. So typically the workflow would be: admin installs the sealed-secrets controller, admin runs `kubeseal --fetch-cert` once and publishes the result, and everyone else seals against that saved certificate with `--cert`.
For this bug: it would be nice if we could make this easier for future users of EKS. I haven't used EKS - how did you get these credentials to the EKS cluster? Is it going to be typical for EKS users to authenticate with `aws-iam-authenticator`?
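A minimal sketch of that admin step (default controller install assumed; `cert.pem` is just an example file name):

```sh
# Admin: fetch the cluster-wide public key once and publish it.
# Any user of the cluster gets the same key back.
kubeseal --fetch-cert > cert.pem

# The file is a standard PEM certificate; everything from
# -----BEGIN CERTIFICATE----- to -----END CERTIFICATE----- is the key.
head -1 cert.pem   # prints: -----BEGIN CERTIFICATE-----
```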
My kubeconfig is set up using `eksctl utils write-kubeconfig my-cluster --profile my-profile`. The relevant code is here: https://github.com/weaveworks/eksctl/blob/master/cmd/eksctl/utils.go#L85-L164. EKS requires exec-plugin authentication (`aws-iam-authenticator`), which is only supported by newer client-go releases.
I'm able to view logs for the controller pod and get the certificate that way. I'll try updating RBAC tomorrow.
My temporary solution:

```sh
kubeseal --fetch-cert --token "$(aws-iam-authenticator token -i my-cluster | jq -r '.status.token')"
```
Huh, I think I originally misunderstood the issue here. This isn't about RBAC - the thing being fetched is the public key, which we should be happy to hand to anyone. So the issue here is that the API request that `kubeseal --fetch-cert` makes fails much earlier: the client-go version we vendor predates support for the `exec` credential plugin that EKS authentication relies on. Meanwhile, the correct workaround is the explicit `--cert` flag.
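For context, this is roughly the user stanza that eksctl writes into the kubeconfig (names are placeholders); a client-go that predates exec-plugin support simply cannot process the `exec` section:

```yaml
# kubeconfig user entry as generated for EKS at the time (placeholder names)
users:
- name: my-cluster
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1alpha1
      command: aws-iam-authenticator
      args: ["token", "-i", "my-cluster"]
```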
👍 thanks for the explanation.
Just for information: this issue obviously also pops up when you're trying to seal a secret, not only when fetching the cert. I wasted a lot of time figuring out what the problem was until I found this issue. Maybe you could mention the workaround in the README?
Hi, is there any ETA for updating the client-go lib and rebuilding?
No. As currently written, kubeseal's vendored client-go cannot authenticate against EKS at all. You only need that for fetching the public key, though - you can run every other operation offline with `--cert` once you have the certificate.
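For example (assuming the cert was already saved by one of the workarounds above):

```sh
# Sealing with an already-saved cert is entirely offline, so the broken
# EKS authentication path is never exercised.
kubeseal --cert cert.pem < mysecret.json > mysealedsecret.json
```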
#126: client-go version bump to release-7.0 (r=anguslees, a=jipperinbham). Also includes version bumps of k8s.io/api and k8s.io/apimachinery to release-1.10, with a new vendor of github.com/json-iterator/go due to an issue (kubernetes/apimachinery#46) with client-go v7.0. Addresses #110. Co-authored-by: JP Phillips <[email protected]>
It looks like this has been resolved with #126. I just tested on an EKS cluster and it is working correctly for me.
I found this and tried the "hack", but it keeps crashing for me on an AWS EKS cluster.
I'm getting the same error as @NeoTech.
EDIT: Managed a workaround - use a local secret.
EDIT2: Still not working. The output is supposed to be a PEM file but I'm getting YAML output. My workaround of using an entirely local workflow is still valid, though - it's just that `--fetch-cert` doesn't work correctly.
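A sketch of such a fully local flow, assuming the cert was obtained out of band (controller logs, port-forward, etc.) and saved as `cert.pem`:

```sh
# Build the Secret manifest locally, without touching the cluster
# (older kubectl uses plain --dry-run instead of --dry-run=client)
kubectl create secret generic mysecret \
  --from-literal=password=hunter2 \
  --dry-run=client -o json > mysecret.json

# Seal it against the locally saved cert - again, no cluster access needed
kubeseal --cert cert.pem < mysecret.json > mysealedsecret.json
```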
Same issue here.
In case anyone else comes across this issue: besides all the tremendously useful answers above, here is a practical summary of what to look for in order to get this going. The first catch is that the fix (#126, the client-go bump) is only on master and has not shipped in a release. The second catch is that the released kubeseal binaries therefore still carry the old client-go and still fail against EKS. So in short, you may have to clone the repo and build kubeseal yourself.
So I had to build kubeseal from master and run the freshly built binary with `--fetch-cert` to get the cert.
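Roughly, the steps look like this (a working Go toolchain is assumed; depending on the checkout's vintage, the repo's own Makefile may be the more reliable route):

```sh
# Build kubeseal from master, which already vendors client-go v7.0
git clone https://github.com/bitnami-labs/sealed-secrets.git
cd sealed-secrets
go build ./cmd/kubeseal

# The new binary understands the EKS exec-plugin auth
./kubeseal --fetch-cert > cert.pem
```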
As an extra comment, I got so involved in debugging this that I almost forgot what the original issue was.
Again, by cloning and compiling the kubeseal binary myself (which includes the bump to v0.7.0 of client-go), I was then able to seal a secret as normal.
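For illustration, "as normal" here means sealing straight against the cluster, with no `--cert` workaround (file names assumed):

```sh
# The self-built binary can authenticate to EKS, so this just works
./kubeseal < mysecret.json > mysealedsecret.json
```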
@alejandrox1 When can we expect an official release?
Would be pretty cool if there could be a new release cut with #126 included, to resolve the issues with fetching the cert w/ kubectl proxy. This also affects encrypting secrets (you must download the cert locally and pass it with `--cert`). I'm now writing a workaround into my wrapper to use port-forwarding to fetch the cert, save it locally, use it for encryption, and then remove the cert, instead of just being able to run kubeseal directly. This did work as expected when I was using a cluster with public IPs. Now that I am using a cluster with private IPs, it does not, and I have to do a couple of extra steps.
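The wrapper logic amounts to something like this (the service name, namespace, and the controller's `/v1/cert.pem` endpoint are assumptions based on a default install):

```sh
# Port-forward to the controller, grab the cert, seal, then clean up
kubectl -n kube-system port-forward svc/sealed-secrets-controller 8080:8080 &
PF_PID=$!
sleep 2   # crude wait for the tunnel to come up
curl -s http://localhost:8080/v1/cert.pem > cert.pem
kubeseal --cert cert.pem < mysecret.json > mysealedsecret.json
kill "$PF_PID"
rm cert.pem
```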
@mbelang I don't know 😅 I'm not actually a maintainer of this project.
@jgrabenstein do you use kubeseal the same way you use kubectl (or any other Kubernetes client)?
Getting this error when running on EKS, Kubernetes version 1.12. Does anyone have an update?
If you use the git repo from master it works. They haven't done a release in a long time. Is this project still being maintained?
These are regional, private (no external/public IPs) GKE clusters on a shared VPC. To work around that, I use port-forwarding to fetch the cert. Not sure my company would appreciate me sharing the source code for my wrapper, but if people here think it would help them I could possibly see if we could contribute it. I think ultimately this project needs some decision/leadership around its future - this is something that could be fixed in kubeseal itself.
Going to cut a new release soon.
Because I have Weave CNI installed in my EKS cluster, I seem to have broken the ability for `kubeseal --fetch-cert` to reach the controller. It seems like the API server can no longer proxy through to the controller pod in my setup.
Currently the workaround that I have gotten to work is:
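Presumably along these lines; the `/v1/cert.pem` endpoint is the same one `--fetch-cert` hits through the API server proxy (service name and namespace assumed from the default install):

```sh
# Terminal 1: forward a local port straight to the controller
kubectl -n kube-system port-forward svc/sealed-secrets-controller 8080:8080

# Terminal 2: fetch the public key directly, bypassing the apiserver proxy
curl http://localhost:8080/v1/cert.pem > cert.pem
```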
This can also be done in a single shell using something like the following; this was useful to me as I am trying to retrieve the public key within a CI pipeline.
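A single-shell sketch of the same idea (the sleep is a crude wait for the tunnel; the service name is assumed):

```sh
# Background the port-forward, fetch the cert, then tear the tunnel down
kubectl -n kube-system port-forward svc/sealed-secrets-controller 8080:8080 &
PF_PID=$!
sleep 3
curl -s http://localhost:8080/v1/cert.pem > cert.pem
kill "$PF_PID"
```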
Just posting this here in case it is useful to anyone else.
Would it make sense if the sealed-secrets controller posted the public key into a ConfigMap?
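If it did, consumers could read the key with plain kubectl. This is entirely hypothetical - the ConfigMap name and data key below are invented, since no such feature exists in the controller as discussed here:

```sh
# Hypothetical: read the public key out of a controller-maintained ConfigMap
kubectl -n kube-system get configmap sealed-secrets-cert \
  -o jsonpath='{.data.cert\.pem}' > cert.pem
```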
EKS authentication needs an updated client-go version. Ref: bitnami-labs/sealed-secrets#110
Getting the following error when trying to use `kubeseal` with EKS. I am able to interact with the cluster normally using `kubectl`.