Switched to k8s 1.18.0 as attempt to fix issue #25 #26
Conversation
Thanks for your pull request. Before we can look at your pull request, you'll need to sign a Contributor License Agreement (CLA). 📝 Please follow instructions at https://git.k8s.io/community/CLA.md#the-contributor-license-agreement to sign the CLA. It may take a couple minutes for the CLA signature to be fully registered; after that, please reply here with a new comment and we'll verify. Thanks.
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository. I understand the commands that are listed here.
Welcome @petermicuch!
I just tested this now and it works. The volume gets created; after PVC deletion the volume is removed and the respective folder is marked as archived. I have to check with my company whether it is fine to sign the CLA.
I have to do a small internal training in my company to be able to sign the CLA, so it will take a week or so before I can sign it (I hope not more). If anyone is willing to create another PR bringing in these changes, or even to bump directly to k8s 1.20.0, I am fine with that. I took the client-go versions as I saw them in go.mod originally.
Thank you so much for the PR! It fixed our cluster for now. For anyone still using the (unfortunately deprecated, with no replacement available) helm chart, this image works:

  repository: rkevin/nfs-subdir-external-provisioner
  tag: fix-k8s-1.20

This image is built directly on this PR. It is a temporary fix until this PR is merged upstream.
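In chart-values form, those two lines would sit under the image key. A minimal sketch, assuming the value layout of the deprecated stable/nfs-client-provisioner chart (the file name values-override.yaml is hypothetical):

  # values-override.yaml (hypothetical name); value paths assume the
  # deprecated stable/nfs-client-provisioner chart layout
  image:
    repository: rkevin/nfs-subdir-external-provisioner
    tag: fix-k8s-1.20

This could then be applied with something like helm install my-release stable/nfs-client-provisioner -f values-override.yaml, assuming the old stable chart repo is still reachable.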
Almost there with the CLA signing. My company representative will hopefully sign the corporate CLA today.
CLA signed. Please check.
@kmova can this one be approved before #29 or will this come after? And do you know if @ashishranjan738 is around for reviews, or could you help review this one?
This looks good to me, @petermicuch. I am waiting on @jackielii to confirm that the incorporated changes are good. Yes, I will hold off on #29 till this is merged. I also need to work on a few updates to the Dockerfile in that PR.
/lgtm
@jackielii, @kmova, is this now good to go? I see #29 has already been merged to master.
/approve
[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: jsafrane, petermicuch

The full list of commands accepted by this bot can be found here. The pull request process is described here.
Needs approval from an approver in each of these files:
Approvers can indicate their approval by writing /approve in a comment.
I do not use helm. What does this translate to in the deployment-arm.yaml definition?
Try changing the image on line 24 to rkevin/nfs-subdir-external-provisioner:fix-k8s-1.20, the image mentioned above.
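For reference, a hypothetical sketch of what that part of deployment-arm.yaml might look like after the change; the container name and surrounding structure are assumptions based on a standard Deployment spec, not the exact file:

  # hypothetical excerpt; check your deployment-arm.yaml for the real layout
  spec:
    template:
      spec:
        containers:
          - name: nfs-client-provisioner
            # stock image reference replaced with the patched one from this thread
            image: rkevin/nfs-subdir-external-provisioner:fix-k8s-1.20

Note that the next comment provides a multi-arch build, which may be the better choice on ARM.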
I created a fork of this repository and used the patch from @petermicuch to create a multi-arch image of the nfs-subdir-external-provisioner. It works with Kubernetes 1.20.x on ARMv8 and AMD64. (I also copied the deprecated stable helm chart, to have a backup until a new official chart is available => https://github.com/groundhog2k/helm-charts/tree/master/charts/nfs-client-provisioner)
Thank you @groundhog2k! I'm just getting my feet wet with kubernetes (just testing via helmfile and k3d ATM), transitioning my home server from a docker-compose setup. I appreciate the multi-arch image as well, as I'm thinking I may pick up some Raspberry Pis to start distributing my stuff across multiple hosts, while keeping my AMD machine for GPU workloads.
Guys, thank you so much for this workaround; it worked instantly.
I believe there is an official chart available now.
This is my very first attempt to work with Go at all, so it definitely needs someone more experienced to check. I only installed Go today to fix this.
As right now I have no k8s cluster running version 1.20, I can only test it tomorrow. My main goal was to update the dependencies and interfaces so that this runs against k8s 1.18. With that, the problem described in issue #25 should not be present. Actually, already version 1.16 should do, according to this post.
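For context, a rough sketch of the kind of go.mod changes such a dependency bump involves; the exact module versions below are assumptions inferred from the PR title (k8s 1.18.0), so check the actual PR diff for what was really used:

  require (
      // assumed versions matching k8s 1.18.0; see the PR diff for the real ones
      k8s.io/api v0.18.0
      k8s.io/apimachinery v0.18.0
      k8s.io/client-go v0.18.0
  )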