Restore in GKE is not working as expected, folder called "mount" is created and all the content is restored inside this folder #5149
Comments
I've followed your steps in the backup & restore Redis example, and there is no `mount` directory after I restored the whole Redis cluster. Are there some steps I'm missing when reproducing? @mcortinas
Seriously?! I tried twice with Redis, and I also checked once with MySQL, and I saw the `mount` folder every time... I always did this restoring from one gcp_project/GKE to another gcp_project/GKE using the same GCS bucket with Restic... maybe I'm doing something wrong. I shared all the logs in this issue; could you check the logs to see whether anything looks wrong when restoring with Restic? Do you know if there is anything more I can share from my side?
@mcortinas it's really strange. I also created a flag file, and looking through the log you provided, I cannot find anything about creating a `mount` directory.
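(For reference, the kind of check this amounts to; the pod name and mount path below are placeholders, not values from the original thread:)

```bash
# Before the backup: drop a flag file at the volume root in the source pod.
kubectl -n redis-restic exec <redis-pod> -- touch <volume-mount-path>/velero-flag

# After the restore: see where the flag file landed; if it shows up under
# "mount/", the restore nested the volume content one level down.
kubectl -n redis-restic exec <redis-pod> -- find <volume-mount-path> -name velero-flag
```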
Maybe it matters that in my scenario the origin and the target are different GCP projects and GKE clusters, both using the same GCS bucket and the same IAM roles; maybe this is the difference...
@mcortinas
@mcortinas The PV is mounted to a sub-directory called `mount`.
@mcortinas Velero Restic doesn't account for that; it still goes for the volume directory itself rather than the `mount` sub-directory. I think your case is that the backup was taken on a k8s cluster that enabled CSI migration (or used a CSI volume), and the restore went into a k8s cluster that hadn't enabled CSI migration yet. I am working on a fix, and it should be included in Velero v1.10.
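For reference, a sketch of the two kubelet host-path layouts involved, as I understand them (the pod UID and volume name are placeholders, and exact paths can vary by Kubernetes version):

```bash
# In-tree GCE PD volume: the pod's data sits directly in the volume directory.
ls /var/lib/kubelet/pods/<pod-uid>/volumes/kubernetes.io~gce-pd/<volume-name>/
# <pod data files>

# CSI (or CSI-migrated) volume: the same data sits one level down, under "mount/".
ls /var/lib/kubelet/pods/<pod-uid>/volumes/kubernetes.io~csi/<volume-name>/
# mount/  vol_data.json
```

Backing up from the CSI volume directory and restoring into an in-tree volume directory would explain the stray `mount` folder.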
Let's make sure this is fixed in v1.9.1.
Some documents related to CSI provisioning and CSI migration:
- CSI migration design, updated to add the annotation `pv.kubernetes.io/migrated-to`: persistent-volume-controller
- PVs dynamically provisioned by an in-tree volume plugin while CSIMigration is on have no `spec.csi`: Dynamically Provisioned Volumes
- The annotation `pv.kubernetes.io/provisioned-by`: volume provisioning
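A quick way to check which case a given PV falls into (the PV name is a placeholder):

```bash
# Dump the PV's annotations; look for pv.kubernetes.io/migrated-to
# (CSI migration) and pv.kubernetes.io/provisioned-by (the provisioner).
kubectl get pv <pv-name> -o jsonpath='{.metadata.annotations}{"\n"}'

# A PV dynamically provisioned by an in-tree plugin while CSIMigration is on
# has an empty spec.csi:
kubectl get pv <pv-name> -o jsonpath='{.spec.csi}{"\n"}'
```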
Test case:

```yaml
---
apiVersion: v1
kind: Namespace
metadata:
  name: test
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-pvc
  namespace: test
spec:
  storageClassName: standard
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
```
deploy.yaml:
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-app
  namespace: test
spec:
  selector:
    matchLabels:
      app: hello-app
  template:
    metadata:
      labels:
        app: hello-app
    spec:
      containers:
        - name: hello-app
          image: nginx
          args: [ "sleep", "3600" ]
          volumeMounts:
            - name: sdk-volume
              mountPath: /usr/share/hello/
            - name: empty
              mountPath: /usr/share/empty/
      volumes:
        - name: sdk-volume
          persistentVolumeClaim:
            claimName: my-pvc
        - name: empty
          emptyDir: {}
```

Install and run:

```bash
wget https://github.com/vmware-tanzu/velero/releases/download/v1.9.1-rc.2/velero-v1.9.1-rc.2-linux-amd64.tar.gz
tar zxvf velero-v1.9.1-rc.2-linux-amd64.tar.gz
cp velero-v1.9.1-rc.2-linux-amd64/velero /usr/local/bin/

velero install \
  --provider gcp \
  --bucket jxun \
  --secret-file ~/Documents/credentials-velero-gcp \
  --image velero/velero:v1.9.1-rc.2 \
  --plugins velero/velero-plugin-for-gcp:v1.5.0 \
  --use-restic

velero backup create restic-csi-migration --include-namespaces=test --default-volumes-to-restic
velero restore create --from-backup restic-csi-migration --namespace-mappings=test:test1
```
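To verify the result (my own suggested check, using the names from the test case above), list the restored volume's root in the mapped namespace and see whether a stray `mount/` entry appears:

```bash
# Wait for the restored deployment in the mapped namespace, then list the
# restored PV's root; a "mount/" entry there reproduces the bug.
kubectl -n test1 wait --for=condition=available deploy/hello-app --timeout=120s
kubectl -n test1 exec deploy/hello-app -- ls -la /usr/share/hello/
```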
Hi, this sounds great! Thank you, @blackpiglet!
I started working on this in the Slack community; let me share the thread: https://kubernetes.slack.com/archives/C6VCGP4MT/p1658307341547949
Previous related issues are in this other Slack thread: https://kubernetes.slack.com/archives/C6VCGP4MT/p1658003621239549
What steps did you take and what happened:
Basically, I'm taking a backup in one GKE cluster (K8s on GCP) in one GCP project and trying to restore it in another GCP project.
I back up and restore a single k8s namespace; mainly I want to restore Redis and Elasticsearch.
Origin GKE cluster, in one GCP project:

```bash
velero backup create redis-restic --include-namespaces redis-restic -n velero-restic
```

Target GKE cluster, in another GCP project:

```bash
velero restore create --from-backup redis-restic -n velero-restic
```
Both GKE clusters share the same GCS bucket and the same installation procedure, described below.
I saw this bad behavior in two examples, Redis and MariaDB Galera:
Example 1: redis
Example 2: mariadb-galera
What did you expect to happen:
Restore all the objects in the namespace, and also restore each PV from the restic repository respecting the same hierarchy as the source; that means restoring all the content at the root path of the PV mounted in the pod, NOT inside a created folder called `mount`. Let me share a screenshot of one of the Redis pods; this screenshot describes my issue very well.
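As an illustration of the difference (the pod name and volume mount path are placeholders, not exact values from my environment):

```bash
# Expected: the restored content sits at the volume root.
kubectl -n redis-restic exec <redis-pod> -- ls <volume-mount-path>/
# <redis data files>

# Actual: a single "mount" folder at the root, with all content nested inside.
kubectl -n redis-restic exec <redis-pod> -- ls <volume-mount-path>/
# mount/
```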
Environment:
- Velero version (use `velero version`): Client: Version: v1.9.0, Git commit: 6021f148c4d7721285e815a3e1af761262bff029; Server: Version: v1.9.0
- Velero features (use `velero client config get features`): features: <NOT SET>
- Kubernetes version (use `kubectl version`): Client Version: version.Info{Major:"1", Minor:"24", GitVersion:"v1.24.3", GitCommit:"aef86a93758dc3cb2c658dd9657ab4ad4afc21cb", GitTreeState:"clean", BuildDate:"2022-07-13T14:30:46Z", GoVersion:"go1.18.3", Compiler:"gc", Platform:"linux/amd64"}; Kustomize Version: v4.5.4; Server Version: version.Info{Major:"1", Minor:"21", GitVersion:"v1.21.12-gke.1500", GitCommit:"6c11aec6ce32cf0d66a2631eed2eb49dd65c89f8", GitTreeState:"clean", BuildDate:"2022-05-11T09:25:37Z", GoVersion:"go1.16.15b7", Compiler:"gc", Platform:"linux/amd64"}. WARNING: version difference between client (1.24) and server (1.21) exceeds the supported minor version skew of +/-1.
- Kubernetes installer & version: Google Kubernetes Engine
- Velero install command:

```bash
velero install \
  --use-restic \
  --provider gcp \
  --plugins velero/velero-plugin-for-gcp:v1.5.0 \
  --namespace velero-restic \
  --bucket edo-platform-lab01-velero-marc1 \
  --use-volume-snapshots=false \
  --default-volumes-to-restic \
  --secret-file ./credentials-velero
```

- OS (e.g. from `/etc/os-release`):
  - Redis pods: PRETTY_NAME="Debian GNU/Linux 11 (bullseye)", NAME="Debian GNU/Linux", VERSION_ID="11", VERSION="11 (bullseye)"
  - K8s nodes: VERSION v1.20.12-gke.1500, OS-IMAGE Container-Optimized OS from Google, KERNEL-VERSION 5.4.144+, CONTAINER-RUNTIME docker://20.10.3
Restore logs:
Let me attach the bundle file from:

```bash
velero debug --backup redis --restore redis-20220725112348 -n velero-restic
```

bundle-2022-07-25-11-31-31.tar.gz