Support creating backups in kind clusters #4962
Comments
Please add the --default-volumes-to-restic option to your backup command.
I ran the backup with that option, but the log output doesn't appear any different:
Based on this block in item_backupper.go, it looks like Restic is used only when Pods are in the list of resource types to be backed up:

var (
    backupErrs            []error
    pod                   *corev1api.Pod
    resticVolumesToBackup []string
)

if groupResource == kuberesource.Pods {
    // pod needs to be initialized for the unstructured converter
    pod = new(corev1api.Pod)
    if err := runtime.DefaultUnstructuredConverter.FromUnstructured(obj.UnstructuredContent(), pod); err != nil {
        backupErrs = append(backupErrs, errors.WithStack(err))
        // nil it on error since it's not valid
        pod = nil
    } else {
        // Get the list of volumes to back up using restic from the pod's annotations. Remove from this list
        // any volumes that use a PVC that we've already backed up (this would be in a read-write-many scenario,
        // where it's been backed up from another pod), since we don't need >1 backup per PVC.
        for _, volume := range restic.GetPodVolumesUsingRestic(pod, boolptr.IsSetToTrue(ib.backupRequest.Spec.DefaultVolumesToRestic)) {
            if found, pvcName := ib.resticSnapshotTracker.HasPVCForPodVolume(pod, volume); found {
                log.WithFields(map[string]interface{}{
                    "podVolume": volume,
                    "pvcName":   pvcName,
                }).Info("Pod volume uses a persistent volume claim which has already been backed up with restic from another pod, skipping.")
                continue
            }
            resticVolumesToBackup = append(resticVolumesToBackup, volume)
        }
        // track the volumes that are PVCs using the PVC snapshot tracker, so that when we backup PVCs/PVs
        // via an item action in the next step, we don't snapshot PVs that will have their data backed up
        // with restic.
        ib.resticSnapshotTracker.Track(pod, resticVolumesToBackup)
    }
}
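For reference, here is a rough sketch of the per-pod volume selection: with --default-volumes-to-restic every pod volume becomes a candidate, otherwise only volumes listed in the opt-in backup.velero.io/backup-volumes annotation are considered. The helper below is an illustration written for this issue, not the actual GetPodVolumesUsingRestic code, and the comma-separated parsing of the annotation value is an assumption.

package main

import (
    "fmt"
    "strings"

    corev1 "k8s.io/api/core/v1"
)

// Opt-in annotation read when --default-volumes-to-restic is not set.
const volumesToBackupAnnotation = "backup.velero.io/backup-volumes"

// podVolumesForRestic illustrates the selection behavior described above:
// with defaultToRestic set, every volume in the pod spec is a candidate;
// otherwise only the volumes named in the opt-in annotation are returned.
func podVolumesForRestic(pod *corev1.Pod, defaultToRestic bool) []string {
    if defaultToRestic {
        names := make([]string, 0, len(pod.Spec.Volumes))
        for _, v := range pod.Spec.Volumes {
            names = append(names, v.Name)
        }
        return names
    }
    raw := pod.Annotations[volumesToBackupAnnotation]
    if raw == "" {
        return nil
    }
    return strings.Split(raw, ",")
}

func main() {
    pod := &corev1.Pod{}
    pod.Annotations = map[string]string{volumesToBackupAnnotation: "data,config"}
    fmt.Println(podVolumesForRestic(pod, false)) // [data config]
}

Either way, this selection only runs when the item being backed up is a Pod, which is why limiting --include-resources to PersistentVolumeClaims and PersistentVolumes skips it entirely.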
Now I see some of the expected log output:
If someone could offer me a bit of guidance, I would be happy to contribute a PR for this!
@jsanda Yes, the restic backup is formally called PodVolumeBackup, so it works on pods; this is the expected behavior. To get back to the original problem of this issue:
In practice, we don't use issues to discuss development or PRs. If you want to contribute to Velero, please join the "velero-dev" Slack channel. Thanks.
I am not interested in supporting hostPath in general, only for the purposes of dev/testing. This isn't exclusive to kind. If I were to use k3d for example, I would hit this same issue as it also uses local-path-provisioner. Other volumes do exist under
#3053 was closed as a duplicate of #2767. Here's why I think this is a different issue. It seems to me that the restic server needs volume mounts for both the kubelet pods directory and the local-path-provisioner data directory. If my analysis is anywhere near correct, I will happily move the discussion to the velero-dev channel :) If my analysis is wrong, then it doesn't look like I will be able to use velero with kind :(
@jsanda Got you. The problem is similar to #3053/#2767 in that it requires supporting a non-standard mount path. However, it is more complicated than those and cannot be solved by their solution:
Therefore, Velero needs to support searching /var/lib/kubelet/pods and /var/local-path-provisioner at the same time for Restic backup. The requirement is clear now, so we can keep this issue open.
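To make that concrete, here is a minimal sketch of what "search both locations" could look like from inside the restic pod, assuming both host paths are mounted into it. The glob patterns (the kubelet per-pod layout and local-path-provisioner's <pv>_<namespace>_<pvc> directory naming, visible in the PV spec quoted below) are illustrative assumptions, not Velero's actual lookup logic.

package main

import (
    "fmt"
    "path/filepath"
)

// findVolumeData probes both host directories named above for a PV's data and
// returns the first match. The directory layouts assumed here are for
// illustration only.
func findVolumeData(podUID, pvName string) (string, error) {
    patterns := []string{
        // kubelet's per-pod volume layout (what the restic daemonset mounts today)
        filepath.Join("/var/lib/kubelet/pods", podUID, "volumes", "*", pvName),
        // local-path-provisioner's default data dir with its <pv>_<namespace>_<pvc> naming
        filepath.Join("/var/local-path-provisioner", pvName+"_*"),
    }
    for _, pattern := range patterns {
        matches, err := filepath.Glob(pattern)
        if err != nil || len(matches) == 0 {
            continue
        }
        return matches[0], nil
    }
    return "", fmt.Errorf("no data directory found for PV %s", pvName)
}

func main() {
    // placeholder pod UID; PV name taken from the example later in this thread
    dir, err := findVolumeData("00000000-0000-0000-0000-000000000000", "pvc-b687054c-0ce8-4443-aeed-0cdacc7fcf86")
    if err != nil {
        fmt.Println(err)
        return
    }
    fmt.Println(dir)
}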
Just to add to this: local-path-provisioner itself supports dynamic mapping based on the node (https://github.com/rancher/local-path-provisioner#customize-the-configmap), so you cannot simply rely on /var/local-path-provisioner. Having said that, is there a reason we cannot rely on the persistent volume data in the case of hostPath? For example, this is an excerpt from one of my PVs which I would like to back up:

apiVersion: v1
kind: PersistentVolume
metadata:
  annotations:
    pv.kubernetes.io/provisioned-by: cluster.local/local-path-provisioner
  creationTimestamp: "2023-02-02T22:49:03Z"
  finalizers:
  - kubernetes.io/pv-protection
  name: pvc-b687054c-0ce8-4443-aeed-0cdacc7fcf86
  resourceVersion: "313243510"
  uid: 232120db-6396-4dee-a598-89526910d771
spec:
  accessModes:
  - ReadWriteOnce
  capacity:
    storage: 5Gi
  claimRef:
    apiVersion: v1
    kind: PersistentVolumeClaim
    name: home-assistant
    namespace: hass
    resourceVersion: "313243426"
    uid: b687054c-0ce8-4443-aeed-0cdacc7fcf86
  hostPath:
    path: /4t/k8s-data/local-path-prov/pvc-b687054c-0ce8-4443-aeed-0cdacc7fcf86_hass_home-assistant
    type: DirectoryOrCreate
As you can see, the file location is clearly defined in the PV's hostPath spec.
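A minimal client-go sketch of that suggestion, resolving the data directory from the PV object itself rather than assuming a fixed provisioner path (the kubeconfig handling and PV name below are placeholders):

package main

import (
    "context"
    "fmt"

    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/tools/clientcmd"
)

// hostPathForPV looks up a PV and returns spec.hostPath.path, i.e. the
// directory a file-level backup would need to read on the node.
func hostPathForPV(client kubernetes.Interface, pvName string) (string, error) {
    pv, err := client.CoreV1().PersistentVolumes().Get(context.TODO(), pvName, metav1.GetOptions{})
    if err != nil {
        return "", err
    }
    if pv.Spec.HostPath == nil {
        return "", fmt.Errorf("PV %s is not a hostPath volume", pvName)
    }
    return pv.Spec.HostPath.Path, nil
}

func main() {
    config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
    if err != nil {
        panic(err)
    }
    client, err := kubernetes.NewForConfig(config)
    if err != nil {
        panic(err)
    }
    path, err := hostPathForPV(client, "pvc-b687054c-0ce8-4443-aeed-0cdacc7fcf86")
    if err != nil {
        panic(err)
    }
    fmt.Println(path) // e.g. /4t/k8s-data/local-path-prov/pvc-..._hass_home-assistant
}

The catch is that the restic pod would still need the resolved directory (or one of its parents) mounted from the host, and with the per-node path mapping mentioned above that parent can differ per node.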
@alekc
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions. |
This issue is stale because it has been open 60 days with no activity. Remove stale label or comment or this will be closed in 14 days. If a Velero team member has requested log or more information, please provide the output of the shared commands. |
This issue was closed because it has been stalled for 14 days with no activity. |
Describe the problem/challenge you have
I am unable to back up PVs in kind clusters. I understand that Restic backups are not supported out of the box with hostPath volumes. The project I work on (as well as many others) uses kind clusters extensively for development and testing. kind uses local-path-provisioner for dynamic volume provisioning, and these are hostPath volumes.
I have Velero installed with the restic daemonset, and I set up the MinIO storage provider. Here is an example command I used to create a backup:
velero backup create backup-4 --include-namespaces k8ssandra-operator --include-resources PersistentVolumeClaims,PersistentVolumes --snapshot-volumes=false
The backup completes without error, but the contents of the PV are not backed up.
Here is the relevant part of the logs:
I was expecting the warning in backupper.go about hostPath volumes not being supported to be logged.
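For reference, the kind of check that warning would come from looks conceptually like the sketch below. This is a simplified illustration written for this report (it only covers volumes declared directly as hostPath in the pod spec, not PVC-backed hostPath PVs), not the actual backupper.go code.

package main

import (
    "fmt"

    "github.com/sirupsen/logrus"
    corev1 "k8s.io/api/core/v1"
)

// skipHostPathVolumes drops pod volumes that are declared directly as hostPath
// from the restic backup list, logging a warning for each one it skips.
func skipHostPathVolumes(log logrus.FieldLogger, pod *corev1.Pod, volumeNames []string) []string {
    byName := map[string]corev1.Volume{}
    for _, v := range pod.Spec.Volumes {
        byName[v.Name] = v
    }
    var kept []string
    for _, name := range volumeNames {
        if v, ok := byName[name]; ok && v.HostPath != nil {
            log.WithField("volume", name).Warn("hostPath volumes are not supported for restic backup, skipping")
            continue
        }
        kept = append(kept, name)
    }
    return kept
}

func main() {
    pod := &corev1.Pod{
        Spec: corev1.PodSpec{
            Volumes: []corev1.Volume{{
                Name:         "data",
                VolumeSource: corev1.VolumeSource{HostPath: &corev1.HostPathVolumeSource{Path: "/tmp/data"}},
            }},
        },
    }
    fmt.Println(skipHostPathVolumes(logrus.New(), pod, []string{"data"})) // prints [] since the only volume is skipped
}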
Here is the spec of the PV to confirm it is a hostPath volume:
Describe the solution you'd like
I would like backups to be supported with kind clusters. We don't use kind for production deployments, but we use it extensively for dev/testing. I have been working on a prototype to integrate velero into my project. I did my initial dev/testing with GKE. Now I am blocked since backups don't work with kind.
Anything else you would like to add:
See #3053 for related discussion.
Environment:
- Velero version (use velero version): v1.8.1
- Kubernetes version (use kubectl version): 1.22.7
- OS (e.g. from /etc/os-release): Ubuntu 21.10

Vote on this issue!
This is an invitation to the Velero community to vote on issues. You can see the project's top-voted issues listed here.
Use the "reaction smiley face" up to the right of this comment to vote.