
Improve persistence of IPAM data #2610

Closed
awh opened this issue Nov 8, 2016 · 6 comments

Comments

@awh
Contributor

awh commented Nov 8, 2016

From @bboreham on October 6, 2016 9:30

Currently weave-daemonset.yaml specifies an emptyDir volume, which is deleted when the pod is deleted. Since upgrading weave-kube requires deleting the pods, this is unsafe.

Reading http://kubernetes.io/docs/user-guide/volumes/, I don't see an obviously better choice.

Copied from original issue: weaveworks-experiments/weave-kube#27
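
For context, the emptyDir volume in question looks roughly like the following fragment (a minimal sketch only; the volume name and mount path are illustrative, not copied from the actual manifest):

      containers:
        - name: weave
          volumeMounts:
            - name: weavedb
              mountPath: /weavedb
      volumes:
        - name: weavedb
          # emptyDir storage lives and dies with the pod, so any IPAM data
          # stored here is lost whenever the pod is deleted, e.g. on upgrade.
          emptyDir: {}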

@awh
Contributor Author

awh commented Nov 8, 2016

From @Bregor on October 6, 2016 10:5

I think a regular hostPath will suit this as well.
Example:

      volumeMounts:
        - mountPath: /weave_data
          name: data
...
  volumes:
    - name: data
      hostPath:
        path: /var/lib/weave-kube

@awh
Contributor Author

awh commented Nov 8, 2016

From @bboreham on October 6, 2016 10:7

Sure, but is there a hostPath we can guarantee to be able to write to on every Linux distro?

CoreOS, for instance, mounts loads of things read-only.

@awh
Contributor Author

awh commented Nov 8, 2016

From @Bregor on October 6, 2016 10:9

CoreOS mounts /usr read-only, but /var and /opt are writable.

@awh
Contributor Author

awh commented Nov 8, 2016

@awh
Contributor Author

awh commented Nov 8, 2016

From @Bregor on October 12, 2016 9:38

PetSet without a "normal" network fs is bloody hell. What I mean is that a regular hostPath will be much better for this task.

@bboreham
Contributor

bboreham commented Feb 8, 2017

I changed this from a feature to a bug because we have more reports of trouble after deleting all Weave pods.

Note that we need to give some thought to whether a user would ever need to completely clear out the data once we make it more persistent, and if so, how they would do that. In other words, what is the equivalent of weave reset on Kubernetes?
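
A hypothetical sketch of one option, assuming the data ends up under a node-local hostPath such as /var/lib/weave-kube from the example above, and that the weave DaemonSet itself has already been deleted: run a throwaway DaemonSet that wipes the directory on every node, then delete it again (all names, images and paths here are assumptions for illustration):

    # Rough "weave reset" equivalent for Kubernetes: clears the persisted
    # data on every node, then idles; remove it afterwards with
    #   kubectl delete daemonset weave-reset
    apiVersion: apps/v1
    kind: DaemonSet
    metadata:
      name: weave-reset
    spec:
      selector:
        matchLabels:
          name: weave-reset
      template:
        metadata:
          labels:
            name: weave-reset
        spec:
          containers:
            - name: reset
              image: busybox
              command: ["sh", "-c", "rm -rf /data/* && sleep 3600"]
              volumeMounts:
                - name: weave-data
                  mountPath: /data
          volumes:
            - name: weave-data
              hostPath:
                path: /var/lib/weave-kube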
