
Scaling up local storage #175

Open
NicolasT opened this issue Jul 20, 2018 · 5 comments
Labels
kind:design (Solution design choices) · topic:docs (Documentation) · topic:operations (Operations-related issues) · topic:storage (Issues related to storage)

Comments

@NicolasT
Contributor

From ZENKO-792:

How can we scale storage? MetalK8s deploys on drives using metal_k8s_lvm, defined in kube-node.yml, as stated in https://metal-k8s.readthedocs.io/en/latest/usage/quickstart.html. We need the ability to add more storage to an already deployed MetalK8s and Zenko cluster (probably with a playbook).
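For context, the pre-0.2 quickstart configuration looked roughly like the sketch below. This is a hypothetical fragment: the exact schema changed after 0.2, and the device path is an illustrative assumption, not the documented value.

```yaml
# Hypothetical kube-node.yml group_vars fragment (pre-0.2 schema).
# The device path /dev/vdb is an assumption for illustration only;
# consult the MetalK8s quickstart docs for the exact format.
metal_k8s_lvm:
  drives:
    - /dev/vdb   # disk backing the LVM VG on this node
```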

@NicolasT NicolasT added kind:design Solution design choices topic:storage Issues related to storage topic:operations Operations-related issues labels Jul 20, 2018
@NicolasT
Contributor Author

Note: the description above refers to pre-0.2 configuration which changes with #94 and #153.

Extending storage can be achieved in two ways:

  • Add new disks to the set of disks backing a previously deployed VG, and re-run the storage deployment
  • Add a new storage pool (i.e. a new LVM VG with new disks, define LVs,...), and re-run the storage deployment
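As an illustration of both options (variable names and schema are assumptions here, since the layout was being overhauled by #94/#153), the inventory change might look like:

```yaml
# Illustrative only — the variable name and nesting are assumptions.
metal_k8s_lvm:
  vgs:
    # Option 1: grow an existing VG by listing an extra backing disk
    kubevg:
      drives: ['/dev/vdb', '/dev/vdc']   # /dev/vdc is newly attached
    # Option 2: define a new storage pool (new VG with its own disks)
    datavg:
      drives: ['/dev/vdd']
```

Either way, the storage deployment must then be re-run for the change to take effect.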

Re-running the storage deployment can be achieved in one of the following ways (and possibly others):

  • Re-run the whole deploy playbook
  • Re-run the deploy playbook with --tags storage (note: this is not necessarily a stable 'API' yet)
  • Run the storage-pre and storage-post playbooks (which means we may want to have another playbook, storage, which basically runs those two)
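Concretely, with the quickstart inventory layout these options might be invoked as sketched below; the inventory path comes from the thread, and the storage tag is, as noted, not a stable API.

```shell
# Re-run the whole deploy playbook (it already includes the storage steps)
ansible-playbook -i inventory/quickstart-cluster -b playbooks/deploy.yml

# Or restrict the run to storage tasks (tag name not yet a stable 'API')
ansible-playbook -i inventory/quickstart-cluster -b playbooks/deploy.yml --tags storage
```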

The above is not currently a tested scenario, so we should add it to the test plan at some point and automate it.

@anurag4DSB

Hello Nicolas,

Thank you for responding to this.
So, is this what you are suggesting:

```shell
ansible-playbook -i inventory/quickstart-cluster -b playbooks/deploy.yml
ansible-playbook -i inventory/quickstart-cluster -b playbooks/storage-pre.yml
ansible-playbook -i inventory/quickstart-cluster -b playbooks/storage-post.yml
```

Can you also give an example of adding new metal_k8s_lvm files?

@NicolasT
Contributor Author

No, not exactly (although it shouldn't do any harm): the deploy playbook already runs storage-pre and storage-post. The list of options I presented was

one of the following ways

W.r.t. adding new VGs: this is currently being overhauled completely, so we'll work on documenting the feature once (or rather, after) the changes land (cf. #153).

@NicolasT NicolasT added this to the MetalK8s 1.1.0 milestone Aug 8, 2018
@NicolasT NicolasT added the topic:docs Documentation label Sep 24, 2018
@NicolasT
Contributor Author

This is being documented in #400 / #385.

@gdemonet gdemonet added the legacy Anything related to MetalK8s 1.x label Feb 4, 2020
@thomasdanan thomasdanan removed the legacy Anything related to MetalK8s 1.x label Apr 7, 2020
@thomasdanan thomasdanan removed this from the MetalK8s 1.1.0 milestone Apr 7, 2020
@thomasdanan
Contributor

#1997
