
merge pmem-ns-init and pmem-vgm into pmem-csi-driver #532

Closed
okartau opened this issue Feb 18, 2020 · 6 comments · Fixed by #688
Assignees
avalluri

Labels
0.8 (needs to be fixed in 0.8.x)

Comments


okartau commented Feb 18, 2020

The information shown by those setup stages (which run in LVM mode only) can be helpful for debugging issues and monitoring the state of the storage devices.

okartau self-assigned this Feb 18, 2020

okartau commented Feb 18, 2020

If I understand correctly, the output currently shown is collected from the running driver pods using e2e framework methods. Since pmem-ns-init and pmem-vgm produce output only during pod startup, not while the tests run, does it make sense to skip the e2e framework for this and instead add separate commands:

  • kubectl logs PODNAME pmem-ns-init
  • kubectl logs PODNAME pmem-vgm

to the testing script after pod creation, executed only in LVM-mode cases?


pohly commented Feb 18, 2020

Where would you put this? The pods have not necessarily completed their startup yet after "make start".

We could add it to WaitForPMEMDriver in test/e2e/e2e.go.
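For illustration, fetching those logs from the e2e suite could look roughly like this; a minimal sketch using client-go, where the helper name, namespace handling, and output format are placeholders rather than the actual PMEM-CSI test code:

```go
// Sketch only: dump the init container logs once the driver pods are up.
// Assumes a client-go clientset and the driver namespace are available;
// the container names match the init containers used in LVM mode.
package e2e

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/client-go/kubernetes"
)

func dumpInitContainerLogs(ctx context.Context, c kubernetes.Interface, namespace, pod string) error {
	for _, container := range []string{"pmem-ns-init", "pmem-vgm"} {
		// Request the logs of one specific container in the pod.
		req := c.CoreV1().Pods(namespace).GetLogs(pod, &corev1.PodLogOptions{Container: container})
		logs, err := req.DoRaw(ctx)
		if err != nil {
			return fmt.Errorf("get logs of %s/%s: %v", pod, container, err)
		}
		fmt.Printf("==== %s/%s ====\n%s\n", pod, container, logs)
	}
	return nil
}
```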


pohly commented Feb 18, 2020

Before we add more special cases, let's consider the alternative: we could move the functionality of those separate binaries into the main driver binary itself. This has several advantages:

  • we can get rid of the init containers, which simplifies deployment and debugging
  • container image becomes smaller (there's duplicate code linked into each of the three binaries)

What advantages are there to having three different binaries?
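To make the proposal concrete, here is a rough sketch of what the merged startup could look like in LVM mode; all function and parameter names below are hypothetical placeholders, not the actual PMEM-CSI code:

```go
// Hypothetical sketch: instead of running pmem-ns-init and pmem-vgm as
// init containers, the driver calls the same logic as ordinary functions
// during startup in LVM mode. All names below are placeholders.
package main

import (
	"fmt"
	"log"
)

// Formerly pmem-ns-init: carve PMEM namespaces out of the available regions.
func initNamespaces(pmemPercentage uint) error {
	// ndctl-based namespace setup would go here.
	return nil
}

// Formerly pmem-vgm: create LVM volume groups on top of those namespaces.
func initVolumeGroups() error {
	// LVM volume group setup would go here.
	return nil
}

func main() {
	deviceMode := "lvm" // would come from the driver's command line
	if deviceMode == "lvm" {
		if err := initNamespaces(50); err != nil {
			log.Fatal(fmt.Errorf("namespace setup: %v", err))
		}
		if err := initVolumeGroups(); err != nil {
			log.Fatal(fmt.Errorf("volume group setup: %v", err))
		}
	}
	// ...continue with the normal CSI driver startup...
}
```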


okartau commented Feb 18, 2020

I recall that the very early design was just as you propose, a monolithic one, handling the ns-init and vgm parts as function calls. Then there was the idea to separate the init stages, with reasoning like:

  • modular design, smaller entities
  • let the driver deal with CSI, and the init containers handle separate housekeeping that is not CSI related
  • those init stages are needed only in LVM mode, so in direct mode it is possible to leave those parts out entirely via deployment-level configuration
  • one idea was that in LVM mode only the init part has to deal with libndctl operations (the main driver deals with LVM-level operations), so we thought we could keep the more powerful device-management rights out of the CSI driver that way (not sure this really materialized as we wished...)

I think @avalluri was the main architect of the change to init containers, so he may remember some reasons as well.


pohly commented Feb 18, 2020

If those were the goals, then it's probably time to revisit that decision: if we had different images, then it might have resulted in a smaller non-LVM image (but the LVM image might have become larger as a result of linking the Go runtime into multiple binaries). But we don't, and I don't think that "smaller size" is worth the extra complexity, so let's pick the best solution for the single-image scenario.

pohly added the "future (needs to be fixed in some future release)" label May 18, 2020
pohly changed the title from "e2e testing output does not include logs from pmem-ns-init and pmem-vgm" to "merge pmem-ns-init and pmem-vgm into pmem-csi-driver" Jun 18, 2020
pohly assigned avalluri and unassigned okartau Jun 18, 2020
pohly added the "0.8 (needs to be fixed in 0.8.x)" label and removed the "future (needs to be fixed in some future release)" label Jun 18, 2020
avalluri added a commit to avalluri/pmem-CSI that referenced this issue Jun 22, 2020
As we are using 'shared/Bidirectional' mounts for LVM driver also, we
could create the needed physical devices(namespaces, volume groups) in
the same driver session. This invalidates the usage of init-containers.

Few advantages of merging the init-containers functionality to driver code:
 - smaller container image size
 - allows us to implement features that are common to both modes(ex:
 pmemPercentage)

FIXES intel#532
avalluri commented (quoting the earlier comment):

    I think @avalluri was the main architect of the change to init containers, so he may remember some reasons as well.

The intention behind using init containers dates from when we were not using 'bidirectional' mounts for the driver in LVM mode. But we have since moved to shared/bidirectional mounts for both driver modes, so we can drop those init containers and merge their functionality into the driver's LVM device manager.
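For reference, a shared/bidirectional mount in the driver container spec looks roughly like the following sketch, expressed with the Kubernetes Go API types; the mount name and path are illustrative only, not copied from the PMEM-CSI deployment:

```go
// Sketch: a bidirectional /dev mount for the driver container, which lets
// namespace and volume group devices created inside the container become
// visible on the host (and vice versa).
package deploy

import corev1 "k8s.io/api/core/v1"

func devVolumeMount() corev1.VolumeMount {
	propagation := corev1.MountPropagationBidirectional
	return corev1.VolumeMount{
		Name:             "dev-dir", // illustrative volume name
		MountPath:        "/dev",
		MountPropagation: &propagation,
	}
}
```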

avalluri added a commit to avalluri/pmem-CSI that referenced this issue Jun 23, 2020
avalluri added a commit to avalluri/pmem-CSI that referenced this issue Jun 27, 2020