merge pmem-ns-init and pmem-vgm into pmem-csi-driver #532
The currently shown output is collected from the running driver pods using e2e framework methods, if I understand this correctly. Since pmem-ns-init and pmem-vgm produce output only during pod startup, not while the tests run, does it make sense not to go through the e2e framework for this, but rather to add separate commands in the testing script after pod creation, executed in LVM mode cases only?
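(For illustration, and assuming the init containers keep the names of their binaries, which the comment does not confirm: commands along the lines of `kubectl logs <driver-pod> -c pmem-ns-init` and `kubectl logs <driver-pod> -c pmem-vgm`, run after the pods have been created, would retrieve that startup output, since the `-c` flag of `kubectl logs` selects a specific container within a pod, including init containers.)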
Where would you put this? The pods have not necessarily completed their startup yet after "make start". We could add it to …
Before we add more special cases, let's consider the alternative: we could move the functionality of those separate binaries into the main driver binary itself. This has several advantages. What advantage(s) does having three different binaries give us?
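A minimal sketch of what such a merge could look like. The function names (runNamespaceInit, setupVolumeGroups) and the mode check are hypothetical and only illustrate the idea of running the former init-container stages as function calls during driver startup; they are not the actual pmem-csi code:

```go
package main

import (
	"fmt"
	"log"
)

// Hypothetical placeholders for the logic that previously lived in the
// separate pmem-ns-init and pmem-vgm binaries.
func runNamespaceInit() error  { return nil } // create PMEM namespaces
func setupVolumeGroups() error { return nil } // create LVM volume groups

func startDriver(mode string) error {
	if mode == "lvm" {
		// Formerly the pmem-ns-init init container.
		if err := runNamespaceInit(); err != nil {
			return fmt.Errorf("namespace init: %w", err)
		}
		// Formerly the pmem-vgm init container.
		if err := setupVolumeGroups(); err != nil {
			return fmt.Errorf("volume group setup: %w", err)
		}
	}
	// ... continue with the normal CSI driver startup ...
	return nil
}

func main() {
	if err := startDriver("lvm"); err != nil {
		log.Fatal(err)
	}
}
```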
I recall that the very early design was just as you propose: a monolithic binary handling the ns-init and vgm parts as function calls. Then came the idea to separate the init stages, with reasoning along the lines of separate images and smaller size. I think @avalluri was the main architect of the change to init containers, so he may remember some of the reasons as well.
If those were the goals, then it's probably time to revisit that decision: if we had different images, that might have resulted in a smaller non-LVM image (though the LVM image might have become larger as a result of linking the Go runtime into multiple binaries). But we don't have different images, and I don't think that "smaller size" is worth the extra complexity, so let's pick the best solution for the single-image scenario.
As we are using 'shared/Bidirectional' mounts for the LVM driver as well, we can create the needed physical devices (namespaces, volume groups) in the same driver session. This makes the init containers unnecessary. A few advantages of merging the init-container functionality into the driver code:
- smaller container image size
- allows us to implement features that are common to both modes (e.g., pmemPercentage)

FIXES intel#532
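A hedged illustration of the "features common to both modes" point. The type and field names below are invented for this sketch, not taken from pmem-csi; the idea is only that once both modes live in one binary, an option like pmemPercentage can be handled in one shared place instead of being plumbed through a separate init binary:

```go
package main

import "fmt"

// Hypothetical shared configuration for both device-manager modes.
type DeviceManagerConfig struct {
	PmemPercentage uint64 // percentage of available PMEM to use, 0-100
}

// capacityToUse applies the configured percentage to the total capacity.
func capacityToUse(totalBytes uint64, cfg DeviceManagerConfig) uint64 {
	return totalBytes * cfg.PmemPercentage / 100
}

func main() {
	cfg := DeviceManagerConfig{PmemPercentage: 50}
	fmt.Println(capacityToUse(128<<30, cfg)) // half of 128 GiB
}
```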
The intention behind the init containers dates from when we were not using bidirectional mounts for the driver in LVM mode. But we have since moved to shared/bidirectional mounts for both driver modes, so we can drop those init containers and merge their functionality into the driver's LVM device manager.
The information shown in those setup stages (which run in LVM mode only) can be helpful for debugging issues and monitoring the state of the storage devices.