Releases: piraeusdatastore/piraeus-operator

v1.8.0

15 Mar 14:54

This is the final release of Operator 1.8.0. It includes some quality-of-life improvements and the usual component updates. There were no big changes since the release candidate, so here are the noteworthy changes since 1.7.0 again:

  • Thanks to the contribution of @kvaps, setting up secure communication between Piraeus components is now easier than ever. Take a look at our updated security guide. If you configured SSL before, you need to follow the upgrade guide.
  • The new CSI version 0.18.0 adds support for storing your VolumeSnapshots in S3, including restoring from S3 in case of a complete cluster failure. See the example snapshot class and example volume snapshot, or wait for the inevitable blog post on linbit.com to learn more.

Upgrade notes:

For those using the k8s database backend: on upgrade you will be prompted to create a database backup. LINSTOR 1.17.0 still had a lot of issues with this backend, so having a backup that you can roll back to is important. Once you are ready to upgrade, add

--set IHaveBackedUpAllMyLinstorResources=true

to the upgrade command. This check will be removed in a future release: the Operator will perform the necessary backup and store it in a Kubernetes Secret.
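
For illustration, assuming the operator was installed as a Helm release named piraeus-op from the charts/piraeus directory of this repository (release name, namespace and chart source are assumptions, adjust them to your setup), the full upgrade command could look like this:

# Sketch only: release name, namespace and chart location are assumptions.
helm upgrade piraeus-op ./charts/piraeus \
  --namespace default \
  --set IHaveBackedUpAllMyLinstorResources=true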

Full changes:

Added

  • Allow setting the number of parallel requests created by the CSI sidecars. This limits the load on the LINSTOR
    backend, which could easily overload when creating many volumes at once.
  • Unify certificate format for SSL-enabled installations; no more Java tooling required.
  • Automatic certificate generation using Helm or cert-manager.
  • HA Controller and CSI components now wait for the LINSTOR API to be initialized using InitContainers.

Changed

  • Create backups of LINSTOR resources if the "k8s" database backend is used and an image change is detected. Backups
    are stored in Secret resources as a tar.gz. If the Secret would get too big, the backup can be downloaded from
    the operator pod (see the sketch after this list).
  • Default images:
    • LINSTOR 1.18.0
    • LINSTOR CSI 0.18.0
    • DRBD 9.1.6
    • DRBD Reactor 0.5.3
    • LINSTOR HA Controller 0.3.0
    • CSI Attacher v3.4.0
    • CSI Node Driver Registrar v2.4.0
    • CSI Provisioner v3.1.0
    • CSI Snapshotter v5.0.1
    • CSI Resizer v1.4.0
    • Stork v2.8.2
  • Stork updated to support Kubernetes v1.22+.
  • Satellites no longer have a readiness probe defined. The probe caused issues by repeatedly opening
    unexpected connections to the satellites, especially when using SSL.
  • Only query node devices if a storage pool needs to be created.
  • Use cached storage pool response to avoid causing excessive load on LINSTOR satellites.
  • Protect LINSTOR passphrase from accidental deletion by using a finalizer.
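
As referenced above, backups created for the "k8s" backend end up in a Secret. A rough sketch of retrieving such a backup, assuming a hypothetical Secret named linstor-backup-example with the archive stored under the key backup.tar.gz (check the Secrets in the operator's namespace for the actual names):

# Secret name and data key are hypothetical; adjust to what the operator created.
kubectl get secret linstor-backup-example \
  -o jsonpath='{.data.backup\.tar\.gz}' | base64 -d > linstor-backup.tar.gz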

Breaking

  • If you have SSL configured, then the certificates must be regenerated in PEM format.
    Learn more in the upgrade guide.
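
The upgrade guide is the authoritative reference for the certificate migration. As one hedged sketch, an existing Java keystore can be converted to PEM with standard keytool and openssl commands (file names and store types are placeholders for your existing setup):

# Placeholder file names; adapt to your existing keystores and passwords.
keytool -importkeystore -srckeystore keystore.jks -srcstoretype JKS \
  -destkeystore keystore.p12 -deststoretype PKCS12
openssl pkcs12 -in keystore.p12 -nokeys -out tls.crt
openssl pkcs12 -in keystore.p12 -nocerts -nodes -out tls.key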

v1.8.0-rc.1

24 Feb 17:09
Pre-release

This is the first release candidate for the upcoming Operator 1.8.0 release. It includes some quality-of-life improvements and the usual component updates.

A few things to note:

  • Thanks to the contribution of @kvaps, setting up secure communication between Piraeus components is now easier than ever. Take a look at our updated security guide. If you configured SSL before, you need to follow the upgrade guide.
  • The new CSI version 0.18.0 adds support for storing your VolumeSnapshots in S3, including restoring from S3 in case of a complete cluster failure. See the example snapshot class and example volume snapshot, or wait for the inevitable blog post on linbit.com to learn more.

Full changes:

Added

  • Allow setting the number of parallel requests created by the CSI sidecars. This limits the load on the LINSTOR
    backend, which could easily overload when creating many volumes at once.
  • Unify certificate format for SSL-enabled installations; no more Java tooling required.
  • Automatic certificate generation using Helm or cert-manager.

Changed

  • Create backups of LINSTOR resources if the "k8s" database backend is used and an image change is detected. Backups
    are stored in Secret resources as a tar.gz. If the Secret would get too big, the backup can be downloaded from
    the operator pod.
  • Default images:
    • LINSTOR 1.18.0-rc.3
    • LINSTOR CSI 0.18.0
    • DRBD 9.1.6
    • DRBD Reactor 0.5.3
    • LINSTOR HA Controller 0.3.0
    • CSI Attacher v3.4.0
    • CSI Node Driver Registrar v2.4.0
    • CSI Provisioner v3.1.0
    • CSI Snapshotter v5.0.1
    • CSI Resizer v1.4.0
    • Stork v2.8.2
  • Stork updated to support Kubernetes v1.22+.
  • Satellites no longer have a readiness probe defined. The probe caused issues by repeatedly opening
    unexpected connections to the satellites, especially when using SSL.
  • Only query node devices if a storage pool needs to be created.
  • Use cached storage pool response to avoid causing excessive load on LINSTOR satellites.

Breaking

  • If you have SSL configured, then the certificates must be regenerated in PEM format.
    Learn more in the upgrade guide.

v1.7.1

18 Jan 14:01

This release is mainly focused on fixing bugs and adding some small features.

There are no additional upgrade steps necessary. If you are experimenting with the K8s backend, don't forget to create a backup of all internal resources.

Added

  • Allow the external-provisioner and external-snapshotter access to secrets. This is required to support StorageClass
    and SnapshotClass secrets.
  • Instruct external-provisioner to pass the PVC name and namespace to the CSI driver, enabling optional support for
    PVC-based names for LINSTOR volumes.
  • Allow setting the log level of LINSTOR components via CRs. Other components are left using their default log level.
    The new default log level is INFO (was DEBUG previously, which was often too verbose).
  • Override the kernel source directory used when compiling DRBD (defaults to /usr/src). See
    operator.satelliteSet.kernelModuleInjectionAdditionalSourceDirectory and the sketch after this list.
  • etcd-chart: add option to set priorityClassName.
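
A hedged example for the new kernel source directory override (release name, chart location and the directory itself are assumptions):

# Sketch only: release name, chart location and source directory are assumptions.
helm upgrade piraeus-op ./charts/piraeus \
  --set operator.satelliteSet.kernelModuleInjectionAdditionalSourceDirectory=/opt/kernel-src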

Fixed

  • Use the correct secret name when setting up TLS for satellites.
  • Correctly configure ServiceMonitor resource if TLS is enabled for LINSTOR Controller.

v1.7.0

14 Dec 11:00

It's finally time to release 1.7.0. It's been a long wait, but we now feel it is ready.

The most exciting feature is certainly the option to run Piraeus without an additional database for the LINSTOR Controller. LINSTOR 1.16.0 added experimental support for using the Kubernetes API directly to store its internal state. The current plan is to support both Etcd and the Kubernetes API as datastores, with the eventual goal of removing Etcd support once we are happy with the stability of this new backend. Read more on this topic here.

Apart from that, the Operator now applies Kubernetes node labels to the LINSTOR node objects as auxiliary properties. This means LINSTOR CSI can now make scheduling decisions based on existing node labels, like the commonly used topology.kubernetes.io/zone. To take full advantage of this, we enabled the topology feature for CSI by default, and also updated the CSI driver to properly respect both StorageClass parameters (replicasOnDifferent, etc.) and topology information. We now recommend using volumeBindingMode: WaitForFirstConsumer in all storage classes.
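
A minimal sketch of such a storage class (the StorageClass name and the parameters are illustrative assumptions; check the LINSTOR CSI documentation for the parameters supported by your version):

# Minimal sketch; the StorageClass name and parameters below are assumptions.
kubectl apply -f - <<EOF
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: linstor-replicated
provisioner: linstor.csi.linbit.com
volumeBindingMode: WaitForFirstConsumer
parameters:
  autoPlace: "2"
  replicasOnDifferent: topology.kubernetes.io/zone
EOF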

Another important change is that the Stork scheduler is now disabled by default. In the past it caused issues by improperly restarting pods, scheduling to unusable nodes, and plain not working on newer Kubernetes versions. With Kubernetes now supporting volumeBindingMode: WaitForFirstConsumer and LINSTOR CSI being better at scheduling volumes, we felt it was safe to disable Stork by default. You can still enable it in the chart if you wish.

This is also the first Piraeus Operator release that supports creating backups of your volumes and storing them in S3 or another LINSTOR cluster. Currently this is only available through the LINSTOR CLI; take a look at the linstor remote ... and linstor backup ... commands. In a future release, this should be more tightly integrated with the Kubernetes infrastructure. In order to securely store any access tokens for remote locations, LINSTOR needs to be configured with a master passphrase. If no passphrase is defined, the Helm chart will create one for you.
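
A rough sketch of that flow on the CLI (remote, bucket and resource names are made up, and the exact argument order may differ between LINSTOR versions, so check linstor remote create s3 --help and linstor backup create --help):

# Names, endpoint and credentials below are placeholders.
linstor remote create s3 my-remote s3.eu-central-1.amazonaws.com my-bucket eu-central-1 $ACCESS_KEY $SECRET_KEY
linstor backup create my-remote my-resource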


Added

  • pv-hostpath: automatically determine on which nodes PVs should be created if no override is given.
  • Automatically add Kubernetes Node labels to LINSTOR satellites as auxiliary properties. This enables using
    Kubernetes labels for volume scheduling, for example using replicasOnSame: topology.kubernetes.io/zone.
  • Support LINSTOR's k8s backend by adding the necessary RBAC resources and documentation.
  • Automatically create a LINSTOR passphrase when none is configured.
  • Automatic eviction and deletion of offline satellites if the Kubernetes node object was also deleted.

Changed

  • Default images:
    • quay.io/piraeusdatastore/piraeus-server:v1.17.0
    • quay.io/piraeusdatastore/piraeus-csi:v0.17.0
    • quay.io/piraeusdatastore/drbd9-bionic:v9.1.4
    • quay.io/piraeusdatastore/drbd-reactor:v0.4.4
  • Recreations or updates of the satellite pods are now applied to all nodes at once, instead of waiting for one node
    to complete before moving to the next.
  • Enable CSI topology by default, allowing better volume scheduling with volumeBindingMode: WaitForFirstConsumer.
  • Disable STORK by default. Instead, we recommend using volumeBindingMode: WaitForFirstConsumer in storage classes.

v1.7.0-rc.3

09 Dec 15:55

This release candidate updates the LINSTOR version as well as the LINSTOR CSI image. The updated LINSTOR version fixes the issue where the LINSTOR controller would get stuck with an "unauthorized" message when reconnecting to a satellite.

For more details on changes since 1.6.0, check out the v1.7.0-rc.1 release


Changes since 1.7.0-rc.2

Changed

  • Default images:
    • quay.io/piraeusdatastore/piraeus-server:v1.17.0
    • quay.io/piraeusdatastore/piraeus-csi:v0.17.0
    • quay.io/piraeusdatastore/drbd9-bionic:v9.1.4
    • quay.io/piraeusdatastore/drbd-reactor:v0.4.4

Changes since 1.7.0-rc.1

Changed

  • Recreations or updates of the satellite pods are now applied to all nodes at once, instead of waiting for one node
    to complete before moving to the next.

Fixed

  • Fixed a deadlock when reconciling satellites

Changes in 1.7.0-rc.1

Added

  • pv-hostpath: automatically determine on which nodes PVs should be created if no override is given.
  • Automatically add Kubernetes Node labels to LINSTOR satellites as auxiliary properties. This enables using
    Kubernetes labels for volume scheduling, for example using replicasOnSame: topology.kubernetes.io/zone.
  • Support LINSTOR's k8s backend by adding the necessary RBAC resources and documentation.
  • Automatically create a LINSTOR passphrase when none is configured.
  • Automatic eviction and deletion of offline satellites if the Kubernetes node object was also deleted.

Changed

  • Enable CSI topology by default, allowing better volume scheduling with volumeBindingMode: WaitForFirstConsumer.
  • Disable STORK by default. Instead, we recommend using volumeBindingMode: WaitForFirstConsumer in storage classes.

v1.7.0-rc.2

18 Nov 09:08

This release fixes a bug discovered in rc.1 that would deadlock the satellite reconciliation after the initial set of pods became available, meaning any changes to the LinstorSatelliteSet resource would not be picked up.

A big thank you to @sribee for reporting this issue.

For more details on changes since 1.6.0, check out the v1.7.0-rc.1 release


Changes since 1.7.0-rc.1

Changed

  • Recreations or updates of the satellite pods are now applied to all nodes at once, instead of waiting for one node
    to complete before moving to the next.

Fixed

  • Fixed a deadlock when reconciling satellites

Changes in 1.7.0-rc.1

Added

  • pv-hostpath: automatically determine on which nodes PVs should be created if no override is given.
  • Automatically add Kubernetes Node labels to LINSTOR satellites as auxiliary properties. This enables using
    Kubernetes labels for volume scheduling, for example using replicasOnSame: topology.kubernetes.io/zone.
  • Support LINSTOR's k8s backend by adding the necessary RBAC resources and documentation.
  • Automatically create a LINSTOR passphrase when none is configured.
  • Automatic eviction and deletion of offline satellites if the Kubernetes node object was also deleted.

Changed

  • Enable CSI topology by default, allowing better volume scheduling with volumeBindingMode: WaitForFirstConsumer.
  • Disable STORK by default. Instead, we recommend using volumeBindingMode: WaitForFirstConsumer in storage classes.

v1.7.0-rc.1

16 Nov 13:11

This is the first release candidate for the upcoming 1.7.0 release of the Piraeus Operator. Please help by testing it!

It's been quite some time since the last release, and a lot of new features and improvements were made since then.

The most exciting feature is certainly the option to run Piraeus without an additional database for the LINSTOR Controller. LINSTOR 1.16.0 added experimental support for using the Kubernetes API directly to store its internal state. The current plan is to support both Etcd and the Kubernetes API as datastores, with the eventual goal of removing Etcd support once we are happy with the stability of this new backend. Read more on this topic here.

Apart from that, the Operator now applies Kubernetes node labels to the LINSTOR node objects as auxiliary properties. This means LINSTOR CSI can now make scheduling decisions based on existing node labels, like the commonly used topology.kubernetes.io/zone. To take full advantage of this, we enabled the topology feature for CSI by default, and also updated the CSI driver to properly respect both StorageClass parameters (replicasOnDifferent, etc.) and topology information. We now recommend using volumeBindingMode: WaitForFirstConsumer in all storage classes.

Another important change is that the Stork scheduler is now disabled by default. In the past it caused issues by improperly restarting pods, scheduling to unusable nodes, and plain not working on newer Kubernetes versions. With Kubernetes now supporting volumeBindingMode: WaitForFirstConsumer and LINSTOR CSI being better at scheduling volumes, we felt it was safe to disable Stork by default. You can still enable it in the chart if you wish.

This is also the first Piraeus Operator release that supports creating backups of your volumes and storing them in S3 or another LINSTOR cluster. Currently this is only available through the LINSTOR CLI; take a look at the linstor remote ... and linstor backup ... commands. In a future release, this should be more tightly integrated with the Kubernetes infrastructure. In order to securely store any access tokens for remote locations, LINSTOR needs to be configured with a master passphrase. If no passphrase is defined, the Helm chart will create one for you.


Known issues

  • A bug in LINSTOR 1.16.0 when setting a master passphrase means that a restarted controller gets stuck with a "node not authorized" error. As a workaround, restart the piraeus-op-ns-node DaemonSet: kubectl rollout restart daemonset/piraeus-op-ns-node.

All Changes

Added

  • pv-hostpath: automatically determine on which nodes PVs should be created if no override is given.
  • Automatically add Kubernetes Node labels to LINSTOR satellites as auxiliary properties. This enables using
    Kubernetes labels for volume scheduling, for example using replicasOnSame: topology.kubernetes.io/zone.
  • Support LINSTOR's k8s backend by adding the necessary RBAC resources and documentation.
  • Automatically create a LINSTOR passphrase when none is configured.
  • Automatic eviction and deletion of offline satellites if the Kubernetes node object was also deleted.

Changed

  • Enable CSI topology by default, allowing better volume scheduling with volumeBindingMode: WaitForFirstConsumer.
  • Disable STORK by default. Instead, we recommend using volumeBindingMode: WaitForFirstConsumer in storage classes.

v1.6.0

02 Sep 12:34

This release brings some new and exciting features:

  • All images should now also be available for arm64 in addition to amd64.
  • Piraeus is now compatible with Kubernetes v1.22. Note that this does not extend to some external components (notably: STORK), which require additional updates. For now we recommend disabling STORK on v1.22.
  • You can enable CSI storage capacity tracking.
  • We improved support for deploying the CSI Snapshot Controller. To that end, it was moved into a separate chart.

Detailed instructions on how to upgrade can be found in the upgrade guide

IMPORTANT

  • Piraeus Operator now requires Kubernetes 1.19+

Added

  • Allow CSI to work with distributions that use a kubelet working directory other than /var/lib/kubelet. See
    the csi.kubeletPath option and the sketch after this list.
  • Enable Storage Capacity Tracking. This enables Kubernetes to base Pod scheduling decisions on remaining storage
    capacity. The feature is in beta and enabled by default starting with Kubernetes 1.21.
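
A hedged example for the csi.kubeletPath option (release name and chart location are assumptions; substitute the kubelet directory your distribution actually uses, the MicroK8s-style path below is only an illustration):

# Sketch only: verify the kubelet path on your nodes before setting it.
helm upgrade piraeus-op ./charts/piraeus \
  --set csi.kubeletPath=/var/snap/microk8s/common/var/lib/kubelet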

Changed

  • Disable Stork Health Monitoring by default. Stork cannot distinguish between control plane and data plane issues,
    which can lead to instances where Stork will migrate a volume that is still mounted on another node, making the
    volume effectively unusable.

  • Updated the operator to Kubernetes v1.21 components.

  • Default images:

    • quay.io/piraeusdatastore/piraeus-server:v1.14.0
    • quay.io/piraeusdatastore/drbd9-bionic:v9.0.30
    • quay.io/piraeusdatastore/drbd-reactor:v0.4.3
    • quay.io/piraeusdatastore/piraeus-ha-controller:v0.2.0
    • external CSI images

Removed

  • The cluster-wide snapshot controller is no longer deployed as a dependency of the piraeus-operator chart.
    Instead, separate charts are available on artifacthub.io
    that deploy the snapshot controller and extra validation for snapshot resources.

    The subchart was removed, as it unnecessarily tied updates of the snapshot controller to Piraeus and vice versa. With
    the tightened validation starting with snapshot CRDs v1, moving the snapshot controller to a proper chart seems
    like a good solution.
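
A hedged sketch of installing the snapshot controller from its own chart (the repository URL and chart name are assumptions; see the charts published on artifacthub.io for the authoritative names):

# Repository URL and chart name are assumptions; check artifacthub.io.
helm repo add piraeus-charts https://piraeus.io/helm-charts/
helm install snapshot-controller piraeus-charts/snapshot-controller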

v1.5.1

21 Jun 11:44

This release only contains updated default images, bringing the LINSTOR components to v1.13.0:

  • Piraeus Server v1.13.0
  • Piraeus CSI v0.13.1
  • CSI Provisioner v2.1.2

v1.5.0

12 May 14:55

This release brings two big changes:

Monitoring

The most exciting part of this release is the introduction of monitoring via Prometheus. If you are upgrading from 1.4 or older, please take a look at the upgrade guide to enable monitoring.

Updated resource labels

In an effort to better organize all resources created by the operator, we switched to using the recommended labels for all workloads. Since changing the labels on Deployments and DaemonSets is not supported, the operator will delete and recreate them where necessary. For most users this should not require any manual action.


The changes in detail:

Added

  • All operator-managed workloads apply the recommended labels. This requires the recreation of Deployments and DaemonSets
    on upgrade. This is handled automatically by the operator; however, any customizations that were applied to the
    deployments outside of the operator will be reverted in the process.
  • Use drbd-reactor to expose Prometheus endpoints on each satellite.
  • Configure ServiceMonitor resources if they are supported by the cluster (i.e., the Prometheus Operator is installed).

Changed

  • CSI Nodes no longer use hostNetwork: true. The pods already get the correct hostname via the Downward API and do not
    talk to DRBD's netlink interface directly.
  • External: CSI snapshotter subchart now packages v1 CRDs. Fixes deprecation warnings when installing
    the snapshot controller.
  • Default images:
    • Piraeus Server v1.12.3
    • Piraeus CSI v0.13.0
    • DRBD v9.0.29

Changes since 1.4.0