Tenant fails to add pool if it is added to the beginning of the array #2160

Closed
pjuarezd opened this issue Jun 12, 2024 · 0 comments · Fixed by #2161
Comments

@pjuarezd
Member

pjuarezd commented Jun 12, 2024

Adding a new pool at the beginning of the tenant's spec.pools array causes an error loop in the Operator, and the Operator never recovers from it.

The Operator pod shows the error message:

E0612 04:45:59.139909       1 main-controller.go:1298] [Will try again in 5sec] Update tenant tenant-1 statefulset tenant-1-pool-1 error StatefulSet.apps "tenant-1-pool-1" is invalid: spec.template.metadata.labels: Invalid value: map[string]string{"v1.min.io/console":"tenant-1-console", "v1.min.io/pool":"pool-5", "v1.min.io/tenant":"tenant-1"}: `selector` does not match template `labels`

Expected Behavior

Adding pools to the tenant resource should be idempotent, regardless of the order of the pool items in the list, as long as the pool name (spec.pools.*.name) is unique across pools.
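
For example, both of the following spec.pools orderings should converge on the same pair of StatefulSets (tenant-1-pool-0 and tenant-1-pool-1, following the <tenant-name>-<pool-name> naming visible in the log above); all other pool fields are elided:

pools:
- name: pool-0
  ...
- name: pool-1
  ...

pools:
- name: pool-1
  ...
- name: pool-0
  ...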

Possible Solution

The current implementation tries to determine the pool name by iterating over spec.pools and, for each index, reading the existing pool name from status.pools at the same index. This fails when a pool is prepended: status.pools does not yet contain the new pool, so the index-based lookup returns the name of a pool that already existed, and the Operator wrongly tries to modify that existing StatefulSet (whose label selector is immutable) instead of creating a new one.

for i, pool := range tenant.Spec.Pools {
    // Get the StatefulSet with the name specified in Tenant.status.pools[i].SSName
    // if this index is in the status of pools use it, else capture the desired name in the status and store it
    var ssName string
    if len(tenant.Status.Pools) > i {
        ssName = tenant.Status.Pools[i].SSName
    } else {
        ssName = tenant.PoolStatefulsetName(&pool)
        tenant.Status.Pools = append(tenant.Status.Pools, miniov2.PoolStatus{
            SSName: ssName,
            State:  miniov2.PoolNotCreated,
        })
        // push updates to status
        if tenant, err = c.updatePoolStatus(ctx, tenant); err != nil {
            return WrapResult(Result{}, err)
        }
    }
    // ... reconciliation continues using ssName ...
}
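
A name-based lookup would avoid the index shift entirely. Below is a minimal sketch of that idea, assuming PoolStatefulsetName derives the StatefulSet name deterministically from the pool spec; the findPoolStatus helper is hypothetical, not part of the Operator codebase, and the actual fix is in #2161:

// findPoolStatus is a hypothetical helper that locates a pool's status
// entry by StatefulSet name, independent of the pool's position in
// spec.pools.
func findPoolStatus(tenant *miniov2.Tenant, ssName string) *miniov2.PoolStatus {
    for i := range tenant.Status.Pools {
        if tenant.Status.Pools[i].SSName == ssName {
            return &tenant.Status.Pools[i]
        }
    }
    return nil
}

for i := range tenant.Spec.Pools {
    pool := &tenant.Spec.Pools[i]
    // Derive the StatefulSet name from the pool itself, never from the index.
    ssName := tenant.PoolStatefulsetName(pool)
    if findPoolStatus(tenant, ssName) == nil {
        // A genuinely new pool: append a fresh status entry instead of
        // reusing whatever entry happens to sit at the same index.
        tenant.Status.Pools = append(tenant.Status.Pools, miniov2.PoolStatus{
            SSName: ssName,
            State:  miniov2.PoolNotCreated,
        })
        if tenant, err = c.updatePoolStatus(ctx, tenant); err != nil {
            return WrapResult(Result{}, err)
        }
    }
    // ... reconcile the StatefulSet named ssName ...
}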

Steps to Reproduce (for bugs)

  1. Create a tenant with 1 pool
Example tenant:
apiVersion: v1
data:
  config.env: ZXhwb3J0IE1JTklPX0JST1dTRVI9Im9uIgpleHBvcnQgTUlOSU9fUk9PVF9VU0VSPSJtaW5pbyIKZXhwb3J0IE1JTklPX1JPT1RfUEFTU1dPUkQ9Im1pbmlvMTIzIgpleHBvcnQgTUlOSU9fU1RPUkFHRV9DTEFTU19TVEFOREFSRD0iRUM6NCIK
kind: Secret
metadata:
  labels:
    v1.min.io/tenant: tenant-1
  name: tenant-1-env-configuration
  namespace: tenant-1
type: Opaque
---
apiVersion: minio.min.io/v2
kind: Tenant
metadata:
  name: tenant-1
  namespace: tenant-1
scheduler:
  name: ""
spec:
  configuration:
    name: tenant-1-env-configuration
  exposeServices:
    console: true
    minio: true
  features:
    enableSFTP: false
  image: minio/minio:RELEASE.2024-06-11T03-13-30Z
  imagePullSecret: {}
  mountPath: /export
  pools:
  - containerSecurityContext:
      allowPrivilegeEscalation: false
      capabilities:
        drop:
        - ALL
      runAsGroup: 1000
      runAsNonRoot: true
      runAsUser: 1000
      seccompProfile:
        type: RuntimeDefault
    name: pool-1
    resources:
      requests:
        cpu: "2"
        memory: 2Gi
    runtimeClassName: ""
    securityContext:
      fsGroup: 1000
      fsGroupChangePolicy: OnRootMismatch
      runAsGroup: 1000
      runAsNonRoot: true
      runAsUser: 1000
    servers: 4
    volumeClaimTemplate:
      metadata:
        creationTimestamp: null
        name: data
      spec:
        accessModes:
        - ReadWriteOnce
        resources:
          requests:
            storage: "68719476736"
        storageClassName: standard
      status: {}
    volumesPerServer: 4
  requestAutoCert: true
  users:
  - name: tenant-1-user-0
  2. Add a second pool with a different name (pool-0 in the example below) and make sure to put it first in the list, before the existing pool-1
Example tenant:
apiVersion: minio.min.io/v2
kind: Tenant
metadata:
  name: tenant-1
  namespace: tenant-1
scheduler:
  name: ""
spec:
  configuration:
    name: tenant-1-env-configuration
  exposeServices:
    console: true
    minio: true
  features:
    enableSFTP: false
  image: minio/minio:RELEASE.2024-06-11T03-13-30Z
  imagePullSecret: {}
  mountPath: /export
  pools:
  - containerSecurityContext:
      allowPrivilegeEscalation: false
      capabilities:
        drop:
        - ALL
      runAsGroup: 1000
      runAsNonRoot: true
      runAsUser: 1000
      seccompProfile:
        type: RuntimeDefault
    name: pool-0
    resources:
      requests:
        cpu: "2"
        memory: 2Gi
    runtimeClassName: ""
    securityContext:
      fsGroup: 1000
      fsGroupChangePolicy: OnRootMismatch
      runAsGroup: 1000
      runAsNonRoot: true
      runAsUser: 1000
    servers: 4
    volumeClaimTemplate:
      metadata:
        creationTimestamp: null
        name: data
      spec:
        accessModes:
        - ReadWriteOnce
        resources:
          requests:
            storage: "68719476736"
        storageClassName: standard
      status: {}
    volumesPerServer: 4
  - containerSecurityContext:
      allowPrivilegeEscalation: false
      capabilities:
        drop:
        - ALL
      runAsGroup: 1000
      runAsNonRoot: true
      runAsUser: 1000
      seccompProfile:
        type: RuntimeDefault
    name: pool-1
    resources:
      requests:
        cpu: "2"
        memory: 2Gi
    runtimeClassName: ""
    securityContext:
      fsGroup: 1000
      fsGroupChangePolicy: OnRootMismatch
      runAsGroup: 1000
      runAsNonRoot: true
      runAsUser: 1000
    servers: 4
    volumeClaimTemplate:
      metadata:
        creationTimestamp: null
        name: data
      spec:
        accessModes:
        - ReadWriteOnce
        resources:
          requests:
            storage: "68719476736"
        storageClassName: standard
      status: {}
    volumesPerServer: 4
  requestAutoCert: true
  users:
  - name: tenant-1-user-0
  3. Check the Operator pod logs and notice that the following error is shown over and over:
E0612 05:00:22.795548       1 main-controller.go:1298] [Will try again in 5sec] Update tenant tenant-1 statefulset tenant-1-pool-1 error StatefulSet.apps "tenant-1-pool-1" is invalid: spec.template.metadata.labels: Invalid value: map[string]string{"v1.min.io/console":"tenant-1-console", "v1.min.io/pool":"pool-0", "v1.min.io/tenant":"tenant-1"}: `selector` does not match template `labels`
  4. Notice that the StatefulSet corresponding to the recently added pool is not created
  5. Notice that mc admin info does not show the added pool, only the pool that existed before
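
The symptom is also easy to confirm from the cluster side with standard kubectl commands (resource names below assume the example tenant above; adjust if yours differ):

kubectl get statefulsets -n tenant-1
# only tenant-1-pool-1 is listed; no StatefulSet is ever created for the new pool

kubectl get tenant tenant-1 -n tenant-1 -o jsonpath='{.status.pools}'
# status.pools still holds a single entry, for the pre-existing pool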

Context

Regression

Your Environment

  • Version used (minio-operator): v5.0.15; also reproduced on master (tested up to commit a016247, Jun 11 2024)
  • Environment name and version (e.g. kubernetes v1.17.2): Kind/k8s v0.27.4