What did you do to encounter the bug?
Steps to reproduce the behavior:
Set both the replica set `members` and `arbiters` counts of the MongoDB resource to 0 (a minimal manifest is sketched below).
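For reference, a minimal MongoDBCommunity manifest that triggers the problem might look like the following. The resource name and version are placeholders, and unrelated fields (security, users, etc.) are omitted:

```yaml
apiVersion: mongodbcommunity.mongodb.com/v1
kind: MongoDBCommunity
metadata:
  name: example-mongodb   # placeholder name
spec:
  type: ReplicaSet
  version: "6.0.5"        # placeholder version
  members: 0              # scale the replica set members to zero
  arbiters: 0             # scale the arbiters to zero as well
```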
What did you expect?
The MongoDB operator should accept this configuration. Running zero pods for the MongoDB deployment should be a valid state, for example to scale the cluster down overnight or over the weekend.
What happened instead?
The new spec is rejected as invalid, and the operator logs the following error:
2024-10-31T10:22:14.764Z ERROR controllers/mongodb_status_options.go:104 error validating new Spec: number of arbiters specified (0) is greater or equal than the number of members in the replicaset (0). At least one member must not be an arbiter
github.com/mongodb/mongodb-kubernetes-operator/controllers.messageOption.ApplyOption
/workspace/controllers/mongodb_status_options.go:104
github.com/mongodb/mongodb-kubernetes-operator/pkg/util/status.Update
/workspace/pkg/util/status/status.go:25
github.com/mongodb/mongodb-kubernetes-operator/controllers.ReplicaSetReconciler.Reconcile
/workspace/controllers/replica_set_controller.go:135
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Reconcile
/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:122
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).reconcileHandler
/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:323
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem
/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:274
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start.func2.2
/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:235
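For context, the error message suggests the validation treats `arbiters >= members` as invalid, which also fires when both counts are zero. A minimal sketch of that logic, assuming a simple comparison (this is illustrative, not the operator's actual code):

```go
package main

import "fmt"

// validateArbiters mirrors the validation implied by the error message:
// the spec is rejected whenever arbiters >= members, so members: 0 and
// arbiters: 0 can never pass even though no arbiter is requested.
func validateArbiters(members, arbiters int) error {
	if arbiters >= members {
		return fmt.Errorf(
			"number of arbiters specified (%d) is greater or equal than the number of members in the replicaset (%d). At least one member must not be an arbiter",
			arbiters, members)
	}
	return nil
}

func main() {
	fmt.Println(validateArbiters(0, 0)) // reproduces the rejection for a scale-to-zero spec
}
```

A special case for scale-to-zero (members == 0 and arbiters == 0) would presumably let this configuration through while still rejecting genuinely arbiter-heavy specs.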