
StatefulSet's volumeClaimTemplates is displayed as a PersistentVolumeClaim whose status is OutOfSync #1729

Closed
seroron opened this issue Jun 11, 2019 · 5 comments
Labels
workaround There's a workaround, might not be great, but exists

Comments

@seroron

seroron commented Jun 11, 2019

Describe the bug

When the app name and the `app.kubernetes.io/instance` label are the same, the StatefulSet's volumeClaimTemplates is displayed as a PersistentVolumeClaim whose status is OutOfSync.

To Reproduce

  • yaml

```yaml
apiVersion: apps/v1beta2
kind: StatefulSet
metadata:
  labels:
    app.kubernetes.io/instance: testapp
    app.kubernetes.io/name: mongo
    app.kubernetes.io/part-of: testapp-api
  name: testapp-api-mongo
  namespace: sandbox
spec:
  replicas: 1
  selector:
    matchLabels:
      app.kubernetes.io/instance: testapp
      app.kubernetes.io/name: mongo
      app.kubernetes.io/part-of: testapp-api
  serviceName: mongo
  template:
    metadata:
      labels:
        app.kubernetes.io/instance: testapp
        app.kubernetes.io/name: mongo
        app.kubernetes.io/part-of: testapp-api
      name: mongo
    spec:
      containers:
      - image: mongo:4.0.5
        imagePullPolicy: Always
        name: mongo
        ports:
        - containerPort: 27017
          name: service
        volumeMounts:
        - mountPath: /data/db
          name: mongo-data-volume
  volumeClaimTemplates:
  - metadata:
      labels:
        app.kubernetes.io/instance: testapp
        app.kubernetes.io/part-of: testapp-api
      name: mongo-data-volume
    spec:
      accessModes:
      - ReadWriteOnce
      resources:
        requests:
          storage: 10Gi
```

  • NG (app name and `app.kubernetes.io/instance` are the same)

```console
argocd app create testapp \
  --repo <omission> \
  --path "." \
  --dest-server <omission> \
  --dest-namespace sandbox \
  --project sandbox
# argocd app get testapp
Name:               testapp
Project:            sandbox
Server:             <omission>
Namespace:          sandbox
URL:                <omission>
Repo:               <omission>
Target:
Path:               .
Sync Policy:        <none>
Sync Status:        OutOfSync from  (e04dd96)
Health Status:      Healthy

GROUP  KIND                   NAMESPACE    NAME                                   STATUS     HEALTH
apps   StatefulSet            sandbox      testapp-api-mongo                      Synced     Healthy
       PersistentVolumeClaim  sandbox      mongo-data-volume-testapp-api-mongo-0  OutOfSync  Healthy
```
  • OK (app name and `app.kubernetes.io/instance` are not the same)

```console
argocd app create testapp2 \
  --repo <omission> \
  --path "." \
  --dest-server <omission> \
  --dest-namespace sandbox \
  --project sandbox
# argocd app get testapp2
Name:               testapp2
Project:            sandbox
Server:             <omission>
Namespace:          sandbox
URL:                <omission>
Repo:               <omission>
Target:
Path:               .
Sync Policy:        <none>
Sync Status:        Synced to  (e04dd96)
Health Status:      Healthy

GROUP  KIND         NAMESPACE    NAME               STATUS  HEALTH
apps   StatefulSet  sandbox      testapp-api-mongo  Synced  Healthy
```

Expected behavior

The StatefulSet's volumeClaimTemplates should not be displayed as a separate resource.

Version

v1.0.1

@seroron seroron added the bug Something isn't working label Jun 11, 2019
@alexec (Contributor)

alexec commented Jun 11, 2019

I'm not clear. Have you set the app name in the YAML? Argo CD uses the app name to determine which resources the app consists of. You must not set it in your manifests.

@seroron (Author)

seroron commented Jun 11, 2019

Argo CD rewrites `app.kubernetes.io/instance` to the app name at deployment time.
As a result, if I execute `kubectl diff -f mongo.yml`, a difference is reported.
To make `kubectl diff` report no difference, I set `app.kubernetes.io/instance` in the YAML to the same value as the app name.
I understand that GitOps should not use kubectl apply, and I will use Argo CD for all deployments.
However, having a difference is problematic in terms of operations.
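To make the rewrite concrete, here is an illustrative sketch (not from the original report) of what the live StatefulSet labels could look like after Argo CD syncs the manifest above under an app named `testapp2`, i.e. a name that differs from the label in Git:

```yaml
# Hypothetical live state: Argo CD overwrites app.kubernetes.io/instance
# with the app name, so `kubectl diff -f mongo.yml` reports a change on
# this one label.
metadata:
  labels:
    app.kubernetes.io/instance: testapp2  # was "testapp" in Git
    app.kubernetes.io/name: mongo
    app.kubernetes.io/part-of: testapp-api
```

Setting the label in Git to the app name avoids this diff, which is exactly the situation that triggers the PVC tracking problem described above.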

@alexmt (Collaborator)

alexmt commented Jun 11, 2019

There is a discussion about this in #1482. I think we will need to change the label to a custom label, or even switch to an annotation.

@seroron (Author)

seroron commented Jun 14, 2019

I decided to use `application.instanceLabelKey: argocd/appname` to solve this problem.
`kubectl diff` still reports a difference, but the difference in `app.kubernetes.io/instance` is no longer present.
So, operational complexity has been reduced somewhat.
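For anyone else applying this workaround: the instance label key is set in the `argocd-cm` ConfigMap. A minimal sketch, assuming a default installation in the `argocd` namespace:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: argocd-cm
  namespace: argocd
  labels:
    app.kubernetes.io/part-of: argocd
data:
  # Argo CD will use this label key for resource tracking instead of
  # app.kubernetes.io/instance, leaving the standard label untouched.
  application.instanceLabelKey: argocd/appname
```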

@alexec alexec modified the milestones: v1.1, v1.2 Jun 14, 2019
@alexec alexec added wontfix This will not be worked on workaround There's a workaround, might not be great, but exists labels Jul 23, 2019
@alexec alexec removed this from the v1.2 milestone Jul 23, 2019
@stale stale bot removed the wontfix This will not be worked on label Jul 23, 2019
@jessesuen (Member)

> I understand that GitOps should not use kubectl apply, and I will use Argo CD for all deployments.
> However, having a difference is problematic in terms of operations.

Actually, Argo CD uses kubectl apply under the covers, and we support a mode where people switch back and forth between Argo CD and kubectl apply.

There are three solutions to this problem:

  1. Don't allow kustomize's commonLabels feature to inject app.kubernetes.io/instance into spec.template.metadata.labels.
  2. Use a different application.instanceLabelKey.
  3. Inject argocd.argoproj.io/compare-options: IgnoreExtraneous into spec.template.metadata.labels.

I don't think we need to do anything for this.
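For reference, Argo CD documents `compare-options` as an annotation; a sketch of option 3, assuming it is placed on the volumeClaimTemplate metadata so the PVC generated by the StatefulSet controller inherits it:

```yaml
volumeClaimTemplates:
- metadata:
    name: mongo-data-volume
    annotations:
      # Tell Argo CD to ignore this resource when it appears in the
      # live state but not in Git.
      argocd.argoproj.io/compare-options: IgnoreExtraneous
  spec:
    accessModes:
    - ReadWriteOnce
    resources:
      requests:
        storage: 10Gi
```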

@jessesuen jessesuen removed the bug Something isn't working label Aug 6, 2019