
KSOPS Issue With Strategic Merge Patch In ArgoCD #134

Closed
evercast-chris opened this issue Sep 13, 2021 · 39 comments

Comments

@evercast-chris

Hello, I have been trying to connect KSOPS with ArgoCD for a while now. I am not having any luck with the kustomize.buildOptions: "--enable-alpha-plugins" setting in the argo-cd configmap combined with the strategic merge patch that adds the init container. KSOPS works fine locally, and the patch on the argo-cd configmap is applied correctly as well, but still nothing.

I also recently tried the KSOPS 2.5 image and the --enable_alpha_plugins flag, changing image: viaductoss/ksops:v3.0.1 to image: viaductoss/ksops:v2.5.0 inside the repo server patch. It still does not sync with ArgoCD.

Here is the error I am receiving via ArgoCD: Unable to create application: application spec is invalid: InvalidSpecError: Unable to generate manifests in base rpc error: code = Unknown desc = 'kustomize build' (my_git_repo) --enable_alpha_plugins' failed exit status 1: Error: unknown flag: --enable-alpha-plugins

Another error from ArgoCD: unable to find plugin root - tried: (''; homed in $KUSTOMIZE_PLUGIN_HOME), ('kustomize/plugin'; homed in $XDG_CONFIG_HOME), ('/home/argocd/.config/kustomize/plugin'; homed in default value of $XDG_CONFIG_HOME)

chrisquiles@Christophers-MacBook-Pro yamls % kubectl patch \
  -n argocd deployment/argocd-repo-server \
  -p "$(cat argo-cd-repo-server-ksops-patch.yaml)"
deployment.apps/argocd-repo-server patched

That is the patch command I am using...

Please let me know what I can do to finally get KSOPS working with Argo.
Thank you!

@evercast-chris
Author

@devstein

@devstein
Collaborator

Hi @evercast-chris, thanks for opening an issue. Can you paste the output YAML for deployment.apps/argocd-repo-server after it's patched? That way I can give more specific advice.
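For reference, a quick way to grab that output is the export command used later in this thread (namespace argocd assumed):

kubectl -n argocd get deployment argocd-repo-server -o yaml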

@evercast-chris
Author

apiVersion: apps/v1
kind: Deployment
metadata:
  annotations:
    deployment.kubernetes.io/revision: "7"
  creationTimestamp: "2021-08-16T18:44:42Z"
  generation: 7
  labels:
    app.kubernetes.io/component: repo-server
    app.kubernetes.io/name: argocd-repo-server
    app.kubernetes.io/part-of: argocd
  name: argocd-repo-server
  namespace: argocd
  resourceVersion: "1731134"
  uid: 3ef62e81-1b3a-4e7b-81f3-f0da5f8a02da
spec:
  progressDeadlineSeconds: 600
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      app.kubernetes.io/name: argocd-repo-server
  strategy:
    rollingUpdate:
      maxSurge: 25%
      maxUnavailable: 25%
    type: RollingUpdate
  template:
    metadata:
      creationTimestamp: null
      labels:
        app.kubernetes.io/name: argocd-repo-server
    spec:
      affinity:
        podAntiAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
          - podAffinityTerm:
              labelSelector:
                matchLabels:
                  app.kubernetes.io/name: argocd-repo-server
              topologyKey: kubernetes.io/hostname
            weight: 100
          - podAffinityTerm:
              labelSelector:
                matchLabels:
                  app.kubernetes.io/part-of: argocd
              topologyKey: kubernetes.io/hostname
            weight: 5
      automountServiceAccountToken: false
      containers:
      - command:
        - uid_entrypoint.sh
        - argocd-repo-server
        - --redis
        - argocd-redis:6379
        env:
        - name: KUSTOMIZE_PLUGIN_HOME
          value: /.config/kustomize/plugin
        image: quay.io/argoproj/argocd:v2.0.5
        imagePullPolicy: Always
        livenessProbe:
          failureThreshold: 3
          httpGet:
            path: /healthz?full=true
            port: 8084
            scheme: HTTP
          initialDelaySeconds: 30
          periodSeconds: 5
          successThreshold: 1
          timeoutSeconds: 1
        name: argocd-repo-server
        ports:
        - containerPort: 8081
          protocol: TCP
        - containerPort: 8084
          protocol: TCP
        readinessProbe:
          failureThreshold: 3
          httpGet:
            path: /healthz
            port: 8084
            scheme: HTTP
          initialDelaySeconds: 5
          periodSeconds: 10
          successThreshold: 1
          timeoutSeconds: 1
        resources: {}
        securityContext:
          allowPrivilegeEscalation: false
          capabilities:
            drop:
            - all
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
        volumeMounts:
        - mountPath: /usr/local/bin/kustomize
          name: custom-tools
          subPath: kustomize
        - mountPath: /.config/kustomize/plugin/viaduct.ai/v1/ksops/ksops.so
          name: custom-tools
          subPath: ksops.so
        - mountPath: /app/config/ssh
          name: ssh-known-hosts
        - mountPath: /app/config/tls
          name: tls-certs
        - mountPath: /app/config/gpg/source
          name: gpg-keys
        - mountPath: /app/config/gpg/keys
          name: gpg-keyring
        - mountPath: /app/config/reposerver/tls
          name: argocd-repo-server-tls
      dnsPolicy: ClusterFirst
      initContainers:
      - args:
        - echo "Installing KSOPS..."; export PKG_NAME=ksops; mv ${PKG_NAME}.so /custom-tools/;
          mv $GOPATH/bin/kustomize /custom-tools/; echo "Done.";
        command:
        - /bin/sh
        - -c
        image: viaductoss/ksops:v2.5.0
        imagePullPolicy: IfNotPresent
        name: install-ksops
        resources: {}
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
        volumeMounts:
        - mountPath: /custom-tools
          name: custom-tools
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      serviceAccount: gke-argocd-demo
      serviceAccountName: gke-argocd-demo
      terminationGracePeriodSeconds: 30
      volumes:
      - emptyDir: {}
        name: custom-tools
      - configMap:
          defaultMode: 420
          name: argocd-ssh-known-hosts-cm
        name: ssh-known-hosts
      - configMap:
          defaultMode: 420
          name: argocd-tls-certs-cm
        name: tls-certs
      - configMap:
          defaultMode: 420
          name: argocd-gpg-keys-cm
        name: gpg-keys
      - emptyDir: {}
        name: gpg-keyring
      - name: argocd-repo-server-tls
        secret:
          defaultMode: 420
          items:
          - key: tls.crt
            path: tls.crt
          - key: tls.key
            path: tls.key
          - key: ca.crt
            path: ca.crt
          optional: true
          secretName: argocd-repo-server-tls
status:
  availableReplicas: 1
  conditions:
  - lastTransitionTime: "2021-09-13T03:16:07Z"
    lastUpdateTime: "2021-09-13T03:16:07Z"
    message: 'pods "argocd-repo-server-dc9f77c74-" is forbidden: error looking up
      service account argocd/gke-argocd-demo: serviceaccount "gke-argocd-demo" not
      found'
    reason: FailedCreate
    status: "True"
    type: ReplicaFailure
  - lastTransitionTime: "2021-09-13T23:32:18Z"
    lastUpdateTime: "2021-09-13T23:32:18Z"
    message: Deployment has minimum availability.
    reason: MinimumReplicasAvailable
    status: "True"
    type: Available
  - lastTransitionTime: "2021-09-14T05:10:47Z"
    lastUpdateTime: "2021-09-14T05:10:47Z"
    message: ReplicaSet "argocd-repo-server-dc9f77c74" has timed out progressing.
    reason: ProgressDeadlineExceeded
    status: "False"
    type: Progressing
  observedGeneration: 7
  readyReplicas: 1
  replicas: 1
  unavailableReplicas: 1

@evercast-chris
Author

Sorry it's a long manifest, but that's what I have after patching the repo-server. @devstein

@devstein
Collaborator

@evercast-chris The patch is missing

          # 4. Set the XDG_CONFIG_HOME env variable to allow kustomize to detect the plugin
          env:
            - name: XDG_CONFIG_HOME
              value: /.config

Let me know if you run into any more errors. Just comment the error here.
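For context (my gloss, not something stated above): kustomize's legacy plugin loader resolves plugins under $XDG_CONFIG_HOME/kustomize/plugin/<apiVersion>/<lowercased kind>/, so the env value and the KSOPS mount path in the patch have to line up. A minimal sketch combining the two pieces from this thread that must agree:

env:
  - name: XDG_CONFIG_HOME
    value: /.config                # kustomize searches $XDG_CONFIG_HOME/kustomize/plugin/...
volumeMounts:
  - mountPath: /.config/kustomize/plugin/viaduct.ai/v1/ksops/ksops.so   # <apiVersion>/<kind>/<binary> (v2.x .so plugin)
    name: custom-tools
    subPath: ksops.so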

@evercast-chris
Author

@devstein I think we're on the right track. I added the env variable but I'm still getting an error from ArgoCD when creating an application. The details are below.

Here's what the ArgoCD error looks like:

Unable to create application: application spec is invalid: InvalidSpecError: Unable to get app details: rpc error: code = Unknown desc = kustomize build /tmp/git@github.com_evercast_evercast-argocd/ksops/ksops-demo --enable_alpha_plugins failed exit status 1: 2021/09/16 03:46:04 unable to find plugin root - tried: (''; homed in $KUSTOMIZE_PLUGIN_HOME), ('kustomize/plugin'; homed in $XDG_CONFIG_HOME), ('/home/argocd/.config/kustomize/plugin'; homed in default value of $XDG_CONFIG_HOME), ('/home/argocd/kustomize/plugin'; homed in home directory)

Also, here is what it looks like in the argocd-repo-server:

        - name: KUSTOMIZE_PLUGIN_HOME
          value: /.config/kustomize/plugin
        - name: XDG_CONFIG_HOME
          value: /.config
        image: quay.io/argoproj/argocd:v2.0.5
        imagePullPolicy: Always
        livenessProbe:
          failureThreshold: 3
          httpGet:
            path: /healthz?full=true
            port: 8084
            scheme: HTTP
          initialDelaySeconds: 30
          periodSeconds: 5
          successThreshold: 1
          timeoutSeconds: 1

Here are the values of the env variables:


chrisquiles@Christophers-MacBook-Pro ~/d/g/e/k/ksops-demo (feat/DEVO-1271/argocd)> echo $XDG_CONFIG_HOME
$HOME/.config
chrisquiles@Christophers-MacBook-Pro ~/d/g/e/k/ksops-demo (feat/DEVO-1271/argocd)> echo $KUSTOMIZE_PLUGIN_HOME
~/.config/kustomize/plugin

@devstein
Collaborator

@evercast-chris Based on those environment variables you shared, they aren't getting set properly in Argo CD. XDG_CONFIG_HOME should be set to /.config, not ~/.config:

argocd@argocd-repo-server-86f84b7775-6kr58:/$ echo $XDG_CONFIG_HOME
/.config

I recommend re-reviewing the repo server patch in the README.
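A quick way to confirm the value inside the running repo-server pod, rather than on your workstation (a sketch; resource names as used elsewhere in this thread):

# Should print /.config once the patch has taken effect
kubectl -n argocd exec deploy/argocd-repo-server -- sh -c 'echo $XDG_CONFIG_HOME'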

@evercast-chris
Author

@devstein I changed the env variable but am still getting the same result. I would like to try to re-patch, but I'm struggling to find a command that actually patches the deployment successfully. The closest command I can find is the following:

chrisquiles@Christophers-MacBook-Pro yamls % kubectl patch \
  -n argocd deployment/argocd-repo-server \
  -p "$(cat argo-cd-repo-server-ksops-patch.yaml)"
deployment.apps/argocd-repo-server patched (no change)

@evercast-chris
Author

@devstein As you can see, the patch reports no changes, but it's a different patch I am applying, so why am I getting no changes? This has happened often in the process. The patch I am currently trying to apply is the following:

# argo-cd-repo-server-ksops-patch.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: argocd-repo-server
  spec:
  template:
    spec:
      # 1. Define an emptyDir volume which will hold the custom binaries
      volumes:
        - name: custom-tools
          emptyDir: {}
      # 2. Use an init container to download/copy custom binaries into the emptyDir
      initContainers:
        - name: install-ksops
          image: viaductoss/ksops:v3.0.1
          command: ["/bin/sh", "-c"]
          args:
            - echo "Installing KSOPS...";
              export PKG_NAME=ksops;
              mv ${PKG_NAME}.so /custom-tools/;
              mv $GOPATH/bin/kustomize /custom-tools/;
              echo "Done.";
          volumeMounts:
            - mountPath: /custom-tools
              name: custom-tools
      # 3. Volume mount the custom binary to the bin directory (overriding the existing version)
      containers:
        - name: argocd-repo-server
          volumeMounts:
            - mountPath: /usr/local/bin/kustomize
              name: custom-tools
              subPath: kustomize
              # Verify this matches a XDG_CONFIG_HOME=/.config env variable
            - mountPath: /.config/kustomize/plugin/viaduct.ai/v1/ksops/ksops.so
              name: custom-tools
              subPath: ksops.so
          # 4. Set the XDG_CONFIG_HOME env variable to allow kustomize to detect the plugin
          env:
            - name: XDG_CONFIG_HOME
              value: /.config
        ## If you use AWS or GCP KMS, don't forget to include the necessary credentials to decrypt the secrets!
            - name: AWS_ACCESS_KEY_ID
              valueFrom:
                secretKeyRef:
                  name: argo
                  key: aws_access_key
            - name: AWS_SECRET_ACCESS_KEY
              valueFrom:
                secretKeyRef:
                  name: argo
                  key: aws_access_secret_key

@devstein
Collaborator

@evercast-chris The patch is incompatible with KSOPS v3. If you look again at the README, it should look like:

# argo-cd-repo-server-ksops-patch.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: argocd-repo-server
  spec:
  template:
    spec:
      # 1. Define an emptyDir volume which will hold the custom binaries
      volumes:
        - name: custom-tools
          emptyDir: {}
      # 2. Use an init container to download/copy custom binaries into the emptyDir
      initContainers:
        - name: install-ksops
          image: viaductoss/ksops:v3.0.1
          command: ["/bin/sh", "-c"]
          args:
            - echo "Installing KSOPS...";
              mv ksops /custom-tools/;
              mv $GOPATH/bin/kustomize /custom-tools/;
              echo "Done.";
          volumeMounts:
            - mountPath: /custom-tools
              name: custom-tools
      # 3. Volume mount the custom binary to the bin directory (overriding the existing version)
      containers:
        - name: argocd-repo-server
          volumeMounts:
            - mountPath: /usr/local/bin/kustomize
              name: custom-tools
              subPath: kustomize
              # Verify this matches a XDG_CONFIG_HOME=/.config env variable
            - mountPath: /.config/kustomize/plugin/viaduct.ai/v1/ksops/ksops
              name: custom-tools
              subPath: ksops
          # 4. Set the XDG_CONFIG_HOME env variable to allow kustomize to detect the plugin
          env:
            - name: XDG_CONFIG_HOME
              value: /.config
        ## If you use AWS or GCP KMS, don't forget to include the necessary credentials to decrypt the secrets!
            - name: AWS_ACCESS_KEY_ID
              valueFrom:
                secretKeyRef:
                  name: argo
                  key: aws_access_key
            - name: AWS_SECRET_ACCESS_KEY
              valueFrom:
                secretKeyRef:
                  name: argo
                  key: aws_access_secret_key

Notice the difference in step #2 and step #3 (the executable shouldn't have a .so suffix).

@evercast-chris
Author

Thank you for taking a look @devstein. I'm going to try this on my primary ArgoCD environment so I have a clean deployment to work with. I want to confirm whether I should be using kustomize.buildOptions: "--enable-alpha-plugins" in my configmap. Also, I am applying argo-cd-repo-server-ksops-patch.yaml from my "yamls" folder on my desktop; would the following command still be the right one to execute the patch?

chrisquiles@Christophers-MacBook-Pro yamls % kubectl patch \                      
  -n argocd deployment/argocd-repo-server \
  -p "$(cat argo-cd-repo-server-ksops-patch.yaml)"

@devstein
Collaborator

I want to confirm if I should be using the command kustomize.buildOptions: "--enable-alpha-plugins" in my configmap.

Yes

Also, I am patching argo-cd-repo-server-ksops-patch.yaml from my "yamls" folder on my desktop, would the following command still be an efficient one to execute the patch

I think it should work, but I always use kustomize to build and patch the manifests and then kubectl apply the generated output directly
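For reference, the kustomize.buildOptions setting lives in the argocd-cm ConfigMap; a minimal sketch of the entry, assuming standard Argo CD resource names:

apiVersion: v1
kind: ConfigMap
metadata:
  name: argocd-cm
  namespace: argocd
  labels:
    app.kubernetes.io/name: argocd-cm
    app.kubernetes.io/part-of: argocd
data:
  kustomize.buildOptions: "--enable-alpha-plugins"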

@evercast-chris
Author

@devstein OK cool. Regarding using kustomize to patch this file: I don't have the ArgoCD manifests in a repo anywhere; they only exist in the cluster under the ArgoCD namespace. Does patching with kustomize mean using a strategic merge patch in a kustomization.yaml, or is there an actual patch command, like there is for build?

@devstein
Collaborator

devstein commented Sep 17, 2021

Got it. If this is a one-off operation and you want to sanity-check the results with kustomize, you could:

1. Use kubectl to get the current Argo CD manifest and export it locally:

kubectl -n argocd get deployment/argocd-repo-server -o yaml > base.yaml

2. Create a kustomization.yaml file:

resources:
- ./base.yaml

patches:
- ./argo-cd-repo-server-ksops-patch.yaml

3. Run kustomize build and kubectl apply:

kustomize build --enable-alpha-plugins ./ | kubectl apply -f -
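Putting step 2 together in full, the kustomization.yaml would look roughly like this (the apiVersion/kind header and the path: form are my additions; older kustomize versions may expect patchesStrategicMerge instead):

apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization

resources:
  - ./base.yaml

patches:
  - path: ./argo-cd-repo-server-ksops-patch.yaml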

@evercast-chris
Author

Hey @devstein, thanks for the explanation, it's very helpful. I'm getting an error and am trying to clarify what the patches part of this kustomization.yaml file is doing. It looks like I'm putting the output for argocd-repo-server into base.yaml locally, so I want to confirm that I am applying the new patch and not the same manifest over base.yaml. Should I be referencing argo-cd-repo-server-ksops-patch.yaml instead of

patches:
- ./deployment/argocd-repo-server

Is that correct?

@devstein
Collaborator

@evercast-chris Right!

@evercast-chris
Author

evercast-chris commented Sep 20, 2021

@devstein Awesome, it built. I just needed to tweak the kustomization.yaml a bit with:

target:
  labelSelector: "app.kubernetes.io/name=nginx"

However, it is not letting me apply the result. I'm getting the following error:
error: error validating "STDIN": error validating data: ValidationError(Deployment.metadata): unknown field "template" in io.k8s.apimachinery.pkg.apis.meta.v1.ObjectMeta; if you choose to ignore these errors, turn validation off with --validate=false

I tried using the --validate=false flag as suggested but then received this message:

Resource=deployments", GroupVersionKind: "apps/v1, Kind=Deployment"
Name: "argocd-repo-server", Namespace: "argocd"
for: "STDIN": Operation cannot be fulfilled on deployments.apps "argocd-repo-server": the object has been modified; please apply your changes to the latest version and try again

From my research this seems like it might be a syntax error somewhere, but I'm not sure. What would you recommend at this point?

@devstein
Collaborator

Hey @evercast-chris, you are right, this is a syntax error in the base.yaml. I can help if you paste the contents here, but I understand if there are any sensitive fields.

@evercast-chris
Author

@devstein OK, the base.yaml checks out as valid YAML. The only warning I have seen with it is about resources; here is that section of the YAML:

        resources: {}
        securityContext:
          allowPrivilegeEscalation: false
          capabilities:
            drop:
            - all

VS Code states: no resource limits specified for this container - this could starve other processes.

@evercast-chris
Author

@devstein here is the base.yaml file I am using...

apiVersion: apps/v1
kind: Deployment
metadata:
  annotations:
    deployment.kubernetes.io/revision: "1"
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"apps/v1","kind":"Deployment","metadata":{"annotations":{},"labels":{"app.kubernetes.io/component":"repo-server","app.kubernetes.io/name":"argocd-repo-server","app.kubernetes.io/part-of":"argocd"},"name":"argocd-repo-server","namespace":"argocd"},"spec":{"selector":{"matchLabels":{"app.kubernetes.io/name":"argocd-repo-server"}},"template":{"metadata":{"labels":{"app.kubernetes.io/name":"argocd-repo-server"}},"spec":{"affinity":{"podAntiAffinity":{"preferredDuringSchedulingIgnoredDuringExecution":[{"podAffinityTerm":{"labelSelector":{"matchLabels":{"app.kubernetes.io/name":"argocd-repo-server"}},"topologyKey":"kubernetes.io/hostname"},"weight":100},{"podAffinityTerm":{"labelSelector":{"matchLabels":{"app.kubernetes.io/part-of":"argocd"}},"topologyKey":"kubernetes.io/hostname"},"weight":5}]}},"automountServiceAccountToken":false,"containers":[{"command":["uid_entrypoint.sh","argocd-repo-server","--redis","argocd-redis:6379"],"image":"quay.io/argoproj/argocd:v2.0.4","imagePullPolicy":"Always","livenessProbe":{"failureThreshold":3,"httpGet":{"path":"/healthz?full=true","port":8084},"initialDelaySeconds":30,"periodSeconds":5},"name":"argocd-repo-server","ports":[{"containerPort":8081},{"containerPort":8084}],"readinessProbe":{"httpGet":{"path":"/healthz","port":8084},"initialDelaySeconds":5,"periodSeconds":10},"securityContext":{"allowPrivilegeEscalation":false,"capabilities":{"drop":["all"]}},"volumeMounts":[{"mountPath":"/app/config/ssh","name":"ssh-known-hosts"},{"mountPath":"/app/config/tls","name":"tls-certs"},{"mountPath":"/app/config/gpg/source","name":"gpg-keys"},{"mountPath":"/app/config/gpg/keys","name":"gpg-keyring"},{"mountPath":"/app/config/reposerver/tls","name":"argocd-repo-server-tls"}]}],"volumes":[{"configMap":{"name":"argocd-ssh-known-hosts-cm"},"name":"ssh-known-hosts"},{"configMap":{"name":"argocd-tls-certs-cm"},"name":"tls-certs"},{"configMap":{"name":"argocd-gpg-keys-cm"},"name":"gpg-keys"},{"emptyDir":{},"name":"gpg-keyring"},{"name":"argocd-repo-server-tls","secret":{"items":[{"key":"tls.crt","path":"tls.crt"},{"key":"tls.key","path":"tls.key"},{"key":"ca.crt","path":"ca.crt"}],"optional":true,"secretName":"argocd-repo-server-tls"}}]}}}}
  creationTimestamp: "2021-06-29T22:49:32Z"
  generation: 1
  labels:
    app.kubernetes.io/component: repo-server
    app.kubernetes.io/name: argocd-repo-server
    app.kubernetes.io/part-of: argocd
  name: argocd-repo-server
  namespace: argocd
  resourceVersion: "57502420"
  selfLink: /apis/apps/v1/namespaces/argocd/deployments/argocd-repo-server
  uid: f73e5304-d16e-4783-aad4-9e3ac8b7c565
spec:
  progressDeadlineSeconds: 600
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      app.kubernetes.io/name: argocd-repo-server
  strategy:
    rollingUpdate:
      maxSurge: 25%
      maxUnavailable: 25%
    type: RollingUpdate
  template:
    metadata:
      creationTimestamp: null
      labels:
        app.kubernetes.io/name: argocd-repo-server
    spec:
      affinity:
        podAntiAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
          - podAffinityTerm:
              labelSelector:
                matchLabels:
                  app.kubernetes.io/name: argocd-repo-server
              topologyKey: kubernetes.io/hostname
            weight: 100
          - podAffinityTerm:
              labelSelector:
                matchLabels:
                  app.kubernetes.io/part-of: argocd
              topologyKey: kubernetes.io/hostname
            weight: 5
      automountServiceAccountToken: false
      containers:
      - command:
        - uid_entrypoint.sh
        - argocd-repo-server
        - --redis
        - argocd-redis:6379
        image: quay.io/argoproj/argocd:v2.0.4
        imagePullPolicy: Always
        livenessProbe:
          failureThreshold: 3
          httpGet:
            path: /healthz?full=true
            port: 8084
            scheme: HTTP
          initialDelaySeconds: 30
          periodSeconds: 5
          successThreshold: 1
          timeoutSeconds: 1
        name: argocd-repo-server
        ports:
        - containerPort: 8081
          protocol: TCP
        - containerPort: 8084
          protocol: TCP
        readinessProbe:
          failureThreshold: 3
          httpGet:
            path: /healthz
            port: 8084
            scheme: HTTP
          initialDelaySeconds: 5
          periodSeconds: 10
          successThreshold: 1
          timeoutSeconds: 1
        resources: {}
        securityContext:
          allowPrivilegeEscalation: false
          capabilities:
            drop:
            - all
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
        volumeMounts:
        - mountPath: /app/config/ssh
          name: ssh-known-hosts
        - mountPath: /app/config/tls
          name: tls-certs
        - mountPath: /app/config/gpg/source
          name: gpg-keys
        - mountPath: /app/config/gpg/keys
          name: gpg-keyring
        - mountPath: /app/config/reposerver/tls
          name: argocd-repo-server-tls
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      terminationGracePeriodSeconds: 30
      volumes:
      - configMap:
          defaultMode: 420
          name: argocd-ssh-known-hosts-cm
        name: ssh-known-hosts
      - configMap:
          defaultMode: 420
          name: argocd-tls-certs-cm
        name: tls-certs
      - configMap:
          defaultMode: 420
          name: argocd-gpg-keys-cm
        name: gpg-keys
      - emptyDir: {}
        name: gpg-keyring
      - name: argocd-repo-server-tls
        secret:
          defaultMode: 420
          items:
          - key: tls.crt
            path: tls.crt
          - key: tls.key
            path: tls.key
          - key: ca.crt
            path: ca.crt
          optional: true
          secretName: argocd-repo-server-tls

@evercast-chris
Author

@devstein any suggestions on this? I think I've tried just about everything to get this init container to patch correctly. Thanks.
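A hedged observation from outside the thread: the earlier ValidationError about an unknown field "template" in ObjectMeta is exactly what appears when spec: ends up indented under metadata: in the patch file, as in the patches pasted earlier, so the top of argo-cd-repo-server-ksops-patch.yaml may simply need re-indenting. A sketch of the top-level structure only:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: argocd-repo-server
spec:            # spec must be a sibling of metadata, not nested under it
  template:
    spec:
      # volumes, initContainers, containers as in the README patch

The second error about the object having been modified is separate: the exported base.yaml still carries server-managed fields such as metadata.resourceVersion and status, which usually have to be removed before the built output can be re-applied.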

@devstein
Collaborator

Hey @evercast-chris. I assume you have already, but if you haven't tried this on a fresh, not-yet-running deployment of Argo CD, I would recommend starting there. That is what the patch in the README is intended for.

The only alternative would be to pursue the custom Docker image approach instead.

@evercast-chris
Author

@devstein Thanks Dev, I did attempt to apply the patch to an unedited repo-server manifest, but I keep getting the "patched (no change)" message in the terminal. I did try the Docker image once but wasn't able to build it successfully; are there more detailed steps you can provide beyond just the image? I wish there was more information out there on KSOPS, like a demo or documentation with instructions on how to use the image correctly with ArgoCD.

@devstein
Collaborator

Sorry to hear @evercast-chris -- for the image, once it's successfully built you can use it in the argocd-repo-server deployment.

Adding more links to examples, or including an example in the repo is a good idea. Here are two links to examples that I know of that could be helpful

@evercast-chris
Author

@devstein Thanks Dev, I'm sure I'm missing something on my end. I think I'm going to give building the image another try. Thank you for the additional resources, this is helpful. I appreciate the time!

@evercast-chris
Author

@devstein You were right, the patch did work on a brand-new ArgoCD environment, which is definitely progress, thanks! I am running KSOPS locally and it seems to work fine with the secret.yaml I am testing. However, I am getting an error from ArgoCD and I'm not exactly sure what it is saying; I believe I may have the secret generator in the wrong folder. I am following the references you sent me and locally everything is good. Any clue what this error might mean?

Unable to create application: application spec is invalid: InvalidSpecError: Unable to get app details: rpc error: code = Unknown desc = kustomize build /tmp/git@github.com_evercast_kustomize-fev3/overlays/qa/uset1-008 --enable-alpha-plugins failed exit status 1: trouble decrypting file Error getting data key: 0 successful groups required, got 0Error: accumulating resources: accumulation err='accumulating resources from '../../../base': '/tmp/git@github.com_evercast_kustomize-fev3/base' must resolve to a file': recursed accumulation of path '/tmp/git@github.com_evercast_kustomize-fev3/base': failure in plugin configured via /tmp/kust-plugin-config-017721842; exit status 1: exit status 1

@devstein
Collaborator

Glad you made progress @evercast-chris!

That error is hard to parse, but looking at the message failed exit status 1: trouble decrypting file Error getting data key: 0 successful groups required, got 0, it is clear that Argo CD is loading the KSOPS generator manifest and trying to decrypt one of the encrypted files it references.

Does Argo CD have access to the private keys to decrypt this secret? If you are using a PGP key, I recommend looking at this previous issue #24 for tips
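Since the patch earlier in this thread wires AWS credentials into the repo-server, that identity also needs kms:Decrypt on whatever key .sops.yaml references. A hypothetical .sops.yaml for orientation (the regexes and ARN are placeholders, not taken from this thread):

creation_rules:
  - path_regex: .*\.enc\.ya?ml$
    encrypted_regex: ^(data|stringData)$
    kms: arn:aws:kms:us-east-1:111111111111:key/00000000-0000-0000-0000-000000000000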

@evercast-chris
Author

@devstein Actually this one doesn't, since it's a new ArgoCD cluster; thank you for the reminder! I'll keep you posted on the results, it might just be the missing piece.

@evercast-chris
Author

@devstein Still troubleshooting. I found this error in the running pod for argocd-repo-server and it's definitely not able to find the gpg-keys; I don't have this mount path. I'm wondering if you have seen this issue before? I tried to place the GPG keys in ArgoCD, but it's not that simple since it requires an ASCII-armored key. Anyway, I'm wondering if there is an efficient way to correct this?
Warning FailedMount 85s kubelet MountVolume.SetUp failed for volume "gpg-keys" : failed to sync configm

          name: gpg-keys
        - mountPath: /app/config/gpg/keys

@devstein
Collaborator

Hey @evercast-chris, I have not; however, if you haven't already, I would try referencing #24 and https://github.com/james-callahan/example-gitops/tree/master/argocd for potential solutions.

@evercast-chris
Author

@devstein I have to be getting pretty close, but I'm wondering if you have seen this error before? If so, do you happen to know what it might mean? It is longer than this, but I don't want to overwhelm. Thank you!

[devops|argocd] ➜  uset1-006 git:(feat/argo-testing) kustomize build --enable-alpha-plugins .
panic: number of previous names not equal to number of previous namespaces
goroutine 1 [running]:
sigs.k8s.io/kustomize/api/resource.(*Resource).PrevIds(0xc000101d10, 0xc000e88bd0, 0xc00004e900, 0xc00004e900)
	/Users/brew/Library/Caches/Homebrew/go_mod_cache/pkg/mod/sigs.k8s.io/kustomize/[email protected]/resource/resource.go:434 +0x3f1
sigs.k8s.io/kustomize/api/resource.(*Resource).OrgId(0xc000101d10, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, ...)
	/Users/brew/Library/Caches/Homebrew/go_mod_cache/pkg/mod/sigs.k8s.io/kustomize/[email protected]/resource/resource.go:414 +0x65
sigs.k8s.io/kustomize/api/builtins.(*PrefixSuffixTransformerPlugin).Transform(0xc000a78f00, 0x4889f38, 0xc0004a2510, 0x0, 0x0)
	/Users/brew/Library/Caches/Homebrew/go_mod_cache/pkg/mod/sigs.k8s.io/kustomize/[email protected]/builtins/PrefixSuffixTransformer.go:52 +0xa5
sigs.k8s.io/kustomize/api/internal/target.(*multiTransformer).transform(0xc0005955a8, 0x4889f38, 0xc0004a2510, 0xc000b08601, 0x5'

@devstein
Collaborator

@evercast-chris I have not, but it looks like it's not specific to KSOPS. Want to share your kustomization.yaml?

@evercast-chris
Author

@devstein Yes, this does look more like a SOPS error than a KSOPS issue. Do I need to include a secret-generator and a KSOPS secret YAML file as well? This is the kustomization file I have in my overlay; secret.enc.yml is my encrypted file.

apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization

namespace: evercast-qa

bases:
- ../../../base

patchesStrategicMerge:
- ingress.yml
- image.yml

resources:
- secret.enc.yml

@devstein
Collaborator

Yes, you do. Try this:

apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization

namespace: evercast-qa

bases:
- ../../../base

patches:
- ingress.yml
- image.yml

generators:
- secret-generator.yaml

where secret-generator.yaml looks like

apiVersion: viaduct.ai/v1
kind: ksops
metadata:
  name: secret-generator
files:
  - ./secret.enc.yml

@evercast-chris
Author

@devstein OK cool, I also have this secret generator set up in my base kustomization. So one secret-generator.yaml in base and one in the overlay of choice? I also think my AWS KMS permissions in my .sops.yaml file might be making things more complicated than they need to be.

@devstein
Collaborator

@evercast-chris Oh, in that case what is the goal of referencing

resources:
- secret.enc.yml

in the overlay?

@evercast-chris
Author

evercast-chris commented Oct 28, 2021

@devstein So when I'm creating an application in Argo, as long as I reference the secret.enc.yaml from my base folder in kustomization.yaml, it recognizes that there is an encrypted secret. However, it won't fully sync the ArgoCD application. I thought perhaps that was because the secret I am referencing in my base folder is not in my cluster, so I did a kubectl apply -f of the secret.enc.yaml on the cluster ArgoCD is deployed on, but I still cannot get the secret to fully sync in the ArgoCD application. The only time I could get a full sync on a secret manifest was when it was unencrypted. So my guess was to reference the secret in the overlay and maybe Argo would fully recognize it, but unfortunately that gives a long error in ArgoCD and it also does not kustomize build locally from the overlay.

@devstein
Collaborator

@evercast-chris You never want to directly reference a SOPS-encrypted secrets file in kustomization.yaml, because it isn't a valid K8s manifest. This is the reason to use KSOPS. Instead of directly referencing the secret.enc.yml file, you should reference the KSOPS generator file secret-generator.yaml in the kustomization.yaml, which decrypts it and generates a valid K8s manifest for you.

In general, secrets are often overlay specific, so it typically makes sense to use KSOPS in the overlay.
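Concretely, the overlay kustomization.yaml from earlier in the thread would then look roughly like this (a sketch that keeps the original patchesStrategicMerge entries and swaps the encrypted file for the generator):

apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization

namespace: evercast-qa

bases:
  - ../../../base

patchesStrategicMerge:
  - ingress.yml
  - image.yml

generators:
  - secret-generator.yaml

# secret.enc.yml is no longer listed under resources; it is only referenced from secret-generator.yaml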

@evercast-chris
Author

@devstein This is great to know. I was actually following the documentation at https://dev.to/stack-labs/gitops-demo-with-argo-cd-and-ksops-on-gke-2a0l, where the KSOPS secret and generator are in the base folder. Is that where they should be stored, or only in the overlay folder I am trying to create the ArgoCD application for, or both? Sorry for all the questions, I appreciate the patience.
