spec.template.spec.containers[0].image: Required value #38

Open

chuegel opened this issue Jan 1, 2020 · 15 comments
@chuegel

chuegel commented Jan 1, 2020

After upgrading to 0.0.23, I get the following error when I try to inject a proxy container:

time="2019-12-31T18:03:23Z" level=error msg="Deployment.apps \"http-svc\" is invalid: spec.template.spec.containers[0].image: Required value"

time="2019-12-31T18:03:23Z" level=info msg="Updated service... http-svc"

This is the deployment:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: http-svc 
  annotations:
    authproxy.stakater.com/enabled: "true"
    authproxy.stakater.com/redirection-url: http://hello.xxxxx.com
    authproxy.stakater.com/resources: uri=/*|roles=g-xxxx-Admin|require-any-role=true
    authproxy.stakater.com/source-service-name: "http-svc"
    authproxy.stakater.com/target-port: "3000"
    authproxy.stakater.com/upstream-url: http://127.0.0.1
spec:
  replicas: 1
  selector:
    matchLabels:
      app: http-svc
  template:
    metadata:
      labels:
        app: http-svc
    spec:
      containers:
      - name: http-svc
        image: gcr.io/kubernetes-e2e-test-images/echoserver:2.1
        ports:
        - containerPort: 8080
        env:
          - name: NODE_NAME
            valueFrom:
              fieldRef:
                fieldPath: spec.nodeName
          - name: POD_NAME
            valueFrom:
              fieldRef:
                fieldPath: metadata.name
          - name: POD_NAMESPACE
            valueFrom:
              fieldRef:
                fieldPath: metadata.namespace
          - name: POD_IP
            valueFrom:
              fieldRef:
                fieldPath: status.podIP

---

apiVersion: v1
kind: Service
metadata:
  name: http-svc
  labels:
    app: http-svc
spec:
  ports:
  - port: 80
    targetPort: 8080
    protocol: TCP
    name: http
  selector:
    app: http-svc
@chuegel chuegel changed the title spec.template.spec.containers[0].image: Required value" spec.template.spec.containers[0].image: Required value Jan 1, 2020
@chuegel
Author

chuegel commented Jan 1, 2020

Nevermind, I had a typo in the config.

@chuegel chuegel closed this as completed Jan 1, 2020
@SebastienTolron

Hey,

Having the same issue.

What was the typo?

@usamaahmadkhan
Contributor

@chuegel can you help here?

@chuegel
Author

chuegel commented Jan 13, 2020

@Stolr

I had a typo in my deployment yaml. This is a working example:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: http-svc-v2
  annotations:
    authproxy.stakater.com/enabled: "true"
    authproxy.stakater.com/redirection-url: https://hello.example.com
    authproxy.stakater.com/resources: uri=/*|roles=g-xxxx-Admin|require-any-role=true
    authproxy.stakater.com/source-service-name: http-svc
    authproxy.stakater.com/target-port: "3000"
    authproxy.stakater.com/upstream-url: http://127.0.0.1:8080 
spec:
  replicas: 1
  selector:
    matchLabels:
      app: http-svc-v2
  template:
    metadata:
      labels:
        app: http-svc-v2
    spec:
      containers:
      - name: http-svc-v2
        image: "gcr.io/kubernetes-e2e-test-images/echoserver:2.1"
        ports:
        - containerPort: 8080
        env:
          - name: NODE_NAME
            valueFrom:
              fieldRef:
                fieldPath: spec.nodeName
          - name: POD_NAME
            valueFrom:
              fieldRef:
                fieldPath: metadata.name
          - name: POD_NAMESPACE
            valueFrom:
              fieldRef:
                fieldPath: metadata.namespace
          - name: POD_IP
            valueFrom:
              fieldRef:
                fieldPath: status.podIP

---

apiVersion: v1
kind: Service
metadata:
  name: http-svc-v2
  labels:
    app: http-svc-v2
spec:
  ports:
  - port: 80
    targetPort: 8080
    protocol: TCP
    name: http
  selector:
    app: http-svc-v2

@echel0n

echel0n commented Jan 14, 2020

This is still an issue for me

@usamaahmadkhan
Contributor

@echel0n @Stolr can you confirm that this is only for v0.0.23? Also, please share:

  • k8s version
  • yaml manifests you are using

@SebastienTolron

@usamaahmadkhan: I only tried with v0.0.23, not with an earlier version (the earlier version was not working, see #36).

K8s: 1.16.4

ConfigMap:

proxyconfig:
  gatekeeper-image : "keycloak/keycloak-gatekeeper:6.0.1"
  client-id: "k8s"
  client-secret: ${CLIENTSECRET}
  enable-default-deny: true
  secure-cookie: false
  verbose: true
  enable-logging: true
  listen: 0.0.0.0:80
  cors-origins:
    - '*'
  cors-methods:
    - GET
    - POST
  resources:
    - uri: '/*'
  scopes:
    - 'good-service'

(Converted to a secret)
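
For reference, the secret can be created from that config file roughly like this (the file name config.yml and the security namespace are assumptions from my setup, adjust as needed):

kubectl create secret generic proxyinjector \
  --from-file=config.yml=config.yml \
  --namespace security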

ProxyInjector:

apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    app: proxyinjector
    group: com.stakater.platform
    provider: stakater
    version: v0.0.23
    chart: "proxyinjector-v0.0.23"
    release: "release-name"
    heritage: "Tiller"
  name: proxyinjector
  namespace: security

---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  labels:
    app: proxyinjector
    group: com.stakater.platform
    provider: stakater
    version: v0.0.23
    chart: "proxyinjector-v0.0.23"
    release: "release-name"
    heritage: "Tiller"
  name: proxyinjector-role
  namespace: security
rules:
  - apiGroups:
      - ""
      - "extensions"
      - "apps"
    resources:
      - deployments
      - daemonsets
      - statefulsets
      - services
      - configmaps
    verbs:
      - list
      - get
      - watch
      - update
      - create
      - patch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  labels:
    app: proxyinjector
    group: com.stakater.platform
    provider: stakater
    version: v0.0.23
    chart: "proxyinjector-v0.0.23"
    release: "release-name"
    heritage: "Tiller"
  name: proxyinjector-role-binding
  namespace: security
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: proxyinjector-role
subjects:
  - kind: ServiceAccount
    name: proxyinjector
    namespace: security

---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: proxyinjector
    group: com.stakater.platform
    provider: stakater
    version: v0.0.23
    chart: "proxyinjector-v0.0.23"
    release: "release-name"
    heritage: "Tiller"
  name: proxyinjector
  namespace: security
spec:
  replicas: 1
  revisionHistoryLimit: 2
  selector:
    matchLabels:
      app: proxyinjector
      group: com.stakater.platform
      provider: stakater
  template:
    metadata:
      labels:
        app: proxyinjector
        group: com.stakater.platform
        provider: stakater
    spec:
      containers:
        - env:
            - name: CONFIG_FILE_PATH
              value: "/etc/ProxyInjector/config.yml"
          image: "stakater/proxyinjector:v0.0.23"
          imagePullPolicy: IfNotPresent
          name: proxyinjector
          volumeMounts:
            - mountPath: /etc/ProxyInjector
              name: config-volume
      serviceAccountName: proxyinjector
      volumes:
        - secret:
            secretName: proxyinjector
          name: config-volume

And here is a deployment I'm trying to annotate:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: clusterinfo
  namespace: kube-system
  labels:
    app: clusterinfo
  annotations:
    authproxy.stakater.com/enabled: "true"
    authproxy.stakater.com/redirection-url: https://sso.tolron.fr
    authproxy.stakater.com/source-service-name: clusterinfo
    authproxy.stakater.com/target-port: "3000"
    authproxy.stakater.com/upstream-url: http://127.0.0.1:8080
spec:
  replicas: 1
  selector:
    matchLabels:
      app: clusterinfo
  template:
    metadata:
      labels:
        app: clusterinfo
    spec:
      containers:
        - name: clusterinfo
          image: "stolron/clusterinfo"
          imagePullPolicy: Always

@kw-jk

kw-jk commented Jan 20, 2020

I am also having the same issue here.
Not sure what's wrong.

@kw-jk

kw-jk commented Jan 20, 2020

Never mind, I managed to get it working.
For those who are having the same issue, you can remove the proxyconfig: key in the ConfigMap, so that it looks like the example below.

apiVersion: v1
kind: ConfigMap
metadata:
  labels:
    app: proxyinjector
    version: v0.0.23
    group: com.stakater.platform
    provider: stakater
    chart: "proxyinjector-v0.0.23"
    release: "proxyinjector"
    heritage: "Tiller"
  name: proxyinjector
data:
  config.yml: |-
      gatekeeper-image : "keycloak/keycloak-gatekeeper:6.0.1"
      enable-default-deny: true
      secure-cookie: false
      verbose: true
      enable-logging: true
      cors-origins:
      - '*'
      cors-methods:
      - GET
      - POST
      resources:
      - uri: '/*'
      scopes:
      - 'good-service'
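
To pick up the change, re-applying the ConfigMap and restarting the injector should be enough; the file name, deployment name, and namespace below are just an example, adjust them to your setup:

kubectl apply -f configmap.yaml
kubectl rollout restart deployment/proxyinjector -n security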

@SebastienTolron

Is there any news about this?

@usamaahmadkhan
Contributor

@Stolr remove the proxyconfig: key in the ConfigMap as described by @kw-jk above. PRs are welcome for a permanent fix. :)
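
A quick way to check whether the injection produced a valid sidecar is to list the container images on the patched Deployment; the deployment name and namespace below are placeholders:

kubectl get deployment <your-deployment> -n <your-namespace> \
  -o jsonpath='{.spec.template.spec.containers[*].image}'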

@SebastienTolron

Awesome, thanks @usamaahmadkhan!

@pravinkhot123

I am also getting the same issue.

Error:

Error: cannot patch "search-manager" with kind Deployment: Deployment.apps "search-manager" is invalid: spec.template.spec.containers[1].image: Required value

│ with module.service_e2e.helm_release.main,
│ on ../../../../modules/search-manager/helm.tf line 1, in resource "helm_release" "main":
│ 1: resource "helm_release" "main" {

Deployment.yaml

apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ include "alto-default.fullname" . }}
  labels:
    {{- include "alto-default.labels" . | nindent 4 }}
spec:
  replicas: {{ .Values.replicaCount }}
  selector:
    matchLabels:
      {{- include "alto-default.selectorLabels" . | nindent 6 }}
  template:
    metadata:
      labels:
        {{- include "alto-default.labels" . | nindent 8 }}
    spec:
      {{- with .Values.imagePullSecrets }}
      imagePullSecrets:
        {{- toYaml . | nindent 8 }}
      {{- end }}
      serviceAccountName: {{ include "alto-default.serviceAccountName" . }}
      {{- if .Values.topologySpreadConstraints }}
      topologySpreadConstraints: {{- include "alto-default.tplvalues.render" (dict "value" .Values.topologySpreadConstraints "context" .) | nindent 8 }}
      {{- end }}
      securityContext:
        {{- toYaml .Values.podSecurityContext | nindent 8 }}
      containers:
        - name: {{ include "alto-default.fullname" . }}
          securityContext:
            {{- toYaml .Values.securityContext | nindent 12 }}
          image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
          imagePullPolicy: {{ .Values.image.pullPolicy }}
          {{- include "alto-default.envVars" . | nindent 10 }}
          ports:
            - name: http
              containerPort: {{ .Values.service.targetPort }}
              protocol: TCP
            {{- if and .Values.metrics.enabled .Values.service.metricsPort }}
            - name: metrics
              containerPort: {{ .Values.service.metricsPort }}
              protocol: TCP
            {{- end }}
          livenessProbe:
            httpGet:
              path: {{ .Values.health.livenessProbe.path }}
              port: http
            periodSeconds: {{ .Values.health.livenessProbe.periodSeconds }}
            initialDelaySeconds: {{ .Values.health.livenessProbe.initialDelaySeconds }}
            timeoutSeconds: {{ .Values.health.livenessProbe.timeoutSeconds }}
            failureThreshold: {{ .Values.health.livenessProbe.failureThreshold }}
            successThreshold: {{ .Values.health.livenessProbe.successThreshold }}
          readinessProbe:
            httpGet:
              path: {{ .Values.health.readinessProbe.path }}
              port: http
            periodSeconds: {{ .Values.health.readinessProbe.periodSeconds }}
            initialDelaySeconds: {{ .Values.health.readinessProbe.initialDelaySeconds }}
            timeoutSeconds: {{ .Values.health.readinessProbe.timeoutSeconds }}
            failureThreshold: {{ .Values.health.readinessProbe.failureThreshold }}
            successThreshold: {{ .Values.health.readinessProbe.successThreshold }}
          resources:
            {{- toYaml .Values.resources | nindent 12 }}
          {{- if .Values.efs.id }}
          volumeMounts:
            - name: efs-data
              mountPath: /mnt/data
          {{- end }}
        {{- if .Values.sidecars }}
        {{- include "alto-default.tplvalues.render" (dict "value" .Values.sidecars "context" $) | nindent 8 }}
        {{- end }}
        {{- if .Values.efs.id }}
        - name: aws-gcp-configmap-volume
          mountPath: /var/run/secrets
      volumes:
        - name: efs-data
          persistentVolumeClaim:
            claimName: {{ include "alto-default.fullname" . }}
      {{- end }}
      {{- with .Values.nodeSelector }}
        - name: aws-gcp-configmap-volume
          configMap:
            name: aws-gcp-config
      nodeSelector:
        {{- toYaml . | nindent 8 }}
      {{- end }}
      {{- with .Values.affinity }}
      affinity:
        {{- toYaml . | nindent 8 }}
      {{- end }}
      {{- with .Values.tolerations }}
      tolerations:
        {{- toYaml . | nindent 8 }}
      {{- end }}

Configmap.yaml

kind: ConfigMap
apiVersion: v1
metadata:
  name: aws-gcp-config
data:
  aws-gcp-provider-us-qa.json: |
    {
      "type": "external_account",
      "audience": "//iam.googleapis.com/projects/1092006856739/locations/global/workloadIdentityPools/aws-pool-search-manager/providers/aws-pool-searchmanger-provide",
      "subject_token_type": "urn:ietf:params:aws:token-type:aws4_request",
      "service_account_impersonation_url": "https://iamcredentials.googleapis.com/v1/projects/-/serviceAccounts/[email protected]:generateAccessToken",
      "token_url": "https://sts.googleapis.com/v1/token",
      "credential_source": {
        "environment_id": "aws1",
        "region_url": "http://169.254.169.254/latest/meta-data/placement/availability-zone",
        "url": "http://169.254.169.254/latest/meta-data/iam/security-credentials",
        "regional_cred_verification_url": "https://sts.{region}.amazonaws.com?Action=GetCallerIdentity&Version=2011-06-15"
      }
    }

@n00bsi

n00bsi commented Nov 12, 2024

@chuegel

did you find a solution for that issue?
I have the same problem with a values.yaml file that worked with 1.6.x.

@chuegel
Author

chuegel commented Nov 13, 2024

@n00bsi unfortunately I can't tell... it's been 5 years.
Post your yaml and I'll try to replicate it.
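
In the meantime, rendering the chart locally and inspecting the Deployment's containers section can show which container entry is missing its image; the release name, chart path, and values file below are placeholders:

helm template my-release ./my-chart -f values.yaml > rendered.yaml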
