This repository has been archived by the owner on May 16, 2023. It is now read-only.

filebeat readinessProbe fails on valid/working deployment #325

Closed
iridian-ks opened this issue Oct 14, 2019 · 3 comments
Labels
bug Something isn't working

Comments

@iridian-ks

Chart version:

7.4.0

Kubernetes version:

1.14

Kubernetes provider: E.g. GKE (Google Kubernetes Engine)

Docker for Desktop

Helm Version:

2.18

helm get release output

e.g. helm get elasticsearch (replace elasticsearch with the name of your helm release)

REVISION: 1
RELEASED: Sun Oct 13 18:02:32 2019
CHART: filebeat-7.4.0
USER-SUPPLIED VALUES:
filebeatConfig:
  filebeat.yml: |
    logging:
      json: true
    filebeat.inputs:
    - type: docker
      containers.ids:
      - '*'
      processors:
      - add_kubernetes_metadata:
          in_cluster: true
    # TODO: Kafka should be the first preference once we get a chance to
    # implement it.
    output.file:
      path: "/tmp/filebeat"
      filename: "filebeat"

COMPUTED VALUES:
affinity: {}
extraEnvs: []
extraVolumeMounts: ""
extraVolumes: ""
filebeatConfig:
  filebeat.yml: |
    logging:
      json: true
    filebeat.inputs:
    - type: docker
      containers.ids:
      - '*'
      processors:
      - add_kubernetes_metadata:
          in_cluster: true
    # TODO: Kafka should be the first preference once we get a chance to
    # implement it.
    output.file:
      path: "/tmp/filebeat"
      filename: "filebeat"
fullnameOverride: ""
hostPathRoot: /var/lib
image: docker.elastic.co/beats/filebeat
imagePullPolicy: IfNotPresent
imagePullSecrets: []
imageTag: 7.4.0
labels: {}
livenessProbe:
  failureThreshold: 3
  initialDelaySeconds: 10
  periodSeconds: 10
  timeoutSeconds: 5
managedServiceAccount: true
nameOverride: ""
nodeSelector: {}
podAnnotations: {}
podSecurityContext:
  privileged: false
  runAsUser: 0
priorityClassName: ""
readinessProbe:
  failureThreshold: 3
  initialDelaySeconds: 10
  periodSeconds: 10
  timeoutSeconds: 5
resources:
  limits:
    cpu: 1000m
    memory: 200Mi
  requests:
    cpu: 100m
    memory: 100Mi
secretMounts: []
serviceAccount: ""
terminationGracePeriod: 30
tolerations: []
updateStrategy: RollingUpdate

HOOKS:
MANIFEST:

---
# Source: filebeat/templates/configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: logs-filebeat-config
  labels:
    app: "logs-filebeat"
    chart: "filebeat-7.4.0"
    heritage: "Tiller"
    release: "logs"
data:
  filebeat.yml: |
    logging:
      json: true
    filebeat.inputs:
    - type: docker
      containers.ids:
      - '*'
      processors:
      - add_kubernetes_metadata:
          in_cluster: true
    # TODO: Kafka should be the first preference once we get a chance to
    # implement it.
    output.file:
      path: "/tmp/filebeat"
      filename: "filebeat"
---
# Source: filebeat/templates/serviceaccount.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: logs-filebeat
  labels:
    app: "logs-filebeat"
    chart: "filebeat-7.4.0"
    heritage: "Tiller"
    release: "logs"
---
# Source: filebeat/templates/clusterrole.yaml
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
  name: logs-filebeat-cluster-role
  labels:
    app: "logs-filebeat"
    chart: "filebeat-7.4.0"
    heritage: "Tiller"
    release: "logs"
rules:
- apiGroups:
  - ""
  resources:
  - namespaces
  - pods
  verbs:
  - get
  - list
  - watch
---
# Source: filebeat/templates/clusterrolebinding.yaml
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: logs-filebeat-cluster-role-binding
  labels:
    app: "logs-filebeat"
    chart: "filebeat-7.4.0"
    heritage: "Tiller"
    release: "logs"
roleRef:
  kind: ClusterRole
  name: logs-filebeat-cluster-role
  apiGroup: rbac.authorization.k8s.io
subjects:
- kind: ServiceAccount
  name: logs-filebeat
  namespace: dgilmor-dgilmor-initial-kube-beats
---
# Source: filebeat/templates/daemonset.yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: logs-filebeat
  labels:
    app: "logs-filebeat"
    chart: "filebeat-7.4.0"
    heritage: "Tiller"
    release: "logs"
spec:
  selector:
    matchLabels:
      app: "logs-filebeat"
      release: "logs"
  updateStrategy:
    type: RollingUpdate
  template:
    metadata:
      annotations:

        configChecksum: f94cdd147eb5fe9426b2401f62cb0dfb83ac1cbf25cea91521ccdc7291abdda
      name: "logs-filebeat"
      labels:
        app: "logs-filebeat"
        chart: "filebeat-7.4.0"
        heritage: "Tiller"
        release: "logs"
    spec:
      serviceAccountName: logs-filebeat
      terminationGracePeriodSeconds: 30
      volumes:
      - name: filebeat-config
        configMap:
          defaultMode: 0600
          name: logs-filebeat-config
      - name: data
        hostPath:
          path: /var/lib/logs-filebeat-dgilmor-dgilmor-initial-kube-beats-data
          type: DirectoryOrCreate
      - name: varlibdockercontainers
        hostPath:
          path: /var/lib/docker/containers
      - name: varrundockersock
        hostPath:
          path: /var/run/docker.sock
      containers:
      - name: "filebeat"
        image: "docker.elastic.co/beats/filebeat:7.4.0"
        imagePullPolicy: "IfNotPresent"
        args:
        - "-e"
        - "-E"
        - "http.enabled=true"
        livenessProbe:
          exec:
            command:
            - sh
            - -c
            - |
              #!/usr/bin/env bash -e
              curl --fail 127.0.0.1:5066
          failureThreshold: 3
          initialDelaySeconds: 10
          periodSeconds: 10
          timeoutSeconds: 5

        readinessProbe:
          exec:
            command:
            - sh
            - -c
            - |
              #!/usr/bin/env bash -e
              filebeat test output
          failureThreshold: 3
          initialDelaySeconds: 10
          periodSeconds: 10
          timeoutSeconds: 5

        resources:
          limits:
            cpu: 1000m
            memory: 200Mi
          requests:
            cpu: 100m
            memory: 100Mi

        env:
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        securityContext:
          privileged: false
          runAsUser: 0

        volumeMounts:
        - name: filebeat-config
          mountPath: /usr/share/filebeat/filebeat.yml
          readOnly: true
          subPath: filebeat.yml
        - name: data
          mountPath: /usr/share/filebeat/data
        - name: varlibdockercontainers
          mountPath: /var/lib/docker/containers
          readOnly: true
        # Necessary when using autodiscovery; avoid mounting it otherwise
        # See: https://www.elastic.co/guide/en/beats/filebeat/master/configuration-autodiscover.html
        - name: varrundockersock
          mountPath: /var/run/docker.sock
          readOnly: true

Describe the bug:

The readinessProbe for filebeat runs filebeat test output, but this command explicitly fails for the file and console outputs.

Steps to reproduce:

  1. Set output.console (or output.file) in the values.yaml
  2. Install filebeat

Expected behavior:

The deployment itself works fine, but the readinessProbe is faulty: it should succeed when everything is working.

Provide logs and/or server output (if relevant):

  Warning  Unhealthy  3m (x30 over 7m50s)  kubelet, docker-desktop  Readiness probe failed: file output doesn't support testing

Any additional context:

As an alternative, maybe give an option to override the readinessProbe: a simple boolean, or a way to supply our own probe command.
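A sketch of what such an override could look like in values.yaml. Note this is hypothetical: the 7.4.0 chart only templates the probe's timing fields (failureThreshold, initialDelaySeconds, etc.) and hardcodes the exec command in the DaemonSet template, so an exec override like this would not actually take effect without a chart change.

```yaml
# Hypothetical values.yaml override; chart 7.4.0 does not support
# replacing the probe command, only its timing parameters.
readinessProbe:
  exec:
    command:
    - sh
    - -c
    - |
      # "filebeat test output" is unsupported for file/console outputs,
      # so check the local monitoring HTTP endpoint instead
      # (enabled by the chart via -E http.enabled=true).
      curl --fail 127.0.0.1:5066
  failureThreshold: 3
  initialDelaySeconds: 10
  periodSeconds: 10
  timeoutSeconds: 5
```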

@NickCarton

Also having the same issue with metricbeat

Reports back Readiness probe failed: Error initializing beat: error initializing processors: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?

@polina-mar

You can temporarily delete the readiness probe from the deployment while it's rolling out to get this working. Not an optimal workaround; hopefully it'll get fixed soon.
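The workaround above could be done in place with a JSON patch, roughly like this. It assumes the DaemonSet name from the manifest in this issue (logs-filebeat) and that filebeat is the first (index 0) container:

```shell
# Remove the readinessProbe from the running DaemonSet.
# Assumes the release shown above; adjust the name and container
# index for your own deployment.
kubectl patch daemonset logs-filebeat --type=json \
  -p='[{"op": "remove", "path": "/spec/template/spec/containers/0/readinessProbe"}]'
```

Helm will reapply the probe on the next upgrade, so this only papers over the issue until the chart's probe is fixed.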

@jmlrt jmlrt added the bug Something isn't working label Oct 23, 2019
@jmlrt
Member

jmlrt commented Dec 27, 2019

fixed by #420

@jmlrt jmlrt closed this as completed Dec 27, 2019