Filebeat autodiscovery close after docker events restart #12558

Closed · augustomarinho opened this issue Jun 15, 2019 · 11 comments
Labels: Filebeat, Team:Integrations

augustomarinho commented Jun 15, 2019

Hello, I'm using the Filebeat autodiscover configuration for Docker containers and I've identified a possible issue. When a Docker container restarts, autodiscover closes the reader for that container.

What is the expected behavior in this scenario?

Is there a way to configure autodiscover to resume reading the same file?

I started the Filebeat container this way:

```
docker run -d \
  --name=filebeat \
  --user=root \
  --volume="/filebeat/filebeat.yml:/usr/share/filebeat/filebeat.yml:ro" \
  --volume="/var/lib/docker/containers:/var/lib/docker/containers:ro" \
  --volume="/var/run/docker.sock:/var/run/docker.sock:ro" \
  --volume="/var/log:/var/log:ro" \
  --volume="/filebeat/data:/usr/share/filebeat/data:rw" \
  docker.elastic.co/beats/filebeat:7.0.0 \
  filebeat -e -d autodiscover,docker -strict.perms=false \
  -E beat.name=$(hostname -I | cut -d' ' -f1) \
  -E output.elasticsearch.hosts=["$(hostname -I | cut -d' ' -f1):9200"]
```

Here are the relevant parts of my log file:

```
2019-06-15T21:22:24.985Z DEBUG [docker] docker/watcher.go:239 Got a new docker event: {kill 5d32deaaacfd496d0a8f61a3dea5d9a10fba9496f7fd797e45c8282e8498e080 augustomarinho/springboot-fluentd-appender container kill {5d32deaaacfd496d0a8f61a3dea5d9a10fba9496f7fd797e45c8282e8498e080 map[image:augustomarinho/springboot-fluentd-appender name:springboot-app send.logs:true signal:15 elastic_index:springboot]} local 1560633744 1560633744984858742}
2019-06-15T21:22:35.018Z DEBUG [docker] docker/watcher.go:239 Got a new docker event: {kill 5d32deaaacfd496d0a8f61a3dea5d9a10fba9496f7fd797e45c8282e8498e080 augustomarinho/springboot-fluentd-appender container kill {5d32deaaacfd496d0a8f61a3dea5d9a10fba9496f7fd797e45c8282e8498e080 map[elastic_index:springboot image:augustomarinho/springboot-fluentd-appender name:springboot-app send.logs:true signal:9]} local 1560633755 1560633755018303395}
2019-06-15T21:22:35.145Z DEBUG [docker] docker/watcher.go:239 Got a new docker event: {die 5d32deaaacfd496d0a8f61a3dea5d9a10fba9496f7fd797e45c8282e8498e080 augustomarinho/springboot-fluentd-appender container die {5d32deaaacfd496d0a8f61a3dea5d9a10fba9496f7fd797e45c8282e8498e080 map[send.logs:true elastic_index:springboot exitCode:137 image:augustomarinho/springboot-fluentd-appender name:springboot-app]} local 1560633755 1560633755145648212}
2019-06-15T21:22:35.886Z DEBUG [docker] docker/watcher.go:239 Got a new docker event: {stop 5d32deaaacfd496d0a8f61a3dea5d9a10fba9496f7fd797e45c8282e8498e080 augustomarinho/springboot-fluentd-appender container stop {5d32deaaacfd496d0a8f61a3dea5d9a10fba9496f7fd797e45c8282e8498e080 map[elastic_index:springboot image:augustomarinho/springboot-fluentd-appender name:springboot-app send.logs:true]} local 1560633755 1560633755886105717}
2019-06-15T21:22:37.176Z DEBUG [docker] docker/watcher.go:239 Got a new docker event: {start 5d32deaaacfd496d0a8f61a3dea5d9a10fba9496f7fd797e45c8282e8498e080 augustomarinho/springboot-fluentd-appender container start {5d32deaaacfd496d0a8f61a3dea5d9a10fba9496f7fd797e45c8282e8498e080 map[elastic_index:springboot image:augustomarinho/springboot-fluentd-appender name:springboot-app send.logs:true]} local 1560633757 1560633757175968422}
2019-06-15T21:22:37.177Z DEBUG [docker] docker/watcher.go:239 Got a new docker event: {restart 5d32deaaacfd496d0a8f61a3dea5d9a10fba9496f7fd797e45c8282e8498e080 augustomarinho/springboot-fluentd-appender container restart {5d32deaaacfd496d0a8f61a3dea5d9a10fba9496f7fd797e45c8282e8498e080 map[elastic_index:springboot image:augustomarinho/springboot-fluentd-appender name:springboot-app send.logs:true]} local 1560633757 1560633757176005622}
2019-06-15T21:22:37.177Z DEBUG [autodiscover] autodiscover/autodiscover.go:165 Got a start event: map[provider:02d9776d-9584-4261-b2fe-7a976452ede0 host:172.17.0.2 container:{"id":"5d32deaaacfd496d0a8f61a3dea5d9a10fba9496f7fd797e45c8282e8498e080","image":{"name":"augustomarinho/springboot-fluentd-appender"},"labels":{"elastic_index":"springboot","send":{"logs":"true"}},"name":"springboot-app"} meta:{"container":{"id":"5d32deaaacfd496d0a8f61a3dea5d9a10fba9496f7fd797e45c8282e8498e080","image":{"name":"augustomarinho/springboot-fluentd-appender"},"name":"springboot-app"},"docker":{"container":{"labels":{"elastic_index":"springboot","send_logs":"true"}}}} config:[0xc000504180] id:5d32deaaacfd496d0a8f61a3dea5d9a10fba9496f7fd797e45c8282e8498e080 start:true port:8080 docker:{"container":{"id":"5d32deaaacfd496d0a8f61a3dea5d9a10fba9496f7fd797e45c8282e8498e080","image":"augustomarinho/springboot-fluentd-appender","labels":{"elastic_index":"springboot","send":{"logs":"true"}},"name":"springboot-app"}}]
2019-06-15T21:22:37.177Z DEBUG [autodiscover] autodiscover/autodiscover.go:183 Generated configs: [0xc000504180]
2019-06-15T21:22:37.177Z DEBUG [autodiscover] autodiscover/autodiscover.go:243 Got a meta field in the event
2019-06-15T21:22:37.177Z DEBUG [autodiscover] autodiscover/autodiscover.go:203 Config &{{<nil> } <nil> 0xc0004f8a80} is already running
2019-06-15T21:22:42.367Z ERROR readjson/json.go:52 Error decoding JSON: EOF
2019-06-15T21:22:42.368Z ERROR readjson/json.go:52 Error decoding JSON: invalid character '.' looking for beginning of value
2019-06-15T21:22:42.368Z ERROR readjson/json.go:52 Error decoding JSON: invalid character '/' looking for beginning of value
2019-06-15T21:22:42.368Z ERROR readjson/json.go:52 Error decoding JSON: invalid character '(' looking for beginning of value
2019-06-15T21:22:42.368Z ERROR readjson/json.go:52 Error decoding JSON: invalid character '\\' looking for beginning of value
2019-06-15T21:22:42.368Z ERROR readjson/json.go:52 Error decoding JSON: invalid character '\'' looking for beginning of value
2019-06-15T21:22:42.368Z ERROR readjson/json.go:52 Error decoding JSON: invalid character '=' looking for beginning of value
2019-06-15T21:22:42.369Z ERROR readjson/json.go:52 Error decoding JSON: invalid character ':' looking for beginning of value
2019-06-15T21:22:42.369Z ERROR readjson/json.go:52 Error decoding JSON: EOF
2019-06-15T21:23:35.147Z DEBUG [autodiscover] autodiscover/autodiscover.go:228 Stopping 1 configs
2019-06-15T21:23:35.147Z DEBUG [autodiscover] cfgfile/list.go:62 Starting reload procedure, current runners: 1
2019-06-15T21:23:35.147Z DEBUG [autodiscover] cfgfile/list.go:80 Start list: 0, Stop list: 1
2019-06-15T21:23:35.147Z DEBUG [autodiscover] cfgfile/list.go:84 Stopping runner: input [type=docker, ID=457171585444322192]
2019-06-15T21:23:35.147Z INFO input/input.go:149 input ticker stopped
2019-06-15T21:23:35.147Z INFO input/input.go:167 Stopping Input: 457171585444322192
2019-06-15T21:23:35.147Z INFO log/harvester.go:275 Reader was closed: /var/lib/docker/containers/5d32deaaacfd496d0a8f61a3dea5d9a10fba9496f7fd797e45c8282e8498e080/5d32deaaacfd496d0a8f61a3dea5d9a10fba9496f7fd797e45c8282e8498e080-json.log. Closing.
```

My filebeat.yml file:

```yaml
filebeat.config.inputs:
  enabled: true
  path: /filebeat/*.yml
  reload.enabled: true
  reload.period: 10s

filebeat.autodiscover:
  providers:
    - type: docker
      json.keys_under_root: true
      json.add_error_key: true
      json.message_key: log
      scan_frequency: 1s
      templates:
        - condition:
            and:
              - has_fields: ['docker.container.labels.elastic_index', 'docker.container.labels.send.logs']
              - equals:
                  docker.container.labels.send.logs: "true"
          config:
            - type: docker
              json.keys_under_root: true
              json.add_error_key: true
              containers:
                ids:
                  - "${data.docker.container.id}"

setup.template.enabled: false
setup.template.name: "springboot"
setup.template.pattern: "springboot-*"

output.elasticsearch:
  protocol: http
  index: "springboot-%{[agent.version]}-%{+yyyy.MM.dd}"
  worker: 2
  bulk_max_size: 500
```

ph added the Filebeat label on Jun 18, 2019
ph (Contributor) commented Jun 18, 2019

IIRC Filebeat should still read the file after a restart. Looking at the provided log, I see an error while parsing the JSON log; I believe something like this was fixed recently. @exekias can you clarify the behavior?

augustomarinho (Author) commented Jun 18, 2019

@ph those errors come from the ASCII characters in the Spring Boot default banner. I'm not sure whether that impacts the autodiscover behavior.
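
For reference, banner lines can be kept away from the JSON decoder with an `exclude_lines` pattern on the input. This is only a sketch using the pattern that appears in a later comment in this thread, not a confirmed fix for the reader being closed:

```yaml
config:
  - type: docker
    json.keys_under_root: true
    json.add_error_key: true
    json.message_key: log
    # Drop ASCII-art banner lines before they reach the JSON decoder.
    exclude_lines: ["^\\s+[\\-`('.|_]"]
    containers:
      ids:
        - "${data.docker.container.id}"
```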

exekias (Contributor) commented Jun 19, 2019

#12268 was recently merged; I guess it may fix this issue? We have seen errors like this at times, and with that fix Filebeat should be able to recover from them.

augustomarinho (Author) commented

@exekias First, thanks for the comment.
I read #12268 and was curious to run a test. I set up my Spring Boot application with the default banner disabled via -Dspring.main.banner-mode=off, and the result was the same as before.
In my case, I believe no other part of my code produces unexpected characters during Spring Boot initialization.
If I understand correctly, this issue with the autodiscover feature is not only related to unexpected characters.
Let me know whether my reasoning makes sense.

exekias (Contributor) commented Jun 24, 2019

Understood. Could you perhaps share the log file that is failing for you (the one generated by Docker under /var/lib/docker/containers)? We can give it a try and check whether the input is able to ingest it. If not, that will help us fix the issue.

Please have a look at the file first and make sure you don't share any sensitive data.

augustomarinho (Author) commented Jun 25, 2019

@exekias Good evening, here are the results of my tests.

I created two gists to help clarify the analysis. The first gist (filebeat-logs) contains the Filebeat log. As we can see on line 73, Filebeat started the harvester.

The second gist (springboot-app-logs) contains the logs of the Spring Boot application without the default banner, as can be seen up to line 28. After line 28, I called my test endpoint (http://localhost:8002/query/cpf/12345678909) to register some data. Up to this point, everything is OK.

Returning to the Filebeat gist, lines 74-90 show that Filebeat saw the Docker events (kill, die, stop, start, and restart). After that, Filebeat closed the reader for this application, as shown on line 103, and only resumed sending logs after another Docker restart.

I didn't find any lines with unexpected characters in this example. Let me know whether this is enough to continue the analysis.

Once again, thanks for your interest in helping me solve this issue.

If you get a chance to run my application, it's very simple:

  1. git clone git@github.com:augustomarinho/springboot-fluentd-appender.git
  2. mvn clean install docker:build
  3. docker run -d --name springboot-app \
       --label elastic_index=springboot \
       --label send.logs=true \
       --volume="/app/springboot-app/config/:/config/:rw" \
       -e JAVA_OPTS="-Dspring.main.banner-mode=off -Dlogging.config=./config/logback-spring.xml" \
       -p 8002:8080 \
       augustomarinho/springboot-fluentd-appender
  4. curl -X GET http://localhost:8002/query/cpf/12345678909 -v

Filebeat configuration:

```yaml
filebeat.config.inputs:
  enabled: true
  path: /filebeat/*.yml
  reload.enabled: true
  reload.period: 1s

name: 192.168.0.111

filebeat.autodiscover:
  providers:
    - type: docker
      templates:
        - condition:
            and:
              - has_fields: ['docker.container.labels.elastic_index', 'docker.container.labels.send.logs']
              - equals:
                  docker.container.labels.send.logs: "true"
          config:
            - type: docker
              json.keys_under_root: true
              json.add_error_key: true
              json.message_key: log
              exclude_lines: ["^\\s+[\\-`('.|_]"]  # drop asciiart lines
              containers:
                ids:
                  - "${data.docker.container.id}"

processors:
  - add_docker_metadata: ~

setup.template.enabled: false
setup.template.name: "springboot"
setup.template.pattern: "springboot-*"

output.elasticsearch:
  protocol: http
  index: "springboot-%{[agent.version]}-%{+yyyy.MM.dd}"
  worker: 2
  bulk_max_size: 500
```

augustomarinho (Author) commented

@exekias I'd like to know if there is any news on this topic. Can I help further?

jsoriano (Member) commented

@augustomarinho could this be the same issue as #12962? In that case you can try to mitigate the problem by setting `cleanup_timeout: 0` in the docker autodiscover configuration.
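
For anyone trying this suggestion, a minimal sketch of where the setting would go, assuming the provider configuration from earlier in this thread (`cleanup_timeout` is an option of the docker autodiscover provider):

```yaml
filebeat.autodiscover:
  providers:
    - type: docker
      # Suggested mitigation for #12962: remove the cleanup grace period
      # applied after a container stops (the default waits before stopping
      # the running configuration).
      cleanup_timeout: 0
      templates:
        # ... same templates/conditions as in the configs above ...
```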

michalpristas (Contributor) commented

Hey @augustomarinho, was the suggested mitigation successful?

augustomarinho (Author) commented

@michalpristas Unfortunately, no. I gave up on the autodiscover feature, but I haven't tested this functionality in a newer Filebeat version.
In my case, in the production environment I use file-based inputs, and that works well.
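
For completeness, a minimal sketch of such a file-based setup; the path and json options below are assumptions modeled on the container logs discussed in this thread, not the exact production config:

```yaml
filebeat.inputs:
  - type: log
    # Assumed location: Docker writes one <container-id>-json.log per container here.
    paths:
      - /var/lib/docker/containers/*/*-json.log
    # Decode Docker's JSON log format, reusing the json options from the configs above.
    json.keys_under_root: true
    json.add_error_key: true
    json.message_key: log
```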

andresrc added the Team:Integrations label and removed the Team:Beats label on Mar 6, 2020
jsoriano (Member) commented

@augustomarinho, thanks for all the details.
As you have found a configuration that works well for you, and I think the original issue should be fixed since 7.4.0, I am going to close this issue.
Please reopen if you try autodiscover again in a recent version and run into the same issues.
