From fluent-bit to es: [ warn] [engine] failed to flush chunk #5145
Comments
version of docker image:
Also, is there any way to skip creating the index if it already exists?
Set config of the es output:
Works on:
Hi @yangtian9999. Can you please enable debug log level and share the log? If you see network-related messages, this may be an issue we already fixed in 1.8.15. Otherwise, share steps to reproduce, including your config.
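For reference, debug log level is set in the service section of the fluent-bit config; a minimal sketch (only the Log_Level key is relevant here, the rest of your service section stays as-is):

```
[SERVICE]
    # Raise verbosity from the default "info" so chunk/flush details are logged
    Log_Level debug
```

In the helm chart, this typically goes under config.service in values.yaml.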
[2022/03/24 04:19:20] [debug] [input chunk] tail.0 is paused, cannot append records
Tried 1.9.0, 1.8.15, and 1.8.12; all got the same error.
@dezhishen I set "Write_Operation upsert", but then the pod errored and fluent-bit did not start normally. This happened on 1.8.12, 1.8.15, and 1.9.0.
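A possible explanation for the startup failure (an assumption, not confirmed in this thread): the es output requires a document ID source when Write_Operation is update or upsert, so the config is rejected at startup unless Id_Key or Generate_ID is also set. A minimal sketch, with the Host value taken from the report below:

```
[OUTPUT]
    Name            es
    Match           kube.*
    Host            10.3.4.84
    Logstash_Format On
    # upsert/update need a document id; Generate_ID derives one from the record
    Write_Operation upsert
    Generate_ID     On
```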
It seems that you're trying to create a new index with dots in its name.
Try setting Replace_Dots to On.
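A minimal sketch of an es output with that option set (Host value assumed from the report below):

```
[OUTPUT]
    Name            es
    Match           kube.*
    Host            10.3.4.84
    Logstash_Format On
    # Replace dots in record key names so ES does not treat them as nested object paths
    Replace_Dots    On
```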
[2022/03/25 07:08:21] [debug] [input:tail:tail.0] inode=69336502 file has been deleted: /var/log/containers/hello-world-bjfnf_argo_wait-8f0faa126a1c36d4e0d76e1dc75485a39ecc2d43a4efc46ae7306f4b86ea9964.log
[2022/03/25 07:08:21] [debug] [input:tail:tail.0] inode=69336502 removing file name /var/log/containers/hello-world-bjfnf_argo_wait-8f0faa126a1c36d4e0d76e1dc75485a39ecc2d43a4efc46ae7306f4b86ea9964.log
Hi @lecaros, I still get the error after adding the setting.
I am wondering whether I should update es to the latest 7.x version.
Hi @yangtian9999 |
I had similar issues with
@lecaros Kibana 7.6.2 management, es 7.6.2, fluent/fluent-bit 1.8.15
I am getting the same error. I have deployed the official helm chart version 0.19.23. I only changed the output config, since it's a subchart. I have also set:
fluent-bit:
  enabled: true
  config:
    outputs: |
      [OUTPUT]
          Name            es
          Match           kube.*
          Host            {{ .Release.Name }}-elasticsearch-master
          Logstash_Format On
          Retry_Limit     False
          Replace_Dots    On
      [OUTPUT]
          Name            es
          Match           host.*
          Host            {{ .Release.Name }}-elasticsearch-master
          Logstash_Format On
          Logstash_Prefix node
          Retry_Limit     False
          Replace_Dots    On
Hi @yangtian9999 and anyone else having this same issue: can you share your config and log files with debug log level enabled?
In my case, the root cause of the error was
In the ES output configuration, I had
@evheniyt thanks. I will set this then.
Hi @yangtian9999, can you confirm you are still experiencing this issue?
Hi @lecaros, I think this was only a [warn] message; I checked with es and I can search the right apps' logs.
I use 2.0.6; no matter whether I set Type _doc or Replace_Dots On, I still see the mass of warn logs above.
As #3301 (comment) said, I added Trace_Error On to show more log output, and then I found the reason is https://github.com/fluent/fluent-bit/issues/4386: you must delete the existing index, otherwise even if you add Replace_Dots you will still see the warn log.
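Trace_Error makes the es output print the Elasticsearch response when a bulk request is rejected, which is how the mapping conflict above was found. A minimal sketch (Host value assumed from the report below):

```
[OUTPUT]
    Name         es
    Match        kube.*
    Host         10.3.4.84
    # Print the Elasticsearch error response so the real rejection reason is visible
    Trace_Error  On
    Replace_Dots On
```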
Guys, I need help with the following. My OUTPUT section settings are like this:
I see problems like the below while using Elasticsearch + Kibana v8.9.
I found this, related to this section:
Question: how can I solve the above error in my case? Please help.
Bug Report
Describe the bug
Continuous logging in pod fluent-bit-84pj9:
[2022/03/22 03:48:51] [ warn] [engine] failed to flush chunk '1-1647920930.175942635.flb', retry in 11 seconds: task_id=735, input=tail.0 > output=es.0 (out_id=0)
[2022/03/22 03:48:51] [ warn] [engine] failed to flush chunk '1-1647920894.173241698.flb', retry in 58 seconds: task_id=700, input=tail.0 > output=es.0 (out_id=0)
[2022/03/22 03:57:46] [ warn] [engine] failed to flush chunk '1-1647920587.172892529.flb', retry in 92 seconds: task_id=394, input=tail.0 > output=es.0 (out_id=0)
[2022/03/22 03:57:47] [ warn] [engine] failed to flush chunk '1-1647920384.178898202.flb', retry in 181 seconds: task_id=190, input=tail.0 > output=es.0 (out_id=0)
[2022/03/22 03:57:47] [ warn] [engine] failed to flush chunk '1-1647920812.174022994.flb', retry in 746 seconds: task_id=619, input=tail.0 > output=es.0 (out_id=0)
[2022/03/22 03:57:48] [ warn] [engine] failed to flush chunk '1-1647920205.172447077.flb', retry in 912 seconds: task_id=12, input=tail.0 > output=es.0 (out_id=0)
[2022/03/22 03:57:48] [ warn] [engine] failed to flush chunk '1-1647920426.171646994.flb', retry in 632 seconds: task_id=233, input=tail.0 > output=es.0 (out_id=0)
[2022/03/22 03:57:48] [ warn] [engine] failed to flush chunk '1-1647920802.180669296.flb', retry in 1160 seconds: task_id=608, input=tail.0 > output=es.0 (out_id=0)
[2022/03/22 03:57:48] [ warn] [engine] failed to flush chunk '1-1647920969.178403746.flb', retry in 130 seconds: task_id=774, input=tail.0 > output=es.0 (out_id=0)
[2022/03/22 03:57:48] [ warn] [engine] failed to flush chunk '1-1647920657.177210280.flb', retry in 1048 seconds: task_id=464, input=tail.0 > output=es.0 (out_id=0)
[2022/03/22 03:57:49] [ warn] [engine] failed to flush chunk '1-1647920670.171006292.flb', retry in 1657 seconds: task_id=477, input=tail.0 > output=es.0 (out_id=0)
[2022/03/22 03:57:49] [ warn] [engine] failed to flush chunk '1-1647920934.181870214.flb', retry in 786 seconds: task_id=739, input=tail.0 > output=es.0 (out_id=0)
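The growing retry intervals above come from fluent-bit's retry scheduler, and Retry_Limit False (as in the config below) means failed chunks are retried without limit. If unbounded retries are not wanted, the limit can be capped; a sketch, where the value 5 is an arbitrary example:

```
[OUTPUT]
    Name            es
    Match           kube.*
    Host            10.3.4.84
    Logstash_Format On
    # Drop a chunk after 5 failed attempts instead of retrying forever
    Retry_Limit     5
```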
To Reproduce
Use helm to install helm-charts-fluent-bit-0.19.19 (fluent/fluent-bit 1.8.12).
Edit values.yaml and change the Host to the es IP address (10.3.4.84 is the es IP):
[OUTPUT]
    Name            es
    Match           kube.*
    Host            10.3.4.84
    Logstash_Format On
    Retry_Limit     False
[OUTPUT]
    Name            es
    Match           host.*
    Host            10.3.4.84
    Logstash_Format On
    Logstash_Prefix node
    Retry_Limit     False
Keep the other settings in values.yaml at their defaults.
Expected behavior
All logs are sent to es and displayed in Kibana.
Screenshots
Your Environment
Version used: helm-charts-fluent-bit-0.19.19.
Configuration:
[OUTPUT]
    Name            es
    Match           kube.*
    Host            10.3.4.84
    Logstash_Format On
    Retry_Limit     False
[OUTPUT]
    Name            es
    Match           host.*
    Host            10.3.4.84
    Logstash_Format On
    Logstash_Prefix node
    Retry_Limit     False
Environment name and version (e.g. Kubernetes? What version?): k3s 1.19.8 with docker-ce backend 20.10.12; Kibana 7.6.2 management, es 7.6.2, fluent/fluent-bit 1.8.12
Server type and version:
Operating System and version: CentOS 7.9, kernel 5.4 LTS
Filters and plugins:
Edit values.yaml and change the Host to the es IP address (10.3.4.84); no TLS required for es.
[OUTPUT]
    Name            es
    Match           kube.*
    Host            10.3.4.84
    Logstash_Format On
    Retry_Limit     False
[OUTPUT]
    Name            es
    Match           host.*
    Host            10.3.4.84
    Logstash_Format On
    Logstash_Prefix node
    Retry_Limit     False
Keep the other settings in values.yaml at their defaults.
Additional context
No new logs are sent to es.