Allow to overwrite @timestamp with different format #11273
Comments
I would think using format for the date field should solve this? https://www.elastic.co/guide/en/elasticsearch/reference/current/mapping-date-format.html Closing this for now as I don't think it's a bug in Beats.
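For reference, the suggestion above boils down to declaring @timestamp as a date with an explicit format in the index template's mappings. This is only a sketch; the format string below is an assumption chosen to match the "+0000"-style offset shown later in this issue, not something taken from the reporter's actual template:

```json
{
  "mappings": {
    "properties": {
      "@timestamp": {
        "type": "date",
        "format": "yyyy-MM-dd'T'HH:mm:ss.SSSSSSZ||strict_date_optional_time"
      }
    }
  }
}
```

As the next comment points out, this only tells Elasticsearch how to index the field; it does not change how Filebeat itself parses @timestamp out of the decoded JSON.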
It does not work, as it seems it is not possible to overwrite the date format.
I now see that you try to overwrite the existing timestamp. We should probably rename this issue to "Allow to overwrite @timestamp with different format". As a workaround, is it possible that you name it differently in your JSON log file and then use an ingest pipeline to remove the original timestamp (we often call it event.created) and move your timestamp to @timestamp?
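A sketch of that workaround, assuming the application writes its own time into a field called log_time (a hypothetical name) instead of @timestamp, and assuming the same "+0000"-style format as in this issue:

```json
PUT _ingest/pipeline/app-timestamp
{
  "description": "Keep the read time in event.created and promote the application's own time to @timestamp",
  "processors": [
    { "rename": { "field": "@timestamp", "target_field": "event.created" } },
    { "date": { "field": "log_time", "formats": ["yyyy-MM-dd'T'HH:mm:ss.SSSSSSZ"], "target_field": "@timestamp" } },
    { "remove": { "field": "log_time" } }
  ]
}
```

The pipeline would then be referenced from the Elasticsearch output in filebeat.yml (output.elasticsearch.pipeline: app-timestamp).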
Hello, unfortunately no, it is not possible to change the code of the distributed system which populates the log files. Right now I am looking at writing my own log parser and sending the data directly to Elasticsearch (I don't want to use Logstash for numerous reasons), so I have one request,
Additionally, ingest pipelining is too resource consuming.
With 7.0 we are switching to ECS; this should mostly solve the problem around conflicts: https://github.com/elastic/ecs Unfortunately there will always be a chance of conflicts. What I don't fully understand is: if you can deploy your own log shipper to a machine, why can't you change the Filebeat config there to use rename? I'm curious to hear more on why using simple pipelines is too resource consuming. Did you run some comparisons here?
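For what it's worth, the rename being suggested would be done with Filebeat's rename processor. The sketch below is an assumption about paths and field names (with keys_under_root left off, decoded keys land under "json."), not a confirmed fix for this issue:

```yaml
filebeat.inputs:
  - type: log
    paths:
      - /var/log/app/*.json     # hypothetical path
    json.add_error_key: true    # keys_under_root left off, so decoded fields land under "json."

processors:
  - rename:
      fields:
        - from: "json.@timestamp"   # the application's own timestamp string
          to: "app.time"            # hypothetical target that does not collide with @timestamp
      ignore_missing: true
      fail_on_error: false
```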
I have the same problem.
We have added a timestamp processor that could help with this issue. You can tell it what field to parse as a date and it will set the @timestamp accordingly. It doesn't directly help when you're parsing JSON containing @timestamp, though.
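A minimal sketch of that processor, assuming the application's time lives in a field named app_time (hypothetical) rather than in @timestamp itself; layouts use Go reference-time syntax, and the one below is chosen to match the "+0000"-style offset from this issue:

```yaml
processors:
  - timestamp:
      field: app_time                          # field to parse as a date
      layouts:
        - '2006-01-02T15:04:05.999999-0700'    # matches 2019-03-15T19:41:07.282853+0000
      test:
        - '2019-03-15T19:41:07.282853+0000'    # validated when the config is loaded
```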
#4836 is a duplicate. Edit: also reported here:
This is caused by the fact that the "time" package that Beats uses [1] to parse @timestamp from JSON doesn't honor the RFC 3339 spec [2] (specifically the part that says that both "+dd:dd" and "+dddd" are valid timezone offsets).
[1]
[2] golang/go#31113
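A standalone snippet that reproduces the parsing behavior described above (this is just Go's time package on its own, not Beats' actual code path):

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	// time.RFC3339 is "2006-01-02T15:04:05Z07:00": the zone offset must either
	// be "Z" or contain a colon, so an offset written as "+0000" is rejected.
	_, err := time.Parse(time.RFC3339, "2019-03-15T19:41:07.282853+0000")
	fmt.Println(err) // parse error on the "+0000" offset

	// The same instant parses fine once the offset carries a colon.
	t, err := time.Parse(time.RFC3339, "2019-03-15T19:41:07.282853+00:00")
	fmt.Println(t, err)
}
```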
Seems like I read the RFC 3339 spec too hastily, and the part where ":" is optional was from the appendix that describes ISO 8601.
Could it be possible to have a hint about how to do that? It seems like Filebeat prevents renaming the "@timestamp" field when it is used with JSON decoding. In my company we would like to switch from Logstash to Filebeat, and we already have tons of logs with a custom timestamp that Logstash manages without complaining, the same format that causes trouble in Filebeat.
You can disable JSON decoding in Filebeat and do it in the next stage (Logstash or Elasticsearch ingest processors).
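As a sketch of that approach: with the input's json.* options removed, each raw line arrives in the message field, and an ingest pipeline can do both the JSON decoding and the date parsing. The pipeline name and format string below are assumptions:

```json
PUT _ingest/pipeline/json-logs
{
  "description": "Decode the JSON log line, then parse its timestamp into @timestamp",
  "processors": [
    { "json": { "field": "message", "add_to_root": true } },
    { "date": { "field": "@timestamp", "formats": ["yyyy-MM-dd'T'HH:mm:ss.SSSSSSZ"], "target_field": "@timestamp" } }
  ]
}
```

Filebeat would then point at it with output.elasticsearch.pipeline: json-logs.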
It seems a bit odd to have a powerful tool like Filebeat and discover it cannot replace the timestamp. I mean: storing the timestamp itself in the log row is the simplest way to ensure the event keeps its consistency even if my Filebeat suddenly stops or Elasticsearch is unreachable; plus, using a JSON string as the log row is one of the most common patterns today. Using an ingest pipeline forces me to learn and add another layer to my Elastic stack, and IMHO that is a ridiculous tradeoff just to accomplish a simple task. For now, I have just forked the Beats source code to parse my custom format.
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
For confirmed bugs, please report:

Using Filebeat to parse log lines like this one:

"@timestamp":"2019-03-15T19:41:07.282853+0000"

returns an error, as you can see in the following Filebeat log:

I use a template file where I define that the @timestamp field is a date: