[Filebeat][Checkpoint module] data stream timestamp field [@timestamp] is missing #32380
Pinging @elastic/security-external-integrations (Team:Security-External Integrations)
@bbs2web Are you able to provide the (suitably redacted) text of a failing log line or two?
Hi, herewith an archive containing 500+ scrubbed logs from a CheckPoint log export. This is the result of converting a CheckPoint native-format export to CSV (a semicolon-separated value file) and then replacing IPs and names.
Herewith a snippet of the uncompressed data:
These will be helpful in general, but what I need in order to address the problem here is some failing syslog lines; I don't have a definition of how the attributes are mapped into their syslog output, so I would be making up details that I need to know.
Hi, I'm unfortunately very new to Elastic. Is there a debug option to capture the incoming syslog messages in a format that would help? I also have a packet capture, but presumed that not to be helpful...
I think probably the easiest way would be to get some documents that have been ingested, like you have in the screenshots in the issue. These have a
Hi, Many thanks for the guidance, herewith a sample 'message' field for a log entry that wasn't ingested:
I additionally exported part of a packet capture to CSV using Wireshark; hope this helps:
Hope the above is relevant; please let me know if I can assist with anything.
Thank you, that is perfect.
It looks like there were some issues with how that document was being processed: some data is present twice in the new format, and the pipeline fails when trying to set an already-set field. This happens before the timestamp is set, so the timestamp is never written. Being more careful with the duplicated data allows the pipeline to complete and set the timestamp. Now that line gives you something like this (details altered for testing purposes).
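The "already set field" failure mode can be sidestepped with a conditional `set`. The snippet below is only an illustrative sketch; the field names `checkpoint.ifname` and `syslog5424_sd.ifname` are assumptions for illustration, not taken from the actual module pipeline:

```json
{
  "set": {
    "field": "checkpoint.ifname",
    "copy_from": "syslog5424_sd.ifname",
    "if": "ctx.checkpoint?.ifname == null",
    "ignore_failure": true
  }
}
```

The `if` condition makes the processor a no-op when the target already holds a value, so duplicated data in the message no longer aborts the pipeline.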
Looking at the pipeline, the original has two processors that set the timestamp. The first one sets it based on the timestamp from the syslog header, but only if the structured data does not carry its own time field.
The second one sets it based on the time field in the structured data of the log message, if it exists:
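As a rough sketch of that two-processor arrangement (the field names `syslog5424_ts` and `checkpoint.time` and the date formats are assumptions for illustration, not the module's actual definitions):

```json
{
  "processors": [
    {
      "date": {
        "field": "syslog5424_ts",
        "target_field": "@timestamp",
        "formats": ["ISO8601"],
        "if": "ctx.checkpoint?.time == null"
      }
    },
    {
      "date": {
        "field": "checkpoint.time",
        "target_field": "@timestamp",
        "formats": ["UNIX", "UNIX_MS"],
        "if": "ctx.checkpoint?.time != null"
      }
    }
  ]
}
```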
As an aside, I attempted to take the syslog message above and run it through the pipeline simulate API, but I got interesting results. Now this may be because I removed the escapes from the message incorrectly, but I did get a different error (it complained about a duplicate field for the ingress interface, which doesn't make any sense). I did reproduce the issue, though: the resulting document lacked a `@timestamp`. @efd6, any thoughts here? I'll continue investigating and testing in the meantime.
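For anyone wanting to reproduce this, a simulate request has roughly this shape: `POST /_ingest/pipeline/<pipeline-id>/_simulate?verbose=true` with a body like the following (the pipeline id and the message content here are placeholders, not the actual failing line):

```json
{
  "docs": [
    {
      "_source": {
        "message": "<134>1 2022-07-17T12:00:00Z gw-1 CheckPoint - - [sd@checkpoint ...] ..."
      }
    }
  ]
}
```

With `verbose=true` the response lists the result of each processor in turn, which makes it straightforward to see exactly where the pipeline aborts before `@timestamp` is set.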
This sounds like a risky order of operations, given that data streams require timestamps. Perhaps use a temporary target_field for the date processing and only replace the timestamp after the parsing succeeds.
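A sketch of that suggestion, assuming a hypothetical `checkpoint.time` source field (the real pipeline's field names may differ): parse into a temporary field, copy it into `@timestamp` only on success, then clean up:

```json
{
  "processors": [
    {
      "date": {
        "field": "checkpoint.time",
        "target_field": "_tmp.timestamp",
        "formats": ["UNIX"],
        "ignore_failure": true
      }
    },
    {
      "set": {
        "field": "@timestamp",
        "copy_from": "_tmp.timestamp",
        "if": "ctx._tmp?.timestamp != null"
      }
    },
    {
      "remove": {
        "field": "_tmp",
        "ignore_missing": true
      }
    }
  ]
}
```

If the date parse fails, `_tmp.timestamp` is never created, the conditional `set` is skipped, and whatever `@timestamp` was set earlier (e.g. from the syslog header) survives.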
I believe this is exactly the explanation (and what I have here). It fails to complete the pipeline, so it does not have a timestamp and thus fails ingest. Without the fix at #32458 this fails to be indexed (as expected); however, if the
I agree, but I don't think there is any completely (atomically) correct approach; we need to have parsed the message to get the real @timestamp, but we need a fallback @timestamp if parsing fails. This means we always need to delete the fallback @timestamp when we finally set the real value, if it was available. If we accept this, then setting the
Many thanks! I navigated to Stack Management -> Ingest Pipelines, then first duplicated
PS: For others that may stumble here, tell CheckPoint to ingest historic data by adjusting the exporter customisation file:
Hi,
I'm trying to ingest CheckPoint native syslog exports of security gateway (firewall) logs. My understanding is that the integration was previously via CEF, which did not pass through sufficient detail, but that the native syslog format was merged here: Checkpoint Syslog Filebeat module by P1llus · Pull Request #17682 · elastic/beats · GitHub
We had the following problem with CheckPoint R81 and continue to experience the same problem with the latest generally recommended version R81.10. We have configured the CheckPoint log exporter via SmartConsole, as follows:
Format is set as standard 'Syslog' format, which should include all the additional CheckPoint fields:
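For reference, an equivalent exporter can also be created from the management server CLI with `cp_log_export`; the exporter name, target address, and port below are placeholders, and the exact flags should be checked against the Check Point Log Exporter documentation for your release:

```shell
cp_log_export add name elastic-syslog target-server 192.0.2.10 target-port 9001 protocol tcp format syslog
cp_log_export restart name elastic-syslog
```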
The problem we are experiencing is that nothing is actually ingested; we receive the following error:
The input pipeline was automatically configured when we added the Check Point module to an Elastic Agent via Fleet. This input pipeline refers to fields that Check Point does not appear to generate:
CheckPoint documentation for the description of fields in Check Point Logs does not include '@timestamp' or 'timestamp':
https://supportcenter.checkpoint.com/supportcenter/portal?eventSubmit_doGoviewsolutiondetails=&solutionid=sk144192