Tail a File Wizard #5974
Comments
Just wanted to say: great stuff :) For stable, large-scale ingestion you won't be able to replace "the way it's supposed to be done", but quick loading of (even simple) data files is highly needed.
+:100: @Bargs Fantastic job. I was about to file a ticket for the same! I am not great with Angular, but I am pretty familiar with logstash and similar tools. Let me know if you need an extra hand for testing out this feature.
Decision was made in the last ingest meeting that for now we won't support sending structured JSON logs via Filebeat, since Filebeat itself doesn't support it (ongoing discussion here) and we don't have a JSON processor. We should probably add a note on the Paste step giving the user a heads up about this.
Massive 👍
Zoom Feedback:
While this would be a nice feature to have, we've put our add data efforts on hold for the time being to focus on more impactful enhancements for Kibana. |
Closing this out. This idea is still valuable, but we've started moving in a different direction, specifically with a new home page.
Overview
Getting data into Elasticsearch isn't always a straightforward process. Elasticsearch gives us APIs to index documents, and it's up to us to figure out how to use them. Tools like logstash can be helpful but they add complexity to the process by introducing their own setup and configuration. We'd like to provide a UI in Kibana that streamlines this process for users, walking them through the steps required to get data into Elasticsearch, while reducing the friction that comes from having to set up external ingestion tools.
Features and design described below are still in flux and subject to change.
Proposals and Mockups
Adding data from a file
Getting unstructured or JSON data from a text file into Elasticsearch is the most common use case, and where we'll focus our efforts to begin with. We'll help the user configure any data processing they might need to do by using the upcoming ingest node feature in Elasticsearch, and we'll get them set up with a Kibana index pattern so they're ready to start playing with their data as soon as it's in ES.
Step 1 - Getting sample log lines
The first step in the wizard will provide the user with a box to paste in some sample log lines from their file. We'll use these samples in the following steps. If the samples are raw text (the expected use case), we'll wrap those lines in a JSON object that looks similar to a document sent by filebeat, without the extra metadata. We can provide some helpful text here explaining what filebeat is and why the logs are wrapped this way. If the user wants to take advantage of the extra metadata provided by filebeat, they can look up what fields are available via a link to the filebeat docs and craft a JSON document in this text box as they expect it to come out of filebeat.
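The wrapping described above could look something like the following minimal sketch. The `message` field name mirrors Filebeat's convention for the raw log line; the helper function itself is hypothetical, not part of Kibana.

```python
import json

def wrap_samples(raw_text):
    """Wrap each non-empty pasted line in a minimal Filebeat-like
    JSON document, without the extra Filebeat metadata."""
    return [{"message": line} for line in raw_text.splitlines() if line.strip()]

samples = wrap_samples("127.0.0.1 GET /index.html\n127.0.0.1 GET /about.html\n")
print(json.dumps(samples, indent=2))
```

If the user pastes JSON instead of raw text, this wrapping step would simply be skipped.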
If a user pastes in JSON, we'll assume they know what they're doing and that the samples are representative of the exact documents they'll be sending to ES.
Step 2 - Parsing the samples and building a pipeline
This step is all about giving the user a way to easily process their data before indexing it in Elasticsearch. Traditionally this would be done with a tool like logstash, but we can make it easier by helping the user set up a new Elasticsearch ingestion pipeline. Using the sample data the user provided, we'll give them the ability to interactively build a pipeline that will turn their raw data into useful Elasticsearch documents.
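A sketch of the request bodies the wizard might build as the user interactively adds processors. The grok pattern and field names are illustrative assumptions; the `_ingest/pipeline/_simulate` endpoint and the grok processor are real Elasticsearch ingest node features, which let us run the pipeline against the samples without indexing anything.

```python
def build_pipeline(processors):
    """Body for PUT /_ingest/pipeline/<id>."""
    return {"description": "Pipeline built by the add-data wizard",
            "processors": processors}

def build_simulate_request(pipeline, sample_docs):
    """Body for POST /_ingest/pipeline/_simulate — previews the
    pipeline's output for the user's sample documents."""
    return {"pipeline": pipeline,
            "docs": [{"_source": doc} for doc in sample_docs]}

# Hypothetical grok processor parsing a simple access-log line
grok = {"grok": {"field": "message",
                 "patterns": ["%{IP:client} %{WORD:method} %{URIPATHPARAM:path}"]}}
pipeline = build_pipeline([grok])
request = build_simulate_request(pipeline, [{"message": "127.0.0.1 GET /index.html"}])
```

Each time the user edits a processor, the wizard would re-run the simulate call and show the resulting documents side by side with the raw samples.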
Step 3 - Creating a Kibana Index Pattern
Once the ingest pipeline is complete, we'll know what the final documents will look like in Elasticsearch. We'll use this information to help the user create a Kibana index pattern for their soon-to-be populated indices. The user will be able to customize the index pattern name, tell us whether the data contains time based events, and if so which field represents the time of each event. We'll attempt to detect the type of each field, but provide the user with the ability to override those defaults.
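The field-type detection might be sketched like this. The type names and the index-pattern shape here are illustrative only, not Kibana's actual saved-object format; the point is that defaults are guessed from the processed samples and remain user-overridable.

```python
from datetime import datetime

def guess_type(value):
    """Guess a default field type from a sample value (user can override)."""
    if isinstance(value, bool):          # check bool before int/float
        return "boolean"
    if isinstance(value, (int, float)):
        return "number"
    if isinstance(value, str):
        try:
            datetime.strptime(value, "%Y-%m-%dT%H:%M:%S")
            return "date"
        except ValueError:
            return "string"
    return "string"

def build_index_pattern(name, docs, time_field=None):
    fields = {k: guess_type(v) for doc in docs for k, v in doc.items()}
    return {"title": name, "timeFieldName": time_field, "fields": fields}

pattern = build_index_pattern(
    "weblogs-*",
    [{"client": "127.0.0.1", "bytes": 512, "@timestamp": "2016-01-20T10:00:00"}],
    time_field="@timestamp",
)
```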
Step 4 - Installing Filebeat
This step isn't strictly required; if the user has some other means of sending their data to ES, we'll provide them with the URL they need to hit. However, most users will want an easy way to tail a file and send data to ES without writing their own scripts. Enter Filebeat. We'll give the user helpful advice on installing and setting up filebeat to send data from their chosen file through the ingest pipeline they just set up. This will probably start out as just some descriptive text and links to filebeat docs, but it could include some more intelligent features down the road, as depicted in the mockup.
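In practice, the Filebeat setup boils down to pointing Filebeat at the file and at the ingest pipeline from step 2. A hypothetical configuration fragment (Filebeat 5.x syntax; the file path and pipeline id are placeholders the wizard would fill in):

```yaml
# Illustrative filebeat.yml fragment — paths and pipeline id are placeholders
filebeat.prospectors:
  - input_type: log
    paths:
      - /var/log/myapp/app.log

output.elasticsearch:
  hosts: ["localhost:9200"]
  # Route events through the ingest pipeline built in step 2
  pipeline: "my-wizard-pipeline"
```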
Current Tasks
Must have
Nice to have