
Tail a File Wizard #5974

Closed
11 of 15 tasks
Bargs opened this issue Jan 21, 2016 · 7 comments
Labels
Feature:Add Data Add Data and sample data feature on Home release_note:enhancement

Comments

@Bargs
Contributor

Bargs commented Jan 21, 2016

Overview

Getting data into Elasticsearch isn't always a straightforward process. Elasticsearch gives us APIs to index documents, and it's up to us to figure out how to use them. Tools like logstash can be helpful but they add complexity to the process by introducing their own setup and configuration. We'd like to provide a UI in Kibana that streamlines this process for users, walking them through the steps required to get data into Elasticsearch, while reducing the friction that comes from having to set up external ingestion tools.

Features and design described below are still in flux and subject to change.

Proposals and Mockups

Adding data from a file

Getting unstructured or JSON data from a text file into Elasticsearch is the most common use case, and it's where we'll focus our efforts to begin with. We'll help the user configure any data processing they might need by using the upcoming ingest node feature in Elasticsearch, and we'll get them set up with a Kibana index pattern so they're ready to start playing with their data as soon as it's in ES.

Step 1 - Getting sample log lines

The first step in the wizard will provide the user with a box to paste in some sample log lines from their file. We'll use these samples in the following steps. If the samples are raw text (the expected use case), we'll wrap those lines in a JSON object that looks similar to a document sent by filebeat, without the extra metadata. We can provide some helpful text here explaining what filebeat is and why the logs are wrapped this way. If the user wants to take advantage of the extra metadata provided by filebeat, they can look up what fields are available via a link to the filebeat docs and craft a JSON document in this text box as they expect it to come out of filebeat.

    {"message": "<sample value>"}

If a user pastes in JSON, we'll assume they know what they're doing and that the samples are representative of the exact documents they'll be sending to ES.

File Step 1
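To make the wrapping behavior concrete, here's a rough sketch of how pasted text could be turned into filebeat-like documents. The helper name is hypothetical; the only detail taken from the proposal is the `message` field shown in the template above.

```python
import json

def wrap_samples(raw_text):
    """Turn pasted sample lines into documents resembling filebeat output."""
    docs = []
    for line in raw_text.splitlines():
        line = line.strip()
        if not line:
            continue
        try:
            # If the user pasted JSON, trust it as the exact document
            # they'll be sending to ES.
            docs.append(json.loads(line))
        except ValueError:
            # Otherwise wrap the raw line the way filebeat would,
            # minus the extra metadata fields.
            docs.append({"message": line})
    return docs

print(wrap_samples('127.0.0.1 - - [21/Jan/2016] "GET / HTTP/1.1" 200'))
# [{'message': '127.0.0.1 - - [21/Jan/2016] "GET / HTTP/1.1" 200'}]
```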

Step 2 - Parsing the samples and building a pipeline

This step is all about giving the user a way to easily process their data before indexing it in Elasticsearch. Traditionally this would be done with a tool like logstash, but we can make it easier by helping the user set up a new Elasticsearch ingestion pipeline. Using the sample data the user provided, we'll give them the ability to interactively build a pipeline that will turn their raw data into useful Elasticsearch documents.

File Step 2
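As an illustration of the request behind this interactive loop, here's a minimal sketch using Elasticsearch's pipeline simulate API; the host, grok pattern, and field names are placeholders, not part of the proposal.

```python
import requests

ES = "http://localhost:9200"  # placeholder host

# Candidate pipeline the user is building up in the UI.
pipeline = {
    "description": "parse sample log lines",
    "processors": [
        {"grok": {"field": "message",
                  "patterns": ["%{IPORHOST:clientip} %{GREEDYDATA:rest}"]}}
    ],
}

# Sample documents from Step 1.
docs = [{"_source": {"message": '127.0.0.1 - - [21/Jan/2016] "GET / HTTP/1.1" 200'}}]

resp = requests.post(ES + "/_ingest/pipeline/_simulate",
                     json={"pipeline": pipeline, "docs": docs})

# Each entry in resp.json()["docs"] shows the document as it would be indexed,
# which is what the wizard can render back to the user after every change.
print(resp.json())
```

Once the user is happy with the output, the same pipeline body can be saved with `PUT /_ingest/pipeline/<id>`.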

Step 3 - Creating a Kibana Index Pattern

Once the ingest pipeline is complete, we'll know what the final documents will look like in Elasticsearch. We'll use this information to help the user create a Kibana index pattern for their soon-to-be populated indices. The user will be able to customize the index pattern name, tell us whether the data contains time based events, and if so which field represents the time of each event. We'll attempt to detect the type of each field, but provide the user with the ability to overwrite those defaults.

File Step 3
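As a sketch of those type-detection defaults (an entirely hypothetical helper, not an existing Kibana or Elasticsearch API), the wizard could inspect the documents coming out of the simulated pipeline and guess a type for each field, which the user can then override:

```python
from datetime import datetime

DATE_FORMATS = ("%Y-%m-%dT%H:%M:%S", "%d/%b/%Y:%H:%M:%S")

def looks_like_date(value):
    for fmt in DATE_FORMATS:
        try:
            datetime.strptime(value, fmt)
            return True
        except ValueError:
            pass
    return False

def guess_field_types(doc):
    """Map each field of a sample document to a default Elasticsearch type."""
    types = {}
    for field, value in doc.items():
        if isinstance(value, bool):
            types[field] = "boolean"
        elif isinstance(value, int):
            types[field] = "long"
        elif isinstance(value, float):
            types[field] = "double"
        elif isinstance(value, str) and looks_like_date(value):
            types[field] = "date"
        else:
            types[field] = "string"
    return types

print(guess_field_types({"clientip": "127.0.0.1",
                         "status": 200,
                         "@timestamp": "2016-01-21T10:00:00"}))
# {'clientip': 'string', 'status': 'long', '@timestamp': 'date'}
```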

Step 4 - Installing Filebeat

This step isn't strictly required; if the user has some other means of sending their data to ES, we'll provide them with the URL they need to hit. However, most users will want an easy way to tail a file and send data to ES without writing their own scripts. Enter Filebeat. We'll give the user helpful advice on installing and setting up filebeat to send data from their chosen file through the ingest pipeline they just set up. This will probably start out as just some descriptive text and links to the filebeat docs, but it could include some more intelligent features down the road, as depicted in the mockup.

File Step 4
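For users who skip Filebeat, the "URL they need to hit" is just the index (or bulk) API with the `pipeline` parameter. A minimal tail-and-ship sketch under that assumption follows; the host, index name, pipeline id, and document endpoint are placeholders, and the exact URL shape depends on the Elasticsearch version.

```python
import time
import requests

ES = "http://localhost:9200"        # placeholder host
INDEX = "mylogs"                    # placeholder index
PIPELINE = "my-wizard-pipeline"     # placeholder pipeline id from Step 2

def tail_and_ship(path):
    """Follow a file like `tail -f` and send each new line through the pipeline."""
    with open(path) as f:
        f.seek(0, 2)  # start at the end of the file
        while True:
            line = f.readline()
            if not line:
                time.sleep(1.0)
                continue
            requests.post(
                "{}/{}/_doc?pipeline={}".format(ES, INDEX, PIPELINE),
                json={"message": line.rstrip("\n")},
            )

# tail_and_ship("/var/log/myapp.log")
```

Filebeat does this and much more (backpressure, a registry of read offsets, multiline handling), which is why it's the recommended path.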

Current Tasks

Must have

Nice to have

  • Persistence of user's work in localstorage or somewhere else
  • Ability to add new wizards with plugins
  • Ability to add UI for new ingest pipeline processors with plugins
@Bargs Bargs added the discuss label Jan 21, 2016
@Bargs Bargs self-assigned this Jan 21, 2016
@jccq

jccq commented Jan 22, 2016

Just wanted to say great great stuff :)

Certainly for stable, large-scale ingestions you won't be able to replace "the way it's supposed to be done", but quick loading of (even simple) data files is highly needed.

@chakrayadavalli

+:100: @Bargs Fantastic job. I was about to file a ticket for the same thing! I am not great with Angular but I am pretty familiar with logstash and similar tools. Let me know if you need an extra hand for testing out this feature.

@Bargs
Contributor Author

Bargs commented Feb 8, 2016

A decision was made in the last ingest meeting that, for now, we won't support sending structured JSON logs via Filebeat since Filebeat itself doesn't support it (ongoing discussion here), and we don't have a JSON processor. We should probably add a note on the Paste step giving the user a heads up about this.

@pemontto

pemontto commented Feb 9, 2016

Massive 👍

@Bargs
Contributor Author

Bargs commented Feb 12, 2016

Zoom Feedback:

  • Show JSON on pipeline creation screen as the pipeline api will see it
  • Better descriptive text for pattern review step and others
  • In the wizard steps, add an "Other" option with links to explain tailing a file isn't the only thing you can do?
  • Pipeline templates in additional phases? e.g. a pre-made pipeline for parsing apache logs

This was referenced Mar 15, 2016
@Bargs Bargs added the Feature:Add Data Add Data and sample data feature on Home label Mar 23, 2016
@Bargs Bargs changed the title Add Data UI Tail a File Wizard Jun 29, 2016
@Bargs Bargs removed their assignment Nov 30, 2016
@epixa
Contributor

epixa commented Dec 26, 2016

While this would be a nice feature to have, we've put our add data efforts on hold for the time being to focus on more impactful enhancements for Kibana.

@chrisronline
Contributor

Closing this out. This idea is still valuable, but we've started moving in a different direction, specifically with a new home page.
