
feat: RunOnce / Serverless Telegraf #7408

Closed
wants to merge 2 commits into from

Conversation

@rawkode (Contributor) commented Apr 24, 2020

Follow-up to #6537

I can reduce some of the duplication between Run and RunOnce, but I don't want to put too much time into this unless it's closer to what was expected.

Required for all PRs:

  • Signed CLA.
  • Associated README.md updated.
  • Has appropriate unit tests.

rawkode and others added 2 commits April 24, 2020 16:52
As per PR feedback, it is preferred not to pass through trailing boolean
parameters, and instead to isolate the behaviour of Run and RunOnce separately.
@rawkode rawkode changed the title Feature/serverless telegraf feat: RunOnce / Serverless Telegraf Apr 24, 2020
```go
if a.Config.Agent.RunOnce {
	err := a.gatherOnce(acc, input, interval)
	if err != nil {
		acc.AddError(err)
		return err
	}
```

A reviewer (Contributor) commented on these lines:

I might move error handling inside of gatherOnce. It won't affect existing uses of it, and it'll clean up both callers.

```go
inputC := make(chan telegraf.Metric, 100)
```

A reviewer (Contributor) commented on this line:

You'll want massive buffers here, and you're going to have a hard limit on how many metrics you can process this way. For example, a file input with 2 million lines is not going to work; it's going to fill this channel up before the rest of the chain can run. Since you're running the pieces one at a time, the (n+1)th write into an n-sized buffer is going to block forever.

@ssoroka (Contributor) left a comment:

I really like this, and it will work as-is for small test inputs, but if you test with an input that loads from a file with >= 101 lines, you'll find it hangs. Upping the channel buffer size just moves the max-metrics-generated problem further out. This is definitely an issue with a batch iteration like this; it's going to limit the maximum size of data an input can process. A more robust setup would be to build the entire chain backwards, streaming data all the way through the pipe, run the input collection once, then close the channels as they empty. This will probably require changes to the running(input|processor|aggregator|output) plugin wrappers so that, as channels close, they gracefully shut themselves down.

@danielnelson (Contributor) commented:

@rawkode There are a couple other minor things to do in addition to what @ssoroka found and I have a PR open that is going to cause further work. I'll sort out the remaining issues once #7390 is merged.

@rawkode (Contributor, Author) commented May 5, 2020:

Happy to make changes, just keep me updated @danielnelson .

Thank you :)

@danielnelson (Contributor) commented:

See #7474
