feat: RunOnce / Serverless Telegraf #7408
Conversation
As per PR feedback, it is preferred not to pass through trailing boolean parameters, and instead to isolate the behaviour into separate Run and RunOnce methods.
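A minimal sketch of that shape: rather than threading a trailing boolean through a single Run method, each behaviour gets its own entry point. The Agent type and method bodies below are illustrative stand-ins, not telegraf's actual implementation.

```go
package main

import "fmt"

// Agent is a hypothetical stand-in for telegraf's agent type.
type Agent struct{}

// Run starts the normal long-running collection loop.
func (a *Agent) Run() string { return "looping until shutdown" }

// RunOnce performs a single gather/flush cycle and returns,
// which is what a serverless invocation wants.
func (a *Agent) RunOnce() string { return "single gather/flush cycle" }

func main() {
	a := &Agent{}
	fmt.Println(a.Run())
	fmt.Println(a.RunOnce())
}
```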
```go
if a.Config.Agent.RunOnce {
	err := a.gatherOnce(acc, input, interval)
	if err != nil {
		acc.AddError(err)
		return err
	}
```

I might move error handling inside of gatherOnce. It won't affect existing use of it, and it'll clean up both callers.
```go
inputC := make(chan telegraf.Metric, 100)
```
You'll want massive buffers here, and you're going to have a hard limit on how many metrics you can process this way. For example, a file input with 2 million lines is not going to work; it will fill this channel before the rest of the chain can run. Since you're running the pieces one at a time, the (n+1)th write to an n-sized buffer is going to block forever.
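The blocking behaviour being described can be demonstrated directly: with no reader running, a buffered channel accepts exactly its capacity in sends, and the next send blocks. The sketch below uses a `select` with a `default` case so the program can observe the would-block condition without actually deadlocking.

```go
package main

import "fmt"

func main() {
	const n = 3
	ch := make(chan int, n) // buffered channel, like inputC with capacity 100

	// With no reader running, the first n sends succeed immediately...
	for i := 0; i < n; i++ {
		ch <- i
	}

	// ...but the (n+1)th send would block forever. select/default lets
	// us detect that instead of hanging.
	select {
	case ch <- n:
		fmt.Println("send succeeded")
	default:
		fmt.Println("buffer full: send would block")
	}
}
```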
I really like this, and it will work as-is for small test inputs, but if you test with an input that loads from a file with >= 101 lines, you'll find it hangs. Upping the channel buffer size just moves the max-metrics-generated problem further out. This is definitely an issue with a batch iteration like this; it's going to limit the maximum amount of data an input can process.

A more robust setup would be to build the entire chain backwards, streaming data all the way through the pipe, run the input collection once, then close the channels as they empty. This will probably require changes to the running (input|processor|aggregator|output) plugin wrappers so that, as channels close, they gracefully shut themselves down.
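A minimal sketch of that streaming layout, using illustrative stage names rather than telegraf's real plugin wrappers: each stage runs in its own goroutine, the chain is built output-first, and `close(ch)` propagates shutdown forward, so the input can emit far more metrics than any single buffer holds.

```go
package main

import (
	"fmt"
	"sync"
)

// Metric is a stand-in for telegraf.Metric.
type Metric int

func main() {
	inputC := make(chan Metric, 100)
	processedC := make(chan Metric, 100)

	var wg sync.WaitGroup
	count := 0

	// Output stage: drains until its upstream channel is closed.
	wg.Add(1)
	go func() {
		defer wg.Done()
		for range processedC {
			count++
		}
	}()

	// Processor stage: closes its downstream channel once upstream closes,
	// propagating shutdown through the chain.
	wg.Add(1)
	go func() {
		defer wg.Done()
		defer close(processedC)
		for m := range inputC {
			processedC <- m
		}
	}()

	// Input stage: emits 10x the buffer capacity without blocking,
	// because downstream stages consume concurrently.
	for i := 0; i < 1000; i++ {
		inputC <- Metric(i)
	}
	close(inputC) // begins the graceful shutdown cascade

	wg.Wait()
	fmt.Println(count) // 1000
}
```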
Happy to make changes, just keep me updated @danielnelson. Thank you :)
See #7474
Follow-up to #6537. I can reduce some of the duplication between `Run` and `RunOnce`, but I don't want to put too much time into this unless it's closer to what was expected.