Add Redis and Kafka outputs to the full configs #1690
Conversation
#required_acks: 0

# The number of seconds to wait for new events between two producer API calls.
#flush_interval: 1
Not related to this PR, but in my opinion we should always include the unit for durations, meaning this would be 1s (@urso).
Be super careful about just changing durations in config files without testing. Quite a bit of code has not been adjusted to the time.Duration support in ucfg.
flush_interval uses time.Duration, so it can be changed to 1s.
I was hoping so as this is a new implementation :-)
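A sketch of what the commented example would look like with the unit included, as suggested. The `output.kafka` nesting is an assumption about the surrounding file layout, not part of the diff:

```yaml
output:
  kafka:
    # The time to wait for new events between two producer API calls.
    # flush_interval is parsed as a time.Duration, so the unit can be
    # stated explicitly:
    #flush_interval: 1s
```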
LGTM. Some small comments added. @urso Could you also have a quick look, as you know more about the Redis / Kafka configs?
# The number of times to retry publishing an event after a publishing failure.
# After the specified number of retries, the events are typically dropped.
# Some Beats, such as Filebeat, ignore the max_retries setting and retry until
Some Beats, such as Filebeat, ignore the max_retries setting and retry until all events are published.
^^ Is this config file only for Filebeat? If so, can this section be deleted? That might remove some confusion when working across different Beats.
@JarenGlover The problem here is that this general part comes from the libbeat config file, which means it is generated. This way we make sure we don't have to keep multiple files up to date with the same config options.
@ruflin aha .. makes sense. thx
@JarenGlover There are also other options that could need Beat-specific optimizations. Perhaps in the future we can add some logic to the generation to accommodate this. Thanks for the input.
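For readers hitting this generated section, a sketch of the shared retry setting quoted above. The value 3 and the `output.kafka` nesting are illustrative assumptions, not taken from the diff:

```yaml
output:
  kafka:
    # Retry publishing an event this many times after a failure, then
    # drop it. Note: some Beats, such as Filebeat, ignore max_retries
    # and retry until all events are published.
    #max_retries: 3
```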
@ruflin @JarenGlover Thanks for the reviews. I added a commit with the edits you suggested, please have a look.
Part of #1417.
btw. to make it easier to figure out the output options, I put them into config.go for each output module. All network based outputs have
# Sets the output compression codec. Must be one of none, snappy and gzip. The
# default is snappy.
#compression: snappy
The default is gzip. You have to jump through some (small) hoops to get snappy support in your Kafka brokers.
OK, it was wrong in the docs then, I'll fix it there as well.
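For reference, a corrected sketch of the compression comment per the review above. The `output.kafka` nesting is assumed, not part of the diff:

```yaml
output:
  kafka:
    # Sets the output compression codec. Must be one of none, snappy
    # and gzip. The default is gzip (snappy needs extra setup on the
    # Kafka brokers).
    #compression: gzip
```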
Branch force-pushed from f6c5fd5 to a13e3f7, then from a13e3f7 to 6a2de0d.
Follow up from #1690. This also calls `make collect` and `make update` again to make sure all files are up to date.