
Add Redis and Kafka outputs to the full configs #1690

Merged (3 commits), May 24, 2016

Conversation

@tsg (Contributor) commented May 20, 2016

No description provided.

@tsg tsg added the review label May 20, 2016
@tsg tsg mentioned this pull request May 22, 2016
#required_acks: 0

# The number of seconds to wait for new events between two producer API calls.
#flush_interval: 1

Not related to this PR, but in my opinion we should always include the unit for durations, which means this would be 1s (@urso).

@urso commented May 23, 2016

Be super careful about changing durations in config files without testing. Quite a bit of code has not yet been adjusted for the time.Duration support in ucfg.


flush_interval uses time.Duration and can be changed to 1s.
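As a sketch, the commented-out kafka section under review would look like this with the unit made explicit (values are the ones quoted above, shown for illustration only):

```yaml
output:
  kafka:
    # Number of acknowledgements the producer requires from the broker.
    #required_acks: 0

    # Time to wait for new events between two producer API calls.
    # Written with an explicit unit, since the option is parsed as a
    # time.Duration.
    #flush_interval: 1s
```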


I was hoping so, as this is a new implementation :-)

@ruflin (Contributor) commented May 23, 2016

LGTM. Some small comments added. @urso Could you also have a quick look, as you know more about the Redis / Kafka configs.


# The number of times to retry publishing an event after a publishing failure.
# After the specified number of retries, the events are typically dropped.
# Some Beats, such as Filebeat, ignore the max_retries setting and retry until


Some Beats, such as Filebeat, ignore the max_retries setting and retry until all events are published.

^^ Is this config file only for Filebeat? If so, can this section be deleted? That might remove some confusion when working across different Beats.

@ruflin (Contributor):

@JarenGlover The problem here is that this general part comes from the libbeat config file, which means it is generated. This way we make sure we don't have to keep multiple files up to date with the config options.


@ruflin aha .. makes sense. thx

@ruflin (Contributor):

@JarenGlover There are also other options that could use Beat-specific optimizations. Perhaps in the future we can add some logic to the generation to accommodate this. Thanks for the input.

@tsg (Contributor, Author) commented May 23, 2016

@ruflin @JarenGlover Thanks for the reviews. I added a commit with the edits you suggested; please have a look.

@tsg (Contributor, Author) commented May 23, 2016

Part of #1417.

@urso commented May 23, 2016

By the way, to make it easier to figure out the output options, I put them into config.go for each output module.

In addition, all network-based outputs have the bulk_max_size and flush_interval config options: https://github.com/elastic/beats/blob/master/libbeat/publisher/output.go#L20
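As a sketch, those shared options would appear like this in any network-based output section (option names from the linked output.go; the values shown are illustrative, not the actual defaults):

```yaml
output:
  redis:
    # Maximum number of events to bundle into a single bulk request
    # (illustrative value).
    #bulk_max_size: 2048

    # How long to wait for new events before flushing a partially
    # filled batch (illustrative value).
    #flush_interval: 1s
```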


# Sets the output compression codec. Must be one of none, snappy and gzip. The
# default is snappy.
#compression: snappy

The default is gzip. You have to jump through some (small) hoops to get snappy support in your Kafka brokers.

tsg (Contributor, Author):

OK, it was wrong in the docs then. I'll fix it there as well.
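For reference, the corrected comment block would then read roughly as follows (a sketch of the fix agreed on above):

```yaml
# Sets the output compression codec. Must be one of none, snappy, or gzip.
# The default is gzip.
#compression: gzip
```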

@tsg tsg force-pushed the add_kafka_redis_to_config branch from f6c5fd5 to a13e3f7 Compare May 23, 2016 20:09
@tsg tsg force-pushed the add_kafka_redis_to_config branch from a13e3f7 to 6a2de0d Compare May 23, 2016 20:39
@ruflin ruflin merged commit 627c05e into elastic:master May 24, 2016
tsg pushed a commit to tsg/beats that referenced this pull request May 24, 2016
Follow up from elastic#1690. This also calls `make collect` and `make update`
again to make sure all files are up to date.
ruflin pushed a commit that referenced this pull request May 24, 2016
Follow up from #1690. This also calls `make collect` and `make update`
again to make sure all files are up to date.
@tsg tsg deleted the add_kafka_redis_to_config branch August 25, 2016 10:34