
Fix fields handling to properly batch HTTP request payloads #54

Merged 1 commit on Oct 16, 2019

Conversation

rvmiller89 (Contributor) commented Oct 15, 2019

As Paul noticed, after adding the additional fields to _sumo_metadata we were not properly batching the HTTP request payloads in order to set X-Sumo-Fields per batch.

This PR adds handling for fields from _sumo_metadata and adds test cases for:

  1. Batching without header/field overrides
  2. Batching with header overrides
  3. Batching with field overrides

Some of this may conflict with #53 but we can reconcile the differences.
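The batching idea behind this PR can be sketched roughly as follows. This is an illustrative Ruby snippet, not the plugin's actual code: the record shape, the `http_post` call, and the variable names are all assumptions. The point is that records are grouped by their fields value so each HTTP request carries a single, consistent X-Sumo-Fields header.

```ruby
# Illustrative sketch (not the plugin's actual implementation):
# group records by their fields so each HTTP request can set one
# X-Sumo-Fields header for its whole payload.
records = [
  { fields: 'team=sre', msg: 'disk full' },
  { fields: 'team=sre', msg: 'disk ok' },
  { fields: 'team=dev', msg: 'deploy done' },
]

# One batch per distinct fields value.
batches = records.group_by { |r| r[:fields] }

batches.each do |fields, recs|
  headers = { 'X-Sumo-Fields' => fields }
  payload = recs.map { |r| r[:msg] }.join("\n")
  # http_post(sumo_url, headers, payload)  # hypothetical send
  puts "#{fields}: #{recs.size} record(s)"
end
```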

rvmiller89 (Author) commented on `def dump_collected_fields(log_fields)`:

this function will likely come back after we do #53

```ruby
fields = sumo_metadata['fields'] || ""
fields = extract_placeholders(fields, chunk) unless fields.nil?

"#{source_name}:#{source_category}:#{source_host}:#{fields}"
```

rvmiller89 (Author) commented:
not addressed by this PR, but something we should fix: having a colon (:) in any metadata field or value would currently break this

A contributor replied:
Good point - I believe you can use Ruby hashes as hash keys? Maybe that's something we can look into, though it does make it pretty complex.
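The structured-key idea can be sketched in a few lines. This is a hypothetical example, not code from the PR: `batch_key` and the sample records are made up. Ruby arrays (and hashes) implement `#hash` and `#eql?`, so they work directly as Hash keys, and colons inside metadata values cause no ambiguity because there is no string joining.

```ruby
# Hypothetical sketch: batch on a structured key instead of a
# colon-joined string, so a ':' inside any metadata value is harmless.
def batch_key(source_name, source_category, source_host, fields)
  [source_name, source_category, source_host, fields].freeze
end

batches = Hash.new { |h, k| h[k] = [] }

logs = [
  { meta: ['app', 'prod:us-east', 'host1', 'team=sre'], line: 'log A' },
  { meta: ['app', 'prod:us-east', 'host1', 'team=sre'], line: 'log B' },
  { meta: ['app', 'prod', 'host2', 'team=dev'],         line: 'log C' },
]

logs.each { |log| batches[batch_key(*log[:meta])] << log[:line] }

# Two distinct batches; the colon in 'prod:us-east' split nothing.
puts batches.size  # => 2
```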

samjsong (Contributor) left a review comment:
Overall LGTM, but I guess this assumes the string containing the fields is in some well-defined order; otherwise our batches will be smaller than intended, though it doesn't make a difference from the user's perspective.
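The order-sensitivity concern above could be addressed by normalizing the fields string before using it as a batch key. A minimal sketch, assuming a hypothetical `normalize_fields` helper and comma-separated `key=value` pairs (not part of this PR):

```ruby
# Hypothetical normalization: sort the comma-separated key=value pairs
# so logically identical field sets always produce the same batch key,
# regardless of the order they arrived in.
def normalize_fields(fields)
  fields.split(',').map(&:strip).sort.join(',')
end

a = normalize_fields('team=sre,env=prod')
b = normalize_fields('env=prod, team=sre')
puts a == b  # => true (both normalize to "env=prod,team=sre")
```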

frankreno (Contributor) left a review comment:
LGTM. Something we should potentially put into the backlog: some basic inspection of the fields to ensure they meet requirements, e.g. a max of 20 fields, etc. Not the focus of this PR, just bringing it up. Agreed on the colon; I just did a check on my clusters and did not see anything, but I wonder if it's worth a proper fix?
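The validation suggested above could look something like this. A hypothetical pre-flight check, not code from this PR; the limit of 20 fields comes from the review comment, and the `key=value` format is an assumption:

```ruby
# Hypothetical pre-flight check: validate fields before sending.
# MAX_FIELDS = 20 follows the limit mentioned in the review comment.
MAX_FIELDS = 20

def validate_fields!(fields)
  pairs = fields.split(',')
  if pairs.size > MAX_FIELDS
    raise ArgumentError, "too many fields (#{pairs.size} > #{MAX_FIELDS})"
  end
  pairs.each do |pair|
    raise ArgumentError, "malformed field: #{pair}" unless pair.include?('=')
  end
  true
end

puts validate_fields!('team=sre,env=prod')  # => true
```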

rvmiller89 (Author) replied:
Yeah, these are all great points. I'm thinking we start with this for now and revisit our HTTP batching to account for these edge cases.

@rvmiller89 rvmiller89 merged commit 7fc9169 into master Oct 16, 2019
@rvmiller89 rvmiller89 deleted the rmiller-fluentd-output-fix-metadata-key branch October 16, 2019 18:30