Fix fields handling to properly batch HTTP request payloads #54
Conversation
```ruby
]
end

def dump_collected_fields(log_fields)
```
This function will likely come back after we do #53.
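For context, a minimal sketch of what such a helper might do, assuming the fields arrive as a Hash and that X-Sumo-Fields carries comma-separated key=value pairs (both the Hash shape and the exact serialization are assumptions, not this repo's code):

```ruby
# Hypothetical sketch: serialize a fields Hash into the comma-separated
# key=value form carried by the X-Sumo-Fields header.
def dump_collected_fields(log_fields)
  return nil if log_fields.nil?
  log_fields.map { |k, v| "#{k}=#{v}" }.join(',')
end

dump_collected_fields('cluster' => 'prod', 'pod' => 'api-1')
# => "cluster=prod,pod=api-1"
```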
```ruby
fields = sumo_metadata['fields'] || ""
fields = extract_placeholders(fields, chunk) unless fields.nil?

"#{source_name}:#{source_category}:#{source_host}:#{fields}"
```
Not addressed by this PR, but something we should fix: having a colon (:) in any metadata field or value would currently break this.
Good point - I believe you can use Ruby hashes as keys? Maybe that's something we can look into, though it does make it pretty complex.
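To make that idea concrete, a sketch (not what this PR implements) of grouping by a structured key instead of a colon-joined string. Ruby Arrays compare by value, so they work directly as Hash keys, and a ':' inside a value can no longer collide with the separator. The record shape below is illustrative only:

```ruby
# Illustrative records; note the ':' inside the category value.
records = [
  { 'msg' => 'a', '_sumo_metadata' => { 'source' => 'app', 'category' => 'prod:web', 'host' => 'h1' } },
  { 'msg' => 'b', '_sumo_metadata' => { 'source' => 'app', 'category' => 'prod:web', 'host' => 'h1' } },
]

batches = Hash.new { |h, k| h[k] = [] }
records.each do |record|
  meta = record['_sumo_metadata'] || {}
  # An Array key sidesteps delimiter collisions entirely.
  key = [meta['source'], meta['category'], meta['host'], meta['fields']]
  batches[key] << record
end

batches.size  # => 1, despite the ':' inside 'prod:web'
```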
Overall LGTM, but I guess this assumes the string containing fields is in some well-defined order; otherwise our batches will be smaller than intended, though I guess it doesn't make a difference from the user's perspective.
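To illustrate the ordering concern: "a=1,b=2" and "b=2,a=1" describe the same field set but would produce different batch keys, splitting one logical batch in two. A hypothetical normalization step (not in this PR) that sorts the pairs first:

```ruby
# Hypothetical helper: sort key=value pairs so logically identical
# field sets always produce the same batch key.
def normalize_fields(fields)
  return '' if fields.nil? || fields.empty?
  fields.split(',').map(&:strip).sort.join(',')
end

normalize_fields('b=2,a=1')  # => "a=1,b=2"
normalize_fields('a=1,b=2')  # => "a=1,b=2" (same key, so one batch)
```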
LGTM. Something we should potentially put into the backlog is some basic inspection of the fields to ensure they meet requirements, e.g. a max of 20, etc. Not the focus of this PR, just bringing it up. Agree about the colon. I just did a check on my clusters and did not see anything, but wondering if it's worth a proper fix?
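A sketch of what that basic inspection could look like, using the max-of-20 figure above purely as an example (the real Sumo Logic limits would need to be confirmed):

```ruby
# Hypothetical validation pass over a comma-separated fields string.
MAX_FIELDS = 20  # example limit taken from the comment above, not verified

def validate_fields!(fields)
  pairs = fields.split(',')
  raise ArgumentError, "too many fields (#{pairs.size} > #{MAX_FIELDS})" if pairs.size > MAX_FIELDS
  pairs.each do |pair|
    k, v = pair.split('=', 2)
    raise ArgumentError, "malformed field: #{pair.inspect}" if k.nil? || k.empty? || v.nil?
  end
  fields
end
```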
Yeah, these are all great points. I am thinking we start with this for now, and revisit our HTTP batching to account for these edge cases.
Noticed by Paul: after adding the additional fields to _sumo_metadata, we were not properly batching the HTTP request payloads in order to set X-Sumo-Fields. This PR adds handling for fields from _sumo_metadata and adds test cases. Some of this may conflict with #53, but we can reconcile the differences.
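For background on why the batching matters here: the fields travel in a request header rather than in the payload, so each distinct metadata combination has to become its own HTTP request. A minimal sketch of sending one batch (the collector URL is a placeholder; X-Sumo-Fields is the real header name):

```ruby
require 'net/http'
require 'uri'

# Sketch: send one batch of log lines, attaching its fields via the
# X-Sumo-Fields header. Placeholder endpoint; not this plugin's code.
def send_batch(lines, fields)
  uri = URI('https://collectors.example.com/receiver/v1/http/XXXX')
  request = Net::HTTP::Post.new(uri)
  request['X-Sumo-Fields'] = fields unless fields.nil? || fields.empty?
  request.body = lines.join("\n")
  Net::HTTP.start(uri.host, uri.port, use_ssl: true) { |http| http.request(request) }
end
```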