Marks: First measurement decides if field is mapped to long or float #708

When sending up `marks` with transactions, we currently use a dynamic mapping, which lets Elasticsearch determine the mapping when it sees a new mark for the first time. Unfortunately, JSON only has `Number`s; it doesn't distinguish between int/long and float. That means that when we send up a new mark, even if it is measured as a float in the agent, the first measurement may just happen to be a number with no significant decimal digits. This causes the field to be mapped as a `long` in the Elasticsearch mapping, and any measurements with significant decimal digits that we send up subsequently will be truncated to integers. Since these are millisecond measurements it might not matter too much, but we should at least make sure they are consistently mapped.

Here's an example of a mapping that can happen today (sketched below). In this example, the first value of `thing1` was `1`, which caused the mapping to become `long`. Subsequent values like `1.2` and `1.2333` will be truncated to `1`.
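The original mapping snippet was lost from the issue as rendered here; below is a minimal reconstruction of what such a dynamically generated mapping could look like, assuming marks are indexed under `transaction.marks` (the exact field path is an assumption):

```json
{
  "transaction": {
    "properties": {
      "marks": {
        "properties": {
          "thing1": {
            "type": "long"
          }
        }
      }
    }
  }
}
```

Once `thing1` is mapped as `long`, Elasticsearch's default coercion truncates the fractional part of any later float values at index time.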
Comments
@roncohen what do you think of making this specific to …? Users could extend `fields.yml` (or the new …). Note that some form of elastic/beats#6691 is required for that approach.
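As an illustration of that suggestion, here is a sketch of what such a `fields.yml` entry could look like, assuming the `object_type` support discussed around elastic/beats#6691; the exact keys and values shown are assumptions, not a confirmed schema:

```yaml
# Hypothetical fields.yml entry: map every dynamically added property
# under transaction.marks with an explicit float-like type instead of
# letting Elasticsearch infer long vs. float from the first value seen.
- name: transaction.marks
  type: object
  object_type: scaled_float
  scaling_factor: 1000000
  dynamic: true
```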
Sounds good to me @graphaelli. Good job on the beats PR.
In #828 (comment) @felixbarny brings up a good point:
Not my area of expertise here, but https://www.w3.org/TR/navigation-timing/#performancetiming seems to indicate all of these are times and not durations - should we map these to …?
Ping @jahtalab regarding the suggestion of treating mark values as timestamps.
@simitt, we convert navigation timings to durations since it is more useful that way. Of course the UI or the user can make this conversion, but it's easier to query and compare transactions with durations, since the exact timestamp is not relevant for these comparisons. We can treat them as …
To elaborate a bit: we have them as relative time offsets, relative to when the navigation was started (ish). It's useful to be able to aggregate across them, e.g. show me the average time till the …
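To make the aggregation idea concrete, here is a minimal query sketch; the `apm-*` index pattern and the `transaction.marks.navigationTiming.domComplete` field path are illustrative assumptions:

```json
GET apm-*/_search
{
  "size": 0,
  "aggs": {
    "avg_time_to_dom_complete": {
      "avg": {
        "field": "transaction.marks.navigationTiming.domComplete"
      }
    }
  }
}
```

An aggregation like this only returns sensible averages if every mark value was indexed with a consistent numeric type and unit, which is exactly what this issue is about.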
Would it make sense to use the same storage type and unit as …? Also, I don't think the resolution limits of browsers should be a deciding factor, since backend agents have uses for marks as well (e.g. I have an experimental branch that marks the start and end of garbage collection).
I've put up #915 to address the immediate need. Namely, it specially stores … Is that sufficient for this, or do we want to make sure any number stored under …? Also, I agree with @beniwohli that it's confusing to store durations as microseconds but marks as milliseconds. So far this data has been opaque to the APM UI, but if we are going to rely on it soon, I agree we should change the description to microseconds.
I vote to handle them exactly as transaction/span durations: agents send them as milliseconds with type …
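For illustration, a minimal sketch of an agent payload under that proposal, with marks sent as millisecond floats; the transaction shape and mark names here are assumptions rather than the exact intake schema:

```json
{
  "transactions": [
    {
      "name": "GET /orders",
      "type": "page-load",
      "duration": 230.5,
      "marks": {
        "navigationTiming": {
          "domInteractive": 98.2,
          "domComplete": 120.5
        }
      }
    }
  ]
}
```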
I'm ok with apm-server converting the payload to microseconds! So the agent payload stays the same, but it is stored in microseconds in ES, and the UI will show them in milliseconds again. Pinging @elastic/apm-ui, since this change in the data (milliseconds to microseconds) affects the UI as well.
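As a worked example of that round trip (values illustrative): an agent reports a mark of 120.5 (milliseconds), apm-server multiplies by 1000 and indexes 120500 (microseconds), and the UI divides by 1000 to render it as 120.5 ms again.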
@jahtalab When is this change due? In order for us to change it in the UI, we need to have it added in the next stack release. |
I was under the impression that marks aren't implemented in the UI yet? So we can decide on a unit/format somewhat independently of the UI team (assuming that stored as …).
@beniwohli @jahtalab I probably jumped the gun; marks are not yet supported, so there's no issue with the UI at the moment. I thought it was related to something other than marks.