Conversation
model/span.go
Outdated
@@ -59,3 +64,14 @@ func NewFlushMarker() Span {
func (s *Span) End() int64 {
	return s.Start + s.Duration
}

// Weight returns the weigth of the span as defined for sampling, i.e. the
typo: weight
model/span.go
Outdated
// Weight returns the weigth of the span as defined for sampling, i.e. the
// inverse of the sampling rate.
func (s *Span) Weight() float64 {
	sampleRate, found := s.Metrics[SpanSampleRateMetricKey]
nitpick: but for consistency we use ok rather than found for maps
Having an element in a map is not "ok" (or not), but as you wish.
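For illustration, a minimal sketch of how the rest of Weight could read with the comma-ok naming applied. It relies on the Span type and SpanSampleRateMetricKey from model/span.go; the nil guard and the check for invalid rates are assumptions, not part of the diff shown above:

```go
// Weight returns the weight of the span as defined for sampling, i.e. the
// inverse of the sampling rate.
func (s *Span) Weight() float64 {
	if s == nil {
		return 1.0
	}
	// Comma-ok form as suggested; fall back to a weight of 1 when the
	// client did not record a sample rate or recorded an unusable value
	// (assumed guard, not shown in the diff above).
	sampleRate, ok := s.Metrics[SpanSampleRateMetricKey]
	if !ok || sampleRate <= 0 || sampleRate > 1 {
		return 1.0
	}
	return 1.0 / sampleRate
}
```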
agent/concentrator.go
Outdated
} else {
-	b.HandleSpan(s, t.Env, c.aggregators, nil)
+	b.HandleSpan(s, t.Env, c.aggregators, t.Root, nil)
For the API, if we just use the weight of the root span, can we consider passing just the weight rather than a pointer to the full span? That would avoid having to check for nil pointers downstream, etc.
Sure.
The root span of a trace can contain a _sample_rate value if the client sampled spans. When this is the case, we scale stats (hit and error counts, and durations) by a factor, or weight, equal to 1.0 / _sample_rate. This also means that we have to use floating point values to accumulate values in StatsRawBucket.
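To make the scaling concrete, here is a small, self-contained sketch of how hit, error, and duration counts get multiplied by the weight. The names and layout are illustrative only; the real accumulation happens inside StatsRawBucket via HandleSpan:

```go
package main

import "fmt"

// weightedCounts mirrors the idea behind the float64 counters discussed
// below; the name and field layout are illustrative, not the actual
// StatsRawBucket internals.
type weightedCounts struct {
	hits     float64
	errors   float64
	duration float64 // nanoseconds
}

// add scales each contribution by the span's weight, i.e. 1.0 / _sample_rate.
func (c *weightedCounts) add(durationNs int64, isError bool, weight float64) {
	c.hits += weight
	if isError {
		c.errors += weight
	}
	c.duration += float64(durationNs) * weight
}

func main() {
	var c weightedCounts
	// A span kept with _sample_rate = 0.25 stands in for roughly 4 real
	// spans, so it is counted with weight 1 / 0.25 = 4.
	c.add(1_500_000, false, 1.0/0.25)
	fmt.Printf("hits=%.0f errors=%.0f duration=%.0fns\n", c.hits, c.errors, c.duration)
	// hits=4 errors=0 duration=6000000ns
}
```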
161ea07 to be0fcd3
LGTM.
But as a side note, we'll need to add a shit ton of documentation to make users understand that once they use this, the stats will be really weird in certain cases.
-	duration int64
+	hits float64
+	errors float64
+	duration float64
Since we are in ns here, we can stick to int64 without any significant loss of precision.
Or we can use float64 without losing precision, since it will be exported to a float64 value anyway (see next comment).
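As a quick sanity check on the precision point (a back-of-the-envelope sketch, not part of the PR): float64 has a 53-bit mantissa, so integer nanosecond values stay exact up to 2^53 ns, roughly 104 days, far longer than any single stats bucket:

```go
package main

import "fmt"

func main() {
	// Integers up to 2^53 are represented exactly by a float64.
	const maxExactNs = float64(1 << 53)
	fmt.Printf("2^53 ns ≈ %.1f days\n", maxExactNs/1e9/86400)
	// Output: 2^53 ns ≈ 104.2 days
}
```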
-	hits int64
-	errors int64
-	duration int64
+	hits float64
Is the move from int64 to float64 going to impact anything in the serialization/payload to the API? Is the API compatible?
I think this is an internal format only that we then export to the API payload format.
These counters are for internal computations in StatsRawBucket. (*StatsRawBucket).Export() returns a StatsBucket, which already stores data using Count structures that use float64 values.
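A rough sketch of that internal-vs-exported split; only the names StatsRawBucket, StatsBucket, Count, and Export come from the discussion, while the field layout and the keying scheme are assumptions for illustration:

```go
// Count is the exported representation; it already uses float64 (assumed shape).
type Count struct {
	Value float64
}

// StatsBucket is what ends up in the API payload (assumed shape).
type StatsBucket struct {
	Counts map[string]Count
}

// statsRawCounts are the internal accumulators discussed above; keeping
// them as float64 lets weighted (non-integer) contributions add up cleanly.
type statsRawCounts struct {
	hits     float64
	errors   float64
	duration float64
}

// StatsRawBucket accumulates raw, weighted counters before export.
type StatsRawBucket struct {
	data map[string]statsRawCounts
}

// Export converts the internal accumulators into the StatsBucket/Count
// form; nothing is lost because both sides use float64.
func (b *StatsRawBucket) Export() StatsBucket {
	out := StatsBucket{Counts: make(map[string]Count, 3*len(b.data))}
	for key, c := range b.data {
		out.Counts[key+"|hits"] = Count{Value: c.hits}
		out.Counts[key+"|errors"] = Count{Value: c.errors}
		out.Counts[key+"|duration"] = Count{Value: c.duration}
	}
	return out
}
```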
The root span of a trace can contain a _sample_rate value if the client sampled spans. When this is the case, we scale stats (hit and error counts, and durations) by a factor, or weight, equal to 1.0 / _sample_rate. This also means that we have to use floating point values to accumulate values for StatsRawBucket.