[receiver/elasticsearch]: add flush time metric on index level #14924

Closed
16 changes: 16 additions & 0 deletions .chloggen/elasticsearch-flush-time.yaml
@@ -0,0 +1,16 @@
# One of 'breaking', 'deprecation', 'new_component', 'enhancement', 'bug_fix'
change_type: enhancement
Member:

Since we were already collecting this but not emitting it, and because there is already a metric for this in the data model, I think this should be considered a bug fix.

Member Author:

It was omitted on purpose - when adding the search metrics I reused the operation attribute, so it's natural that all unused operation types are collected but not emitted.

Member:

I see, but this PR does not add a new metric. It adds a new data point to an existing metric. The result is that the existing metric will have a different "total" than before.

If we think adding this new data point gives us an accurate total, then this is a bug fix. Otherwise, if we think the "total" is still only part of the picture, then we could look to complete the total by adding additional data points.

Member Author:

To sum up: do you think we should add all the missing data points at once?
For example, #14871 is a very similar pull request to this one. If we want to add the missing data points at once, I will close both PRs and send a new one that adds all of them.

Member:

If there are multiple missing data points for the same metric, I think it's appropriate to add them in one PR.
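
For context on the thread above, here is a minimal pdata sketch of why a new data point shifts the implied total of an existing sum metric rather than introducing a new metric. The metric name and the flush value come from this PR's golden files; the query value is illustrative, and the pdata calls assume a current collector API:

package main

import (
	"fmt"

	"go.opentelemetry.io/collector/pdata/pmetric"
)

func main() {
	metric := pmetric.NewMetrics().
		ResourceMetrics().AppendEmpty().
		ScopeMetrics().AppendEmpty().
		Metrics().AppendEmpty()
	metric.SetName("elasticsearch.index.operations.time")
	sum := metric.SetEmptySum()

	// Pre-existing data point: query time, keyed by the "operation" attribute.
	query := sum.DataPoints().AppendEmpty()
	query.SetIntValue(1000) // illustrative value
	query.Attributes().PutStr("operation", "query")

	// Data point added by this PR: flush time. Same metric, new attribute value.
	flush := sum.DataPoints().AppendEmpty()
	flush.SetIntValue(192) // value used in the golden files
	flush.Attributes().PutStr("operation", "flush")

	// A backend summing across "operation" now sees a larger total for the
	// same metric name, which is the point raised in the review thread.
	var total int64
	for i := 0; i < sum.DataPoints().Len(); i++ {
		total += sum.DataPoints().At(i).IntValue()
	}
	fmt.Println("total:", total) // 1192
}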


# The name of the component, or a single word describing the area of concern, (e.g. filelogreceiver)
component: elasticsearchreceiver

# A brief description of the change. Surround your text with quotes ("") if it needs to start with a backtick (`).
note: add flush time metric on index level

# One or more tracking issues related to the change
issues: [14635]

# (Optional) One or more lines of additional information to render under the primary note.
# These lines will be padded with 2 spaces and then inserted directly into the document.
# Use pipe (|) for multiline entries.
subtext:
5 changes: 4 additions & 1 deletion receiver/elasticsearchreceiver/scraper.go
@@ -333,7 +333,7 @@ func (r *elasticsearchScraper) scrapeIndicesMetrics(ctx context.Context, now pco
 	indexStats, err := r.client.IndexStats(ctx, r.cfg.Indices)

 	if err != nil {
-		errs.AddPartial(4, err)
+		errs.AddPartial(5, err)
 		return
 	}

@@ -359,6 +359,9 @@ func (r *elasticsearchScraper) scrapeOneIndexMetrics(now pcommon.Timestamp, name
 	r.mb.RecordElasticsearchIndexOperationsTimeDataPoint(
 		now, stats.Total.SearchOperations.QueryTimeInMs, metadata.AttributeOperationQuery, metadata.AttributeIndexAggregationTypeTotal,
 	)
+	r.mb.RecordElasticsearchIndexOperationsTimeDataPoint(
+		now, stats.Total.FlushOperations.TotalTimeInMs, metadata.AttributeOperationFlush, metadata.AttributeIndexAggregationTypeTotal,
+	)

 	r.mb.EmitForResource(metadata.WithElasticsearchIndexName(name), metadata.WithElasticsearchClusterName(r.clusterName))
 }
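
For reference, the flush time recorded here is read off the index stats model. A hedged sketch of the relevant model fields follows: the type and field names are taken from the call sites above (stats.Total.FlushOperations.TotalTimeInMs), the JSON tags mirror Elasticsearch's _stats response, and the exact shape of the receiver's internal model package is an assumption.

// Sketch of the model structs read by scrapeOneIndexMetrics.
package model

type IndexStats struct {
	Total IndexStatsDetails `json:"total"`
}

type IndexStatsDetails struct {
	SearchOperations SearchOperations `json:"search"`
	FlushOperations  FlushOperations  `json:"flush"`
}

type SearchOperations struct {
	QueryTotal    int64 `json:"query_total"`
	QueryTimeInMs int64 `json:"query_time_in_millis"`
}

type FlushOperations struct {
	Total         int64 `json:"total"`
	TotalTimeInMs int64 `json:"total_time_in_millis"`
}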
@@ -2297,6 +2297,25 @@
],
"startTimeUnixNano": "1661811689941624000",
"timeUnixNano": "1661811689943245000"
},
{
"asInt": "192",
"attributes": [
{
"key": "operation",
"value": {
"stringValue": "flush"
}
},
{
"key": "aggregation",
"value": {
"stringValue": "total"
}
}
],
"startTimeUnixNano": "1661811689941624000",
"timeUnixNano": "1661811689943245000"
}
]
},
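
The JSON hunk above and the remaining test-data hunks are golden files asserting that the scraper's expected output now includes the flush data point. A sketch of how such an expectation could be checked; golden.ReadMetrics, its package path, and the file path are assumptions based on the contrib repo's internal test helpers:

package elasticsearchreceiver

import (
	"path/filepath"
	"testing"

	"github.com/stretchr/testify/require"

	// Assumed helper package for reading expected-metrics JSON golden files.
	"github.com/open-telemetry/opentelemetry-collector-contrib/internal/scrapertest/golden"
)

func TestGoldenFileHasFlushDataPoint(t *testing.T) {
	// Illustrative path; each file updated in this PR could be checked the same way.
	expected, err := golden.ReadMetrics(filepath.Join("testdata", "expected_metrics", "full.json"))
	require.NoError(t, err)

	found := false
	rms := expected.ResourceMetrics()
	for i := 0; i < rms.Len(); i++ {
		sms := rms.At(i).ScopeMetrics()
		for j := 0; j < sms.Len(); j++ {
			ms := sms.At(j).Metrics()
			for k := 0; k < ms.Len(); k++ {
				if ms.At(k).Name() != "elasticsearch.index.operations.time" {
					continue
				}
				dps := ms.At(k).Sum().DataPoints()
				for l := 0; l < dps.Len(); l++ {
					if op, ok := dps.At(l).Attributes().Get("operation"); ok && op.Str() == "flush" {
						found = true
					}
				}
			}
		}
	}
	require.True(t, found, "golden file should contain an operation=flush data point")
}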
@@ -2423,6 +2442,25 @@
],
"startTimeUnixNano": "1661811689941624000",
"timeUnixNano": "1661811689943245000"
},
{
"asInt": "192",
"attributes": [
{
"key": "operation",
"value": {
"stringValue": "flush"
}
},
{
"key": "aggregation",
"value": {
"stringValue": "total"
}
}
],
"startTimeUnixNano": "1661811689941624000",
"timeUnixNano": "1661811689943245000"
}
]
},
@@ -2490,6 +2490,25 @@
],
"startTimeUnixNano": "1661811689941624000",
"timeUnixNano": "1661811689943245000"
},
{
"asInt": "192",
"attributes": [
{
"key": "operation",
"value": {
"stringValue": "flush"
}
},
{
"key": "aggregation",
"value": {
"stringValue": "total"
}
}
],
"startTimeUnixNano": "1661811689941624000",
"timeUnixNano": "1661811689943245000"
}
]
},
@@ -2616,6 +2635,25 @@
],
"startTimeUnixNano": "1661811689941624000",
"timeUnixNano": "1661811689943245000"
},
{
"asInt": "192",
"attributes": [
{
"key": "operation",
"value": {
"stringValue": "flush"
}
},
{
"key": "aggregation",
"value": {
"stringValue": "total"
}
}
],
"startTimeUnixNano": "1661811689941624000",
"timeUnixNano": "1661811689943245000"
}
]
},
@@ -306,6 +306,25 @@
],
"startTimeUnixNano": "1661811689941624000",
"timeUnixNano": "1661811689943245000"
},
{
"asInt": "192",
"attributes": [
{
"key": "operation",
"value": {
"stringValue": "flush"
}
},
{
"key": "aggregation",
"value": {
"stringValue": "total"
}
}
],
"startTimeUnixNano": "1661811689941624000",
"timeUnixNano": "1661811689943245000"
}
]
},
@@ -432,6 +451,25 @@
],
"startTimeUnixNano": "1661811689941624000",
"timeUnixNano": "1661811689943245000"
},
{
"asInt": "192",
"attributes": [
{
"key": "operation",
"value": {
"stringValue": "flush"
}
},
{
"key": "aggregation",
"value": {
"stringValue": "total"
}
}
],
"startTimeUnixNano": "1661811689941624000",
"timeUnixNano": "1661811689943245000"
}
]
},
@@ -5295,6 +5295,12 @@
"value": {
"stringValue": ".geoip_databases"
}
},
{
"key": "elasticsearch.cluster.name",
"value": {
"stringValue": "docker-cluster"
}
}
]
},
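
This hunk and the matching ones below also add the elasticsearch.cluster.name resource attribute, which scraper.go now sets via metadata.WithElasticsearchClusterName. A minimal sketch of the resulting resource shape, with values taken from the golden files; the pdata calls assume a current collector API:

package main

import (
	"fmt"

	"go.opentelemetry.io/collector/pdata/pmetric"
)

func main() {
	rm := pmetric.NewMetrics().ResourceMetrics().AppendEmpty()
	attrs := rm.Resource().Attributes()
	// One resource per index, now also tagged with the cluster name.
	attrs.PutStr("elasticsearch.index.name", ".geoip_databases")
	attrs.PutStr("elasticsearch.cluster.name", "docker-cluster")
	fmt.Println(attrs.AsRaw())
}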
@@ -5394,6 +5400,25 @@
],
"startTimeUnixNano": "1661811689941624000",
"timeUnixNano": "1661811689943245000"
},
{
"asInt": "192",
"attributes": [
{
"key": "operation",
"value": {
"stringValue": "flush"
}
},
{
"key": "aggregation",
"value": {
"stringValue": "total"
}
}
],
"startTimeUnixNano": "1661811689941624000",
"timeUnixNano": "1661811689943245000"
}
]
},
@@ -5415,6 +5440,12 @@
"value": {
"stringValue": "_all"
}
},
{
"key": "elasticsearch.cluster.name",
"value": {
"stringValue": "docker-cluster"
}
}
]
},
@@ -5514,6 +5545,25 @@
],
"startTimeUnixNano": "1661811689941624000",
"timeUnixNano": "1661811689943245000"
},
{
"asInt": "192",
"attributes": [
{
"key": "operation",
"value": {
"stringValue": "flush"
}
},
{
"key": "aggregation",
"value": {
"stringValue": "total"
}
}
],
"startTimeUnixNano": "1661811689941624000",
"timeUnixNano": "1661811689943245000"
}
]
},
@@ -4076,6 +4076,12 @@
"value": {
"stringValue": ".geoip_databases"
}
},
{
"key": "elasticsearch.cluster.name",
"value": {
"stringValue": "docker-cluster"
}
}
]
},
@@ -4175,6 +4181,25 @@
],
"startTimeUnixNano": "1661811689941624000",
"timeUnixNano": "1661811689943245000"
},
{
"asInt": "192",
"attributes": [
{
"key": "operation",
"value": {
"stringValue": "flush"
}
},
{
"key": "aggregation",
"value": {
"stringValue": "total"
}
}
],
"startTimeUnixNano": "1661811689941624000",
"timeUnixNano": "1661811689943245000"
}
]
},
@@ -4196,6 +4221,12 @@
"value": {
"stringValue": "_all"
}
},
{
"key": "elasticsearch.cluster.name",
"value": {
"stringValue": "docker-cluster"
}
}
]
},
@@ -4295,6 +4326,25 @@
],
"startTimeUnixNano": "1661811689941624000",
"timeUnixNano": "1661811689943245000"
},
{
"asInt": "192",
"attributes": [
{
"key": "operation",
"value": {
"stringValue": "flush"
}
},
{
"key": "aggregation",
"value": {
"stringValue": "total"
}
}
],
"startTimeUnixNano": "1661811689941624000",
"timeUnixNano": "1661811689943245000"
}
]
},