Timelion search ignores time range when choosing indices #10475
Thanks @thomasneirynck, much appreciated.
Any help there? I'm having the same issue.
Same issue. Can't use the tool at all.
Any news? Same issue for me too.
Having a similar issue here. It's impossible for me to put timelion queries on a dashboard because it's incredibly easy to generate 1000s of search requests with some fairly simple (time-constrained) queries.
I've had to remove these from our dashboards as well. We're looking at a timeframe that should only be searching 24 shards, and yet timelion is hitting 1004 shards because it's searching our entire logging history. It's sad because we really did like the graphs that it creates.
+1
So when will this be fixed?
Unless I'm misunderstanding, the resolution of the similar issue I reported for Kibana (#14633) suggests that this should become less of an issue in ES 5.6.
As @pjcard already posted, as of ES 5.6 we no longer do time-based index pattern expansion, since ES now performs the linked optimization internally. This issue is therefore no longer relevant: all visualizations simply query an index pattern and let ES filter out shards outside the requested time range. I will close this, but please feel free to leave a comment if you still experience issues with Kibana 5.6 or later. Thanks for your patience while waiting for ES to provide the proper solution for this issue.
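For reference, this is roughly what the optimization looks like from the query side. A minimal sketch, assuming a `logstash-*` index pattern and an `@timestamp` field (both placeholders); the pre-filter phase kicks in automatically once a search targets more shards than `pre_filter_shard_size`:

```
# Search the whole wildcard pattern; shards whose @timestamp bounds
# cannot possibly match the range are skipped in a cheap pre-flight
# round and never execute the query phase.
GET logstash-*/_search?pre_filter_shard_size=128
{
  "query": {
    "range": {
      "@timestamp": {
        "gte": "now-15m",
        "lte": "now"
      }
    }
  }
}
```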
One thing that didn't occur to me when I made that comment: how does this relate to queue sizes? One issue we had specifically with this was that queries were failing because they hit shards outside the time range we specified. For instance, with one index per day and three shards per index, searching over a week's worth of data shouldn't hit any queue limits, but in fact we did hit them because of all the indices that were being queried needlessly. We did raise our queue size, against recommendation, because there appeared to be nothing else we could do. Any feedback you have on this @timroes would be greatly appreciated. If hitting these pointless indices still fills up the queues, then our queries are still being needlessly limited by data we're purposely trying to exclude, and this bug would still need addressing.
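For anyone else debugging the same symptom, one way to check whether searches are actually being rejected from the search queue is the `_cat` thread pool API (a sketch; column names per the `_cat` documentation):

```
# Per-node search thread pool stats: active, queued, and rejected counts.
GET _cat/thread_pool/search?v&h=node_name,name,active,queue,rejected
```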
Please feel free to leave a link to the discuss post here for cross reference, so other users can read up on that topic.
@bleskes I'm not sure what you mean: it must either still cause shard failures, in which case the bug is still valid and should be reopened, or it will not, in which case there is nothing further to discuss. Edit: I've just checked, and we're still on 5.5.3, so I can't help retest the scenario. It should be fairly trivial, though.
@pjcard The fast shard pre-filtering (see elastic/elasticsearch#25658) does not use the search queue and so is not subject to rejection. It also doesn't fill the queue and cause other searches to be rejected. On top of that, elastic/elasticsearch#25632 limits the number of concurrent shard-level search requests that can be sent per search, so that a single search can't dominate the cluster.
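As an illustration of that second point, the cap can also be set explicitly per request; the parameter name comes from the linked PR, and the index pattern and value here are only placeholders:

```
# Allow at most 5 concurrent shard-level requests for this one search.
GET logstash-*/_search?max_concurrent_shard_requests=5
{
  "query": {
    "match_all": {}
  }
}
```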
@clintongormley Ah, thank you for the clarification, much appreciated. I will keep pushing my guys to upgrade then, and it sounds like @timroes was spot on in resolving it.
For future readers - the Elasticsearch limit that causes the error mentioned in this ticket has been removed in 5.4.0 due to the changes @clintongormley mentioned: elastic/elasticsearch#24012
Kibana version:
Version: 5.1.1 Build 14566, Commit SHA 85a6f4d
Elasticsearch version:
5.1.1
Browser version:
Chrome 55.0.2883.87
Browser OS version:
Windows 10
Description of the problem including expected versus actual behavior:
When using Timelion, all indices matching the index pattern are queried, irrespective of the time range setting.
This is an issue because far more shards are hit than necessary, which wastes cluster resources and can exhaust the search queue, causing shard failures.
Steps to reproduce:
Select a short time range (e.g. the last 15 minutes) and run the following expression in Timelion:
.es( * )
Every index matching the pattern is then searched, regardless of the selected range.
My assumption would be that Timelion is not using the field stats API described here:
#4342
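For reference, that lookup looked roughly like the sketch below: the field stats API (since removed from Elasticsearch) could report, per index, whether a field's values fall inside a constraint, letting Kibana search only the indices overlapping the selected time range. Index pattern and field name are placeholders:

```
# Ask which concrete indices have @timestamp values inside the window.
POST logstash-*/_field_stats?level=indices
{
  "fields": [ "@timestamp" ],
  "index_constraints": {
    "@timestamp": {
      "max_value": { "gte": "now-15m" },
      "min_value": { "lte": "now" }
    }
  }
}
```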
Errors in browser console (if relevant):
An image showing an error produced by querying far more shards than should have been necessary for the short, 15-minute interval.
Provide logs and/or server output (if relevant):
The logs for the above request; note the number of shards hit and the value of the date histogram's extended bounds.
Please ignore the indexing strategy; it only serves to illustrate the more general issue.
index_search_slowlog.log.txt
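For context, the request behind that slow log entry has roughly the following shape (field name, interval, and bounds are illustrative): the extended bounds pin the histogram to the narrow dashboard time range, yet every index matching the pattern is still searched.

```
GET logstash-*/_search
{
  "size": 0,
  "query": {
    "range": {
      "@timestamp": {
        "gte": "2017-01-25T14:00:00Z",
        "lte": "2017-01-25T14:15:00Z"
      }
    }
  },
  "aggs": {
    "series": {
      "date_histogram": {
        "field": "@timestamp",
        "interval": "30s",
        "extended_bounds": {
          "min": "2017-01-25T14:00:00Z",
          "max": "2017-01-25T14:15:00Z"
        }
      }
    }
  }
}
```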
This issue was previously reported here:
elastic/timelion#195