Set shard count limit to unlimited (#24012)
Now that we have incremental reduce functions for topN and aggregations,
we can set the default for `action.search.shard_count.limit` to unlimited.
Users can still restrict the setting, but by default the search executes
across all shards matching the search request's index pattern.
s1monw committed Apr 10, 2017
1 parent 66ba2ea commit 63dfcc5
Showing 2 changed files with 8 additions and 7 deletions.
2 changes: 1 addition & 1 deletion core/src/main/java/org/elasticsearch/action/search/TransportSearchAction.java
@@ -60,7 +60,7 @@ public class TransportSearchAction extends HandledTransportAction<SearchRequest,

     /** The maximum number of shards for a single search request. */
     public static final Setting<Long> SHARD_COUNT_LIMIT_SETTING = Setting.longSetting(
-            "action.search.shard_count.limit", 1000L, 1L, Property.Dynamic, Property.NodeScope);
+            "action.search.shard_count.limit", Long.MAX_VALUE, 1L, Property.Dynamic, Property.NodeScope);

     private final ClusterService clusterService;
     private final SearchTransportService searchTransportService;
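The hunk above only changes the default value of the setting; the code that enforces the limit at search time is not part of this diff. As a rough illustration, a coordinating node can apply such a limit along the following lines (the method name and error message here are illustrative sketches, not the actual Elasticsearch source):

    // Illustrative sketch only -- not the actual TransportSearchAction code.
    // Rejects a search request that would fan out to more shards than the
    // dynamic `action.search.shard_count.limit` setting allows.
    static void checkShardCountLimit(int shardCount, long shardCountLimit) {
        if (shardCount > shardCountLimit) {
            throw new IllegalArgumentException("Trying to query " + shardCount
                + " shards, which is over the limit of " + shardCountLimit
                + ". Raise the [action.search.shard_count.limit] cluster setting to allow this.");
        }
    }

With the new `Long.MAX_VALUE` default, this check effectively never trips unless a user lowers the setting.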
13 changes: 7 additions & 6 deletions docs/reference/search/search.asciidoc
@@ -60,9 +60,10 @@ GET /_search?q=tag:wow
 // CONSOLE
 // TEST[setup:twitter]

-By default elasticsearch rejects search requests that would query more than
-1000 shards. The reason is that such large numbers of shards make the job of
-the coordinating node very CPU and memory intensive. It is usually a better
-idea to organize data in such a way that there are fewer larger shards. In
-case you would like to bypass this limit, which is discouraged, you can update
-the `action.search.shard_count.limit` cluster setting to a greater value.
+By default elasticsearch doesn't reject any search requests based on the number
+of shards the request hits. While elasticsearch will optimize the search execution
+on the coordinating node a large number of shards can have a significant impact
+CPU and memory wise. It is usually a better idea to organize data in such a way
+that there are fewer larger shards. In case you would like to configure a soft
+limit, you can update the `action.search.shard_count.limit` cluster setting in order
+to reject search requests that hit too many shards.
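As the new docs text notes, users who want the old safety net back can configure the soft limit themselves. For example, a standard cluster settings update restoring the previous default of 1000 (the value is purely illustrative), shown in the same console style the docs file uses:

[source,js]
--------------------------------------------------
PUT /_cluster/settings
{
    "transient" : {
        "action.search.shard_count.limit" : 1000
    }
}
--------------------------------------------------
// CONSOLE

Because the setting is registered with `Property.Dynamic`, such an update takes effect without a node restart.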
